# Docs Doctor

Validate ORC documentation against code reality using parallel subagent checks.

> **Note:** This is an ORC-specific skill. It validates ORC's documentation structure and should only be used in the ORC repository.
## Usage

```
/docs-doctor        # run all checks
/docs-doctor --fix  # auto-fix simple issues
```
## When to Use

- Before merging to master (recommended in CLAUDE.md)
- After adding/removing documentation files
- After changing CLI commands or flags
- After modifying the database schema (ER diagram)
- After changing the Makefile `bootstrap` target, the `orc hello` command, or the orc-first-run skill
## Check Categories

### 1. Structural Checks

- Internal markdown links in docs/**/*.md are valid
- No broken cross-references between documentation files

### 2. Lane Checks

- README.md contains no agent instructions (that's CLAUDE.md's job)
- CLAUDE.md contains no human onboarding content (that's README's job)
- Each doc stays in its designated purpose

### 3. Behavioral Checks (Shallow)

- CLI commands referenced anywhere (docs, skills, Makefile) exist (`orc <cmd> --help` succeeds)
- CLI flags used anywhere match the actual `--help` output for that command
- Referenced skills exist in .claude/skills/ or glue/skills/
### 4. Schema Checks

- Required directory exists: `docs/dev/`
- No unexpected subdirectories under `docs/` (only `dev/` allowed)
- Required files present per the `docs/README.md` schema
- Top-level `.md` files in `docs/` are allowed (listed in the schema)
- No orphan files outside the defined schema
### 5. Getting Started Coherence Check

- `docs/getting-started.md` describes a phased setup flow backed by real code
- Phase 2 claims (what `make bootstrap` does) match the Makefile `bootstrap` target
- Phase 3 claims (what `orc hello` does) match `internal/cli/hello_cmd.go` and `glue/skills/orc-first-run/SKILL.md`
- CLI flags used in examples and skills match the actual `--help` output (e.g., `--path`, not `--local-path`)
- The guide's description of what each phase creates/leaves behind matches code reality
### 6. ER Diagram Check

- The ER diagram in docs/architecture.md represents the core tables from internal/db/schema.sql

> **Note:** Diagrams are intentionally simplified. Validation checks that diagram tables are a subset of code reality, not an exact match.
## Architecture

This skill uses a fan-out pattern with haiku subagents for parallel validation:

```
Main Agent (opus)
├── Spawn: Structural Check Agent (haiku)
├── Spawn: Lane Check Agent (haiku)
├── Spawn: CLI Validation Agent (haiku)
├── Spawn: Schema Validation Agent (haiku)
├── Spawn: Getting Started Coherence Agent (haiku)
└── Spawn: ER Diagram Agent (haiku)
        ↓
  Collect findings
        ↓
  Synthesize report
        ↓
  Auto-fix OR escalate
```
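The real fan-out spawns haiku subagents via the Task tool, but the collect-and-synthesize shape is an ordinary parallel map. A minimal Python analogy (threads stand in for subagents; `run_checks_parallel` is a hypothetical name, not part of the skill):

```python
from concurrent.futures import ThreadPoolExecutor


def run_checks_parallel(checks):
    """Run each named check concurrently; collect each findings dict.

    `checks` maps a check name to a zero-arg callable that returns
    the structured findings dict its subagent would report.
    """
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn) for name, fn in checks.items()}
        # Collecting results here mirrors the "Collect findings" step above
        return {name: f.result() for name, f in futures.items()}
```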
## Flow

### Step 1: Spawn Structural Check Agent

Use the Task tool with `subagent_type: "Explore"` and `model: "haiku"`:

```
Prompt: "Check internal documentation links.

1. Use Glob to find all .md files in docs/
2. For each file, extract internal markdown links (links to other docs/**/*.md files)
3. Verify each linked file exists

Return findings as:
- broken_links: [{file, line, target}]
- status: 'pass' or 'fail'"
```
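The check the subagent performs can be sketched in Python. This is a hypothetical illustration, not the skill's actual implementation (it assumes links resolve relative to the file that contains them, and ignores external URLs and fragment-only anchors):

```python
import re
from pathlib import Path

# Markdown inline links: [text](target); any #fragment is stripped before checking
LINK_RE = re.compile(r"\[[^\]]*\]\(([^)#]+)(?:#[^)]*)?\)")


def check_internal_links(docs_root="docs"):
    """Return broken internal .md links in the shape the subagent reports."""
    broken = []
    for md_file in Path(docs_root).rglob("*.md"):
        for lineno, line in enumerate(md_file.read_text().splitlines(), start=1):
            for target in LINK_RE.findall(line):
                if target.startswith(("http://", "https://", "mailto:")):
                    continue  # external links are out of scope for this check
                if not target.endswith(".md"):
                    continue  # only doc-to-doc links are validated
                # Resolve relative to the file containing the link
                if not (md_file.parent / target).resolve().exists():
                    broken.append({"file": str(md_file),
                                   "line": lineno,
                                   "target": target})
    return {"broken_links": broken, "status": "fail" if broken else "pass"}
```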
### Step 2: Spawn Lane Check Agent

```
Prompt: "Check documentation stays in its designated lane.

1. Read README.md - should NOT contain:
   - References to CLAUDE.md behavior
   - Agent-specific instructions
   - 'Run orc prime' type commands
2. Read CLAUDE.md - should NOT contain:
   - Marketing language
   - Human onboarding ('Welcome to ORC!')
   - Installation instructions for humans

Return findings as:
- readme_violations: [list of lines/content]
- claudemd_violations: [list of lines/content]
- status: 'pass' or 'fail'"
```
### Step 3: Spawn CLI Validation Agent

```
Prompt: "Validate CLI commands referenced in documentation.

1. Scan all sources that reference orc CLI commands:
   - docs/**/*.md
   - glue/skills/**/*.md
   - .claude/skills/**/*.md
   - Makefile (the bootstrap target and any other targets that call orc)
2. Extract all 'orc <command>' references and any --flags used with them
3. For each unique command, run 'orc <command> --help' to verify it exists
4. For each flag, verify it appears in the command's --help output

Return findings as:
- invalid_commands: [{file, line, command}]
- invalid_flags: [{file, line, command, flag_used, valid_flags}]
- status: 'pass' or 'fail'"
```
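The extract-then-verify shape of this check can be sketched as follows. A hedged illustration only: the regex captures just the top-level subcommand, function names are hypothetical, and `verify_commands` requires an `orc` binary on PATH:

```python
import re
import subprocess
from pathlib import Path

ORC_CMD_RE = re.compile(r"\borc\s+([a-z][a-z-]*)\b")


def collect_orc_commands(paths):
    """Gather every `orc <command>` reference from the given files."""
    refs = []
    for path in paths:
        for lineno, line in enumerate(Path(path).read_text().splitlines(), 1):
            for cmd in ORC_CMD_RE.findall(line):
                refs.append({"file": str(path), "line": lineno, "command": cmd})
    return refs


def verify_commands(refs):
    """Run `orc <command> --help` once per unique command; collect failures."""
    invalid = []
    for cmd in sorted({r["command"] for r in refs}):
        proc = subprocess.run(["orc", cmd, "--help"], capture_output=True)
        if proc.returncode != 0:
            # A failing --help means the documented command doesn't exist
            invalid.extend(r for r in refs if r["command"] == cmd)
    return {"invalid_commands": invalid, "status": "fail" if invalid else "pass"}
```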
### Step 4: Spawn Schema Validation Agent

```
Prompt: "Validate the docs directory structure against the docs schema.

1. Read docs/README.md to understand the expected structure
2. Use Glob to find all .md files under docs/
3. Check:
   a. Required directory exists: docs/dev/
   b. No unexpected subdirectories under docs/ (only dev/ allowed)
   c. Top-level .md files in docs/ are listed in the schema
   d. All files marked 'required: yes' in the schema exist
   e. Every .md file found is listed in the schema (no orphans)

Return findings as:
- missing_dirs: [list of missing required directories]
- unexpected_dirs: [subdirectories not in schema]
- missing_required: [{dir, file}]
- orphan_files: [{dir, file}]
- status: 'pass' or 'fail'"
```
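Once the schema has been parsed out of docs/README.md, the comparison is set arithmetic. A sketch under that assumption (`check_docs_schema` is a hypothetical name; the schema is passed in as a pre-parsed set of paths relative to docs/):

```python
from pathlib import Path


def check_docs_schema(docs_root, schema_files, allowed_dirs=frozenset({"dev"})):
    """Compare files on disk against the schema parsed from docs/README.md.

    `schema_files` holds paths relative to docs/, e.g. {"README.md", "dev/testing.md"}.
    """
    root = Path(docs_root)
    found = {p.relative_to(root).as_posix() for p in root.rglob("*.md")}
    unexpected_dirs = [d.name for d in root.iterdir()
                       if d.is_dir() and d.name not in allowed_dirs]
    orphans = sorted(found - schema_files)   # on disk but not in schema
    missing = sorted(schema_files - found)   # in schema but not on disk
    ok = not (unexpected_dirs or orphans or missing)
    return {"unexpected_dirs": unexpected_dirs,
            "missing_required": missing,
            "orphan_files": orphans,
            "status": "pass" if ok else "fail"}
```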
### Step 5: Spawn Getting Started Coherence Agent

```
Prompt: "Validate that docs/getting-started.md accurately describes the bootstrap flow.

Cross-reference against three source-of-truth files:

1. Read the Makefile 'bootstrap' target (search for '^bootstrap:')
2. Read internal/cli/hello_cmd.go
3. Read glue/skills/orc-first-run/SKILL.md
4. Read docs/getting-started.md

Then check:

a. Phase 2 claims (what 'make bootstrap' does):
   - Every action claimed in the guide has a corresponding line in the Makefile target
   - No significant Makefile actions are missing from the guide
   - Artifacts claimed (e.g. FACT-001, REPO-001, directories) match what the Makefile creates
   - CLI flags in examples match actual flag names (check against 'orc repo create --help', etc.)
b. Phase 3 claims (what 'orc hello' does):
   - Guide's description matches what hello_cmd.go actually does (launches Claude with a directive)
   - Claims about what the first-run skill creates match glue/skills/orc-first-run/SKILL.md
   - Entity types mentioned (commission, workshop, workbench, shipment) match the skill's flow
c. Phase ordering:
   - Guide presents phases in correct dependency order
   - Each phase's 'what you have after' description is consistent with the next phase's starting assumptions
d. Cross-file flag consistency:
   - CLI flags used in getting-started.md match flags used in the orc-first-run SKILL.md
   - Both match actual --help output for those commands

Return findings as:
- phase2_mismatches: [{claim, reality}]
- phase3_mismatches: [{claim, reality}]
- ordering_issues: [list]
- flag_mismatches: [{file, flag_used, correct_flag}]
- status: 'pass' or 'fail'"
```
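The cross-file flag consistency check (item d) reduces to comparing sets of `--flag` tokens. A hypothetical sketch, assuming the valid flags have already been scraped from `--help` output (function names are illustrative, and the crude regex will also pick up `--flags` mentioned in prose):

```python
import re
from pathlib import Path

FLAG_RE = re.compile(r"(--[a-z][a-z-]*)")


def flags_used(path):
    """All --flag tokens that appear anywhere in a file."""
    return set(FLAG_RE.findall(Path(path).read_text()))


def flag_mismatches(doc_path, skill_path, valid_flags):
    """Flags referenced in either file that --help does not list."""
    mismatches = []
    for path in (doc_path, skill_path):
        for flag in sorted(flags_used(path) - valid_flags):
            mismatches.append({"file": str(path), "flag_used": flag})
    return mismatches
```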
### Step 6: Spawn ER Diagram Agent

```
Prompt: "Validate the ER diagram represents the database schema.

1. Read internal/db/schema.sql
2. Read docs/architecture.md, find the erDiagram mermaid block
3. Check that tables shown in the diagram exist in the schema (subset validation)
4. Check that relationships shown are accurate

Note: The diagram is intentionally simplified - not all tables need to be shown.

Return findings as:
- invalid_tables: [tables in diagram but not in schema]
- incorrect_relationships: [list]
- status: 'pass' or 'fail'"
```
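The subset validation in step 3 can be sketched like this. A hedged illustration only: it assumes entities in the erDiagram block are declared as `name { ... }`, handles only plain `CREATE TABLE` statements, and does not attempt relationship checking:

```python
import re

FENCE = "`" * 3  # spelled out so this example's own fence stays intact


def diagram_tables(architecture_md):
    """Table names declared as entities in the erDiagram mermaid block."""
    m = re.search(r"erDiagram(.*?)" + re.escape(FENCE), architecture_md, re.S)
    if not m:
        return set()
    # Entity declarations look like:  table_name {
    return set(re.findall(r"^\s*([A-Za-z_]\w*)\s*\{", m.group(1), re.M))


def schema_tables(schema_sql):
    """Table names from CREATE TABLE statements in schema.sql."""
    return set(re.findall(r"CREATE TABLE (?:IF NOT EXISTS )?(\w+)",
                          schema_sql, re.I))


def check_er_subset(architecture_md, schema_sql):
    """Subset validation: every diagram table must exist in the schema."""
    invalid = sorted(diagram_tables(architecture_md) - schema_tables(schema_sql))
    return {"invalid_tables": invalid, "status": "fail" if invalid else "pass"}
```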
### Step 7: Synthesize Findings

Collect all agent results and categorize them:

Auto-fixable:

- Simple broken links (update the path)

Requires judgment:

- Lane violations (content in the wrong file)
- Diagram mismatches (which is correct?)
- Invalid commands (are the docs wrong, or the code?)
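The split above can be sketched as a small routing function over the structured results the subagents return. Hypothetical names throughout; the real decision is made by the main agent, not a lookup table:

```python
def synthesize(results):
    """Split findings into auto-fixable vs needs-human, per the rules above.

    `results` maps check name -> the structured dict each subagent returned.
    """
    auto_fix, escalate = [], []
    for check, findings in results.items():
        if findings.get("status") == "pass":
            continue
        for kind, items in findings.items():
            if kind == "status":
                continue
            for item in items:
                # Only broken links are mechanical enough to fix automatically;
                # everything else is a judgment call.
                bucket = auto_fix if kind == "broken_links" else escalate
                entry = {"check": check, "kind": kind}
                if isinstance(item, dict):
                    entry.update(item)
                else:
                    entry["detail"] = item
                bucket.append(entry)
    return {"auto_fix": auto_fix, "escalate": escalate,
            "overall": "fail" if (auto_fix or escalate) else "pass"}
```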
### Step 8: Report and Act

If all checks pass:

```
Docs Doctor: All checks passed

Structural:      pass
Lanes:           pass
CLI:             pass
Schema:          pass
Getting Started: pass
ER Diagram:      pass
```

If issues are found:

```
Docs Doctor: Issues found

Structural:
- Broken link: common-workflows.md:45 -> nonexistent.md
Lanes:
- README.md:23 contains agent instruction
CLI:
- docs/troubleshooting.md references 'orc foobar' (doesn't exist)

Auto-fixing: [list what will be fixed]
Escalating: [list what needs human decision]
```
With the `--fix` flag:

- Auto-fix simple issues (fix paths)
- For judgment calls, run /orc-interview to get a human decision
## Enforcement

This is a soft gate:

- Runs on request (not automatically)
- Issues warnings but doesn't block commits
- Documented in CLAUDE.md as a pre-merge check
## Notes

- Haiku subagents are cheap and fast for parallel validation
- Each agent has a narrow focus and returns structured output
- The main agent orchestrates and synthesizes
- Simple fixes are auto-applied; complex decisions are escalated
- Diagrams use subset validation, not exact matching