# AgentSkillsCN

## aap

Parse and review Agent Action Plan (AAP) documents to surface implementation issues, check structural consistency, and estimate lines of code.

### SKILL.md

---
name: aap
description: Parse and review Agent Action Plan (AAP) documents for implementation issues, structural consistency, and LoC estimation
allowed-tools: [Read, Glob, Grep, "Bash(python3:*)"]
---

# AAP — Agent Action Plan Reviewer

You are reviewing an Agent Action Plan (AAP) document — a large structured markdown file generated by Blitzy's agentic orchestration system. A tree-sitter-based structural analysis has been injected below.

## Arguments

$ARGUMENTS may contain any combination of:

| Flag | Effect |
|------|--------|
| `<file-path>` | Path to the AAP markdown file (positional, required) |
| `--loc-create N` | Override estimated LoC per CREATE file (default: 150) |
| `--loc-modify N` | Override estimated LoC per MODIFY file (default: 50) |
| `--verbose` | Include the full heading dump in parser output |
| `--focus AREA` | Focus the review on a specific area (e.g. frontend, backend, rules, scope, deps) |
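
For example, a hypothetical invocation, assuming the skill is exposed as a slash command (the path and values are illustrative):

```
/aap plans/compiler-aap.md --loc-create 200 --focus backend
```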

## Injected Data

```
!`python3 ~/.claude/skills/aap/scripts/aap_parser.py $ARGUMENTS`
```

## Error Handling

- If tree-sitter-markdown is unavailable: the parser automatically falls back to regex-based heading and table extraction (sketched after this list). Note in the output that tree-sitter analysis was unavailable and recommend `pip3 install tree-sitter tree-sitter-markdown`.
- If the file is not found: the parser will report the error. Confirm the path with the user.
- If the document is unusually structured: some AAPs use different section numbering. Adapt the analysis to the actual structure found.
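
A minimal sketch of what the regex fallback might look like; this is illustrative, not the actual `aap_parser.py` logic:

```python
import re

HEADING_RE = re.compile(r"^(#{1,6})\s+(.*)$")  # ATX headings, e.g. "## 0.2 Repo Scope"
TABLE_ROW_RE = re.compile(r"^\|(.+)\|\s*$")    # pipe-delimited table rows

def extract_structure(text: str):
    """Fallback extraction when tree-sitter-markdown is unavailable."""
    headings, rows = [], []
    for line in text.splitlines():
        if m := HEADING_RE.match(line):
            headings.append((len(m.group(1)), m.group(2).strip()))
        elif m := TABLE_ROW_RE.match(line):
            rows.append([cell.strip() for cell in m.group(1).split("|")])
    return headings, rows
```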

## Your Task

Using the structural analysis above, perform a comprehensive review of the AAP document with these 7 analysis sections:

### 1. Structure Validation

- Verify all expected AAP sections are present (0.1 Intent, 0.2 Repo Scope, 0.3 Dependencies, 0.4 Integration, 0.5 Implementation, 0.6 Scope, 0.7 Rules, 0.8 References)
- Check that heading numbering is consistent and sequential, with no gaps or duplicates (see the sketch after this list)
- Flag any missing standard sections
- Note the document's total size and complexity
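
One way to implement the gap/duplicate check, assuming headings carry numbers like "0.5 Implementation" (a sketch; the parser's own checks may differ):

```python
import re
from collections import Counter

NUM_RE = re.compile(r"^(\d+(?:\.\d+)+)\s")

def check_numbering(headings):
    """Flag duplicate and non-sequential section numbers among heading titles."""
    nums = [m.group(1) for h in headings if (m := NUM_RE.match(h))]
    dupes = sorted(n for n, c in Counter(nums).items() if c > 1)
    # Gap check for two-level numbers under one major prefix (e.g. 0.1 .. 0.8)
    pairs = sorted({tuple(map(int, n.split("."))) for n in nums if n.count(".") == 1})
    gaps = [f"{a}.{b + 1}" for (a, b), nxt in zip(pairs, pairs[1:]) if nxt != (a, b + 1)]
    return dupes, gaps
```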

### 2. Internal Consistency

- Cross-reference files listed in 0.2 (Repo Scope) against 0.5 (Implementation Plan) and flag mismatches (see the sketch after this list)
- Verify scope items in 0.6 align with the file inventory: are scoped features reflected in file creation?
- Check that dependencies in 0.3 match what is referenced in the rules (0.7) and implementation (0.5)
- Verify integration points in 0.4 cover all module boundaries implied by the file structure
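
A sketch of the 0.2 vs. 0.5 cross-check, assuming both file lists have already been extracted as plain paths:

```python
def cross_reference(repo_scope_files, impl_plan_files):
    """Set-difference the 0.2 inventory against files mentioned in 0.5."""
    scope, impl = set(repo_scope_files), set(impl_plan_files)
    return {
        "in_scope_but_never_implemented": sorted(scope - impl),
        "implemented_but_not_in_scope": sorted(impl - scope),
    }

# cross_reference(["src/a.py", "src/b.py"], ["src/b.py", "src/c.py"])
# -> {'in_scope_but_never_implemented': ['src/a.py'],
#     'implemented_but_not_in_scope': ['src/c.py']}
```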

### 3. Completeness Analysis

- Confirm every listed file has a stated purpose (see the sketch after this list)
- Check that dependency versions are specified where applicable
- Verify integration points cover all inter-module boundaries
- Flag any files referenced in the implementation plan (0.5) that lack entries in the file tables (0.2)
- Check that test coverage exists for each major component
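
For the first bullet, a sketch over the parser's extracted table rows; the column positions here are assumptions, not the actual table layout:

```python
PLACEHOLDER_CELLS = {"", "-", "n/a", "tbd", "todo", "pending"}

def missing_purposes(file_rows, path_col=0, purpose_col=-1):
    """Return paths of files whose purpose cell is empty or a placeholder.

    Assumes each row is a list of cells with the path first and the
    purpose last; adjust the column indices to the real table layout.
    """
    return [row[path_col] for row in file_rows
            if row[purpose_col].strip().lower() in PLACEHOLDER_CELLS]
```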

### 4. Feasibility Assessment

- Identify unrealistic scope, e.g. too many files for the estimated effort (simple heuristics are sketched after this list)
- Flag missing infrastructure that would be needed (build systems, CI, deployment)
- Check for dependency conflicts or version incompatibilities
- Assess whether the checkpoint/validation strategy is realistic
- Evaluate architectural complexity against the stated timeline constraints
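
Cheap numeric heuristics can back up the first and fourth bullets; the thresholds below are illustrative assumptions, not calibrated values:

```python
def feasibility_flags(total_loc, n_files, n_checkpoints):
    """Sanity checks on scope vs. effort (illustrative thresholds only)."""
    flags = []
    if n_files and total_loc / n_files > 400:
        flags.append("[WARNING] averages >400 LoC per file; the file list may be underscoped")
    if n_checkpoints and total_loc / n_checkpoints > 5000:
        flags.append("[WARNING] >5000 LoC per checkpoint; validation gates are too coarse")
    return flags
```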

### 5. Ambiguity Detection

- Flag vague requirements that lack concrete verification criteria
- Identify missing acceptance criteria for features
- Find "TBD", "TODO", "pending", or other placeholder text that needs resolution (see the sketch after this list)
- Flag any scope items that are ambiguous about inclusion/exclusion
- Highlight rules (0.7) that lack measurable enforcement criteria
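
The placeholder scan is mechanical; a minimal sketch:

```python
import re

PLACEHOLDER_RE = re.compile(r"\b(TBD|TODO|FIXME|pending|placeholder)\b", re.IGNORECASE)

def find_placeholders(text: str):
    """Yield (line_number, line) pairs containing unresolved placeholder text."""
    # "pending" also matches ordinary prose; treat hits as candidates, not findings.
    for lineno, line in enumerate(text.splitlines(), start=1):
        if PLACEHOLDER_RE.search(line):
            yield lineno, line.strip()
```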

### 6. LoC Estimate

- Present the file-count-based estimate from the parser output
- Apply semantic refinement, adjusting estimates by layer complexity:
  - Infrastructure/common: typically lower LoC per file
  - Frontend (parser, sema): typically higher LoC per file
  - Backend (codegen, assembler): typically the highest LoC per file
  - Tests: moderate LoC per file
  - Config/docs: the lowest LoC per file
- Provide low/mid/high range estimates (one way to compute the range is sketched after this list)
- Compare the total against the stated scope to assess feasibility
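
A sketch of the range computation; the layer factors and the ±30% spread are assumptions to be tuned, not values from the parser:

```python
# Illustrative complexity multipliers per layer (assumptions, not parser output)
LAYER_FACTORS = {
    "config/docs": 0.3,
    "infrastructure": 0.7,
    "tests": 0.8,
    "frontend": 1.3,
    "backend": 1.6,
}

def loc_estimate(counts, loc_create=150, loc_modify=50, spread=0.3):
    """counts maps layer -> (n_create, n_modify); returns (low, mid, high)."""
    mid = sum(LAYER_FACTORS.get(layer, 1.0) * (c * loc_create + m * loc_modify)
              for layer, (c, m) in counts.items())
    return round(mid * (1 - spread)), round(mid), round(mid * (1 + spread))

# loc_estimate({"backend": (10, 4), "tests": (6, 0)}) -> (2408, 3440, 4472)
```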

### 7. Risk Summary

Produce a risk table with the top 3–5 risks:

| # | Risk | Severity | Impact | Recommendation |
|---|------|----------|--------|----------------|
| 1 | ... | High/Med/Low | ... | ... |

## Output Format

Structure your review as a markdown document with these sections:

```markdown
## 1. Structure Validation
[findings as bullets]

## 2. Internal Consistency
[findings as bullets with specific section references]

## 3. Completeness Analysis
[findings as bullets]

## 4. Feasibility Assessment
[findings as bullets]

## 5. Ambiguity Detection
[findings as bullets]

## 6. LoC Estimate
[table: Layer | CREATE | MODIFY | Low | Mid | High]
[total estimate range]

## 7. Risk Summary
[risk table]

## Verdict
[1-2 paragraph overall assessment: is this AAP ready for implementation?
What are the critical items to address before starting?]
```

## Focus Mode

If --focus AREA was specified, concentrate the review on that area:

- `frontend` — Focus on frontend pipeline files; parser/lexer/sema completeness
- `backend` — Focus on backend architecture; codegen, assembler, linker coverage
- `rules` — Deep-dive into the 0.7 rules; check enforceability and completeness
- `scope` — Analyze scope boundaries; in/out-of-scope clarity
- `deps` — Dependency analysis; version compatibility, missing deps
- `integration` — Focus on integration points, module boundaries, data flow

Still produce all 7 sections, but weight the analysis toward the focus area.
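
If it helps to make the weighting explicit, a hypothetical mapping from focus areas to the AAP sections that deserve the deepest scrutiny (the section assignments are assumptions, not from the parser):

```python
# Hypothetical focus-area -> AAP-section weighting
FOCUS_SECTIONS = {
    "frontend":    ["0.2", "0.5"],
    "backend":     ["0.2", "0.4", "0.5"],
    "rules":       ["0.7"],
    "scope":       ["0.6"],
    "deps":        ["0.3", "0.7"],
    "integration": ["0.4"],
}
```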

## Formatting Rules

- Use markdown headers and bullet points
- Reference specific section numbers (e.g., "§0.2.1") when citing issues
- Keep findings actionable: each bullet should identify a specific problem and suggest a fix
- Mark each finding with a severity indicator: [CRITICAL], [WARNING], or [INFO]
- Be direct and specific, not generic