Every execution produces a SUMMARY.md for quick human scanning without reading full outputs.
Each prompt gets its own folder in .prompts/ with its output artifacts, enabling clear provenance and chain detection.
</objective>
<quick_start> <workflow>
- Intake: Determine purpose (Do/Plan/Research/Refine), gather requirements
- Chain detection: Check for existing research/plan files to reference
- Generate: Create prompt using purpose-specific patterns
- Save: Create folder in `.prompts/{number}-{topic}-{purpose}/`
- Present: Show decision tree for running
- Execute: Run prompt(s) with dependency-aware execution engine
- Summarize: Create SUMMARY.md for human scanning </workflow>
<folder_structure>
.prompts/
├── 001-auth-research/
│   ├── completed/
│   │   └── 001-auth-research.md    # Prompt (archived after run)
│   ├── auth-research.md            # Full output (XML for Claude)
│   └── SUMMARY.md                  # Executive summary (markdown for human)
├── 002-auth-plan/
│   ├── completed/
│   │   └── 002-auth-plan.md
│   ├── auth-plan.md
│   └── SUMMARY.md
├── 003-auth-implement/
│   ├── completed/
│   │   └── 003-auth-implement.md
│   └── SUMMARY.md                  # Do prompts create code elsewhere
├── 004-auth-research-refine/
│   ├── completed/
│   │   └── 004-auth-research-refine.md
│   ├── archive/
│   │   └── auth-research-v1.md     # Previous version
│   └── SUMMARY.md
</folder_structure> </quick_start>
<context>
Prompts directory: !`[ -d ./.prompts ] && echo "exists" || echo "missing"`
Existing research/plans: !`find ./.prompts -name "*-research.md" -o -name "*-plan.md" 2>/dev/null | head -10`
Next prompt number: !`ls -d ./.prompts/*/ 2>/dev/null | wc -l | xargs -I {} expr {} + 1`
</context>
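Note the `expr` pipeline above reports an unpadded integer; a zero-padded variant, as a minimal sketch assuming the three-digit numbering shown in the folder structure:

```bash
# count existing prompt folders, then zero-pad the next number (001, 002, ...)
n=$(ls -d ./.prompts/*/ 2>/dev/null | wc -l)
printf '%03d\n' "$((n + 1))"
```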
<automated_workflow>
<step_0_intake_gate>
<title>Adaptive Requirements Gathering</title><critical_first_action> BEFORE analyzing anything, check if context was provided.
IF no context provided (skill invoked without description): → IMMEDIATELY use AskUserQuestion with:
- header: "Purpose"
- question: "What is the purpose of this prompt?"
- options:
  - "Do" - Execute a task, produce an artifact
  - "Plan" - Create an approach, roadmap, or strategy
  - "Research" - Gather information or understand something
  - "Refine" - Improve an existing research or plan output
After selection, ask: "Describe what you want to accomplish" (they select "Other" to provide free text).
IF context was provided: → Check if purpose is inferable from keywords:
- `implement`, `build`, `create`, `fix`, `add`, `refactor` → Do
- `plan`, `roadmap`, `approach`, `strategy`, `decide`, `phases` → Plan
- `research`, `understand`, `learn`, `gather`, `analyze`, `explore` → Research
- `refine`, `improve`, `deepen`, `expand`, `iterate`, `update` → Refine
→ If unclear, ask the Purpose question above as first contextual question
→ If clear, proceed to adaptive_analysis with inferred purpose (routing sketched below) </critical_first_action>
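A minimal sketch of this keyword routing, assuming the lowercased task description is in `$desc` (the variable name is illustrative):

```bash
# first matching branch wins; empty purpose falls back to asking the user
case "$desc" in
  *implement*|*build*|*create*|*fix*|*add*|*refactor*)          purpose="Do" ;;
  *plan*|*roadmap*|*approach*|*strategy*|*decide*|*phases*)     purpose="Plan" ;;
  *research*|*understand*|*learn*|*gather*|*analyze*|*explore*) purpose="Research" ;;
  *refine*|*improve*|*deepen*|*expand*|*iterate*|*update*)      purpose="Refine" ;;
  *) purpose="" ;;   # unclear -> ask the Purpose question
esac
```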
<adaptive_analysis> Extract and infer:
- Purpose: Do, Plan, Research, or Refine
- Topic identifier: Kebab-case identifier for file naming (e.g., `auth`, `stripe-payments`)
- Complexity: Simple vs complex (affects prompt depth)
- Prompt structure: Single vs multiple prompts
- Target (Refine only): Which existing output to improve
If topic identifier not obvious, ask:
- header: "Topic"
- question: "What topic/feature is this for? (used for file naming)"
- Let user provide via "Other" option
- Enforce kebab-case (convert spaces/underscores to hyphens), as sketched below
For Refine purpose, also identify target output from .prompts/*/ to improve.
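A minimal normalization sketch, assuming the raw topic string is in `$topic`:

```bash
# lowercase, turn spaces/underscores into hyphens, squeeze repeated hyphens
printf '%s\n' "$topic" | tr '[:upper:]' '[:lower:]' | tr ' _' '--' | tr -s '-'
# e.g. "Stripe Payments" -> "stripe-payments"
```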
</adaptive_analysis>
<chain_detection>
Scan .prompts/*/ for existing *-research.md and *-plan.md files.
If found:
- List them: "Found existing files: auth-research.md (in 001-auth-research/), stripe-plan.md (in 005-stripe-plan/)"
- Use AskUserQuestion:
  - header: "Reference"
  - question: "Should this prompt reference any existing research or plans?"
  - options: List found files + "None"
  - multiSelect: true
Match by topic keyword when possible (e.g., "auth plan" → suggest auth-research.md). </chain_detection>
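One way to surface same-topic candidates for that suggestion, as a sketch assuming the kebab-case topic is in `$topic`:

```bash
# list existing research/plan outputs whose filename matches the topic
find ./.prompts -name "${topic}-research.md" -o -name "${topic}-plan.md" 2>/dev/null
```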
<contextual_questioning> Generate 2-4 questions using AskUserQuestion based on purpose and gaps.
Load questions from: references/question-bank.md
Route by purpose:
- Do → artifact type, scope, approach
- Plan → plan purpose, format, constraints
- Research → depth, sources, output format
- Refine → target selection, feedback, preservation </contextual_questioning>
<decision_gate> After receiving answers, present decision gate using AskUserQuestion:
- header: "Ready"
- question: "Ready to create the prompt?"
- options:
  - "Proceed" - Create the prompt with current context
  - "Ask more questions" - I have more details to clarify
  - "Let me add context" - I want to provide additional information
Loop until "Proceed" selected. </decision_gate>
<finalization> After "Proceed" selected, state confirmation:
"Creating a {purpose} prompt for: {topic}
Folder: .prompts/{number}-{topic}-{purpose}/
References: {list any chained files}"
Then proceed to generation. </finalization> </step_0_intake_gate>
<step_1_generate>
<title>Generate Prompt</title>Load purpose-specific patterns:
- Do: references/do-patterns.md
- Plan: references/plan-patterns.md
- Research: references/research-patterns.md
- Refine: references/refine-patterns.md
Load intelligence rules: references/intelligence-rules.md
<prompt_structure> All generated prompts include:
- Objective: What to accomplish, why it matters
- Context: Referenced files (@), dynamic context (!)
- Requirements: Specific instructions for the task
- Output specification: Where to save, what structure
- Metadata requirements: For research/plan outputs, specify XML metadata structure
- SUMMARY.md requirement: All prompts must create a SUMMARY.md file
- Success criteria: How to know it worked
For Research and Plan prompts, output must include:
- `<confidence>` - How confident in findings
- `<dependencies>` - What's needed to proceed
- `<open_questions>` - What remains uncertain
- `<assumptions>` - What was assumed
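For illustration only, these tags might look like this in a research output (the values are hypothetical, echoing the auth example later in this document):

```xml
<confidence>High for library choice; medium for cookie strategy</confidence>
<dependencies>None - research is self-contained</dependencies>
<open_questions>Token expiry duration needs a product decision</open_questions>
<assumptions>Assumed a Node.js backend with TypeScript</assumptions>
```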
All prompts must create SUMMARY.md with:
- One-liner - Substantive description of outcome
- Version - v1 or iteration info
- Key Findings - Actionable takeaways
- Files Created - (Do prompts only)
- Decisions Needed - What requires user input
- Blockers - External impediments
- Next Step - Concrete forward action </prompt_structure>
<file_creation>
- Create folder: `.prompts/{number}-{topic}-{purpose}/`
- Create `completed/` subfolder
- Write prompt to: `.prompts/{number}-{topic}-{purpose}/{number}-{topic}-{purpose}.md`
- Prompt instructs output to: `.prompts/{number}-{topic}-{purpose}/{topic}-{purpose}.md` </file_creation> </step_1_generate>
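As a concrete shell sketch of the step-1 layout (number and topic are hypothetical):

```bash
dir=".prompts/005-stripe-payments-research"   # {number}-{topic}-{purpose}
mkdir -p "$dir/completed"
# prompt file written to:   $dir/005-stripe-payments-research.md
# instructed output goes to: $dir/stripe-payments-research.md
```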
<step_2_present>
<title>Present Decision Tree</title>After saving prompt(s), present inline (not AskUserQuestion):
<single_prompt_presentation>
Prompt created: .prompts/{number}-{topic}-{purpose}/{number}-{topic}-{purpose}.md
What's next?
1. Run prompt now
2. Review/edit prompt first
3. Save for later
4. Other
Choose (1-4): _
</single_prompt_presentation>
<multi_prompt_presentation>
Prompts created:
- .prompts/001-auth-research/001-auth-research.md
- .prompts/002-auth-plan/002-auth-plan.md
- .prompts/003-auth-implement/003-auth-implement.md

Detected execution order: Sequential (002 references 001 output, 003 references 002 output)

What's next?
1. Run all prompts (sequential)
2. Review/edit prompts first
3. Save for later
4. Other
Choose (1-4): _
</multi_prompt_presentation> </step_2_present>
<step_3_execute>
<title>Execution Engine</title><execution_modes> <single_prompt> Straightforward execution of one prompt.
- Read prompt file contents
- Spawn Task agent with subagent_type="general-purpose"
- Include in task prompt:
  - The complete prompt contents
  - Output location: `.prompts/{number}-{topic}-{purpose}/{topic}-{purpose}.md`
- Wait for completion
- Validate output (see validation section)
- Archive prompt to `completed/` subfolder
- Report results with next-step options </single_prompt>
<sequential_execution> For chained prompts where each depends on previous output.
- Build execution queue from dependency order
- For each prompt in queue:
  a. Read prompt file
  b. Spawn Task agent
  c. Wait for completion
  d. Validate output
  e. If validation fails → stop, report failure, offer recovery options
  f. If success → archive prompt, continue to next
- Report consolidated results
<progress_reporting> Show progress during execution:
Executing 1/3: 001-auth-research... ✓
Executing 2/3: 002-auth-plan... ✓
Executing 3/3: 003-auth-implement... (running)
</progress_reporting> </sequential_execution>
<parallel_execution> For independent prompts with no dependencies.
- Read all prompt files
- CRITICAL: Spawn ALL Task agents in a SINGLE message - this is required for true parallel execution
- Each task includes its output location
- Wait for all to complete
- Validate all outputs
- Archive all prompts
- Report consolidated results (successes and failures)
<failure_handling> Unlike sequential, parallel continues even if some fail:
- Collect all results
- Archive successful prompts
- Report failures with details
- Offer to retry failed prompts </failure_handling> </parallel_execution>
<mixed_dependencies> For complex DAGs (e.g., two parallel research → one plan).
- Analyze dependency graph from @ references
- Group into execution layers:
  - Layer 1: No dependencies (run parallel)
  - Layer 2: Depends only on layer 1 (run after layer 1 completes)
  - Layer 3: Depends on layer 2, etc.
- Execute each layer:
  - Parallel within layer
  - Sequential between layers
- Stop if any dependency fails (downstream prompts can't run)
<dependency_detection> <automatic_detection> Scan prompt contents for @ references to determine dependencies:
- Parse each prompt for `@.prompts/{number}-{topic}/` patterns
- Build dependency graph
- Detect cycles (error if found)
- Determine execution order (see the sketch after this section)
<inference_rules> If no explicit @ references found, infer from purpose:
- Research prompts: No dependencies (can parallel)
- Plan prompts: Depend on same-topic research
- Do prompts: Depend on same-topic plan
Override with explicit references when present. </inference_rules> </automatic_detection>
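A minimal sketch of the scan-and-order step using coreutils `tsort`, which emits prompts in dependency order and errors out on cycles (paths assume the folder layout above; prompts with no @ references simply produce no pairs):

```bash
for p in .prompts/*/[0-9]*.md; do
  name=$(basename "${p%.md}")
  # one "dep name" pair per @ reference: dep must run before name
  grep -o '@\.prompts/[0-9]\{3\}-[a-z-]*' "$p" 2>/dev/null \
    | sed 's|@\.prompts/||' \
    | while read -r dep; do echo "$dep $name"; done
done | tsort   # prints execution order; "input contains a loop" on cycles
```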
<missing_dependencies> If a prompt references output that doesn't exist:
- Check if it's another prompt in this session (will be created)
- Check if it exists in `.prompts/*/` (already completed)
- If truly missing:
  - Warn user: "002-auth-plan references auth-research.md which doesn't exist"
  - Offer: Create the missing research prompt first? / Continue anyway? / Cancel? </missing_dependencies> </dependency_detection> </mixed_dependencies> </execution_modes>
<validation> <output_validation> Check each output after completion:
- File exists: Check output file was created
- Not empty: File has content (> 100 chars)
- Metadata present (for research/plan): Check for required XML tags:
  - `<confidence>`
  - `<dependencies>`
  - `<open_questions>`
  - `<assumptions>`
- SUMMARY.md exists: Check SUMMARY.md was created
- SUMMARY.md complete: Has required sections (Key Findings, Decisions Needed, Blockers, Next Step)
- One-liner is substantive: Not generic like "Research completed"
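A minimal shell sketch of these checks (paths are hypothetical; a real run would substitute the current prompt's folder):

```bash
out=".prompts/001-auth-research/auth-research.md"
sum=".prompts/001-auth-research/SUMMARY.md"
# file exists and has more than 100 chars
[ -f "$out" ] && [ "$(wc -c < "$out")" -gt 100 ] || echo "FAIL: output missing or too short"
# required metadata tags for research/plan outputs
for tag in confidence dependencies open_questions assumptions; do
  grep -q "<$tag>" "$out" || echo "FAIL: missing <$tag>"
done
# required SUMMARY.md sections
for section in "Key Findings" "Decisions Needed" "Blockers" "Next Step"; do
  grep -q "$section" "$sum" || echo "FAIL: SUMMARY.md missing '$section'"
done
```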
<validation_failure> If validation fails:
- Report what's missing
- Offer options:
  - Retry the prompt
  - Continue anyway (for non-critical issues)
  - Stop and investigate </validation_failure> </output_validation> </validation>
<failure_handling> <sequential_failure> Stop the chain immediately:
✗ Failed at 2/3: 002-auth-plan

Completed:
- 001-auth-research ✓ (archived)
Failed:
- 002-auth-plan: Output file not created
Not started:
- 003-auth-implement

What's next?
1. Retry 002-auth-plan
2. View error details
3. Stop here (keep completed work)
4. Other
</sequential_failure>
<parallel_failure> Continue others, report all results:
Parallel execution completed with errors:

✓ 001-api-research (archived)
✗ 002-db-research: Validation failed - missing <confidence> tag
✓ 003-ui-research (archived)

What's next?
1. Retry failed prompt (002)
2. View error details
3. Continue without 002
4. Other
</parallel_failure> </failure_handling>
<archiving> <archive_timing>
- **Sequential**: Archive each prompt immediately after successful completion - provides clear state if execution stops mid-chain
- **Parallel**: Archive all at end after collecting results - keeps prompts available for potential retry </archive_timing>
<archive_operation> Move prompt file to completed subfolder:
mv .prompts/{number}-{topic}-{purpose}/{number}-{topic}-{purpose}.md \
.prompts/{number}-{topic}-{purpose}/completed/
Output file stays in place (not moved). </archive_operation> </archiving>
<result_presentation> <single_result>
✓ Executed: 001-auth-research
✓ Created: .prompts/001-auth-research/SUMMARY.md

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
# Auth Research Summary

**JWT with jose library and httpOnly cookies recommended**

## Key Findings
• jose outperforms jsonwebtoken with better TypeScript support
• httpOnly cookies required (localStorage is XSS vulnerable)
• Refresh rotation is OWASP standard

## Decisions Needed
None - ready for planning

## Blockers
None

## Next Step
Create auth-plan.md
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

What's next?
1. Create planning prompt (auth-plan)
2. View full research output
3. Done
4. Other
Display the actual SUMMARY.md content inline so the user sees findings without opening files. </single_result>
<chain_result>
✓ Chain completed: auth workflow

Results:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
001-auth-research
**JWT with jose library and httpOnly cookies recommended**
Decisions: None • Blockers: None

002-auth-plan
**4-phase implementation: types → JWT core → refresh → tests**
Decisions: Approve 15-min token expiry • Blockers: None

003-auth-implement
**JWT middleware complete with 6 files created**
Decisions: Review before Phase 2 • Blockers: None
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

All prompts archived. Full summaries in .prompts/*/SUMMARY.md

What's next?
1. Review implementation
2. Run tests
3. Create new prompt chain
4. Other
For chains, show condensed one-liner from each SUMMARY.md with decisions/blockers flagged. </chain_result> </result_presentation>
<special_cases> <re_running_completed> If user wants to re-run an already-completed prompt:
- Check if prompt is in `completed/` subfolder
- Move it back to parent folder
- Optionally backup existing output: `{output}.bak`
- Execute normally (see the sketch below) </re_running_completed>
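A minimal shell sketch of the re-run preparation (folder and filenames are hypothetical):

```bash
d=".prompts/001-auth-research"
mv "$d/completed/001-auth-research.md" "$d/"   # restore prompt to parent folder
# optional backup of the existing output before re-running
[ -f "$d/auth-research.md" ] && cp "$d/auth-research.md" "$d/auth-research.md.bak"
```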
<output_conflicts> If output file already exists:
- For re-runs: Backup existing → `{filename}.bak`
- For new runs: Should not happen (unique numbering)
- If conflict detected: Ask user - Overwrite? / Rename? / Cancel? </output_conflicts>
<commit_handling> After successful execution:
- Do NOT auto-commit (user controls git workflow)
- Mention what files were created/modified
- User can commit when ready
Exception: If user explicitly requests commit, stage and commit:
- Output files created
- Prompts archived
- Any implementation changes (for Do prompts) </commit_handling>
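If the user does request a commit, a minimal sketch (path and message are illustrative):

```bash
git add .prompts/001-auth-research/   # output file plus archived prompt
git commit -m "Add auth research output; archive prompt 001"
```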
<recursive_prompts> If a prompt's output includes instructions to create more prompts:
- This is advanced usage - don't auto-detect
- Present the output to user
- User can invoke skill again to create follow-up prompts
- Maintains user control over prompt creation </recursive_prompts> </special_cases> </step_3_execute>
</automated_workflow>
<reference_guides> Prompt patterns by purpose:
- references/do-patterns.md - Execution prompts + output structure
- references/plan-patterns.md - Planning prompts + plan.md structure
- references/research-patterns.md - Research prompts + research.md structure
- references/refine-patterns.md - Iteration prompts + versioning
Shared templates:
- references/summary-template.md - SUMMARY.md structure and field requirements
- references/metadata-guidelines.md - Confidence, dependencies, open questions, assumptions
Supporting references:
- references/question-bank.md - Intake questions by purpose
- references/intelligence-rules.md - Extended thinking, parallel tools, depth decisions </reference_guides>
<success_criteria> Prompt Creation:
- Intake gate completed with purpose and topic identified
- Chain detection performed, relevant files referenced
- Prompt generated with correct structure for purpose
- Folder created in `.prompts/` with correct naming
- Output file location specified in prompt
- SUMMARY.md requirement included in prompt
- Metadata requirements included for Research/Plan outputs
- Quality controls included for Research outputs (verification checklist, QA, pre-submission)
- Streaming write instructions included for Research outputs
- Decision tree presented
Execution (if user chooses to run):
- Dependencies correctly detected and ordered
- Prompts executed in correct order (sequential/parallel/mixed)
- Output validated after each completion
- SUMMARY.md created with all required sections
- One-liner is substantive (not generic)
- Failed prompts handled gracefully with recovery options
- Successful prompts archived to `completed/` subfolder
- SUMMARY.md displayed inline in results
- Results presented with decisions/blockers flagged
Research Quality (for Research prompts):
- Verification checklist completed
- Quality report distinguishes verified from assumed claims
- Sources consulted listed with URLs
- Confidence levels assigned to findings
- Critical claims verified with official documentation </success_criteria>