# Writing Plans

## Overview
Write comprehensive implementation plans assuming the engineer has zero context for our codebase and questionable taste. Document everything they need to know: which files to touch for each task, the code to write, the tests, any docs they might need to check, and how to verify the result. Break the whole plan into bite-sized tasks. DRY. YAGNI. TDD. Suggest frequent commits.
## When to Use
Use this skill when you have a specification or requirements for a multi-step task, before you start writing any implementation code. It is essential for ensuring a structured and verifiable development process.
Assume they are a skilled developer who knows almost nothing about our toolset or problem domain, and who is not well versed in good test design.
Announce at start: "I'm using the writing-plans skill to create the implementation plan."
Context: This should be run in a dedicated worktree (created by brainstorming skill).
Output: Present the full plan in the chat using a Markdown code block.
## Bite-Sized Task Granularity
Each step is one action (2-5 minutes):
- "Write the failing test" - one step
- "Run it to make sure it fails" - one step
- "Implement the minimal code to make the test pass" - one step
- "Run the tests and make sure they pass" - one step
- "Suggest commit" - one step
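The steps above form one TDD cycle. A minimal sketch of what the plan's Step 1 and Step 3 might contain (the `slugify` function and its behavior are hypothetical, used only to make the cycle concrete):

```python
# Step 1: the failing test, written before any implementation exists.
# Running it at this point should fail with a NameError for `slugify`.
def test_slugify_replaces_spaces_with_hyphens():
    assert slugify("Hello World") == "hello-world"


# Step 3: the minimal implementation that makes the test pass --
# no extra features, no speculative options (YAGNI).
def slugify(text):
    return text.lower().replace(" ", "-")
```

Steps 2, 4, and 5 happen at the command line: run `pytest -v` before the implementation to see the expected failure, again after it to see the pass, then suggest a commit.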
## Plan Document Header
Every plan MUST start with this header:
```markdown
# [Feature Name] Implementation Plan

> **For OpenCode:** Use the `executing-plans` skill to implement this plan task-by-task.

**Goal:** [One sentence describing what this builds]

**Architecture:** [2-3 sentences about approach]

**Tech Stack:** [Key technologies/libraries]

---
```
## Task Structure
### Task N: [Component Name]
**Files:**
- Create: `exact/path/to/file.py`
- Modify: `exact/path/to/existing.py:123-145`
- Test: `tests/exact/path/to/test.py`
**Step 1: Write the failing test**

```python
def test_specific_behavior():
    result = function(input)
    assert result == expected
```

**Step 2: Run test to verify it fails**

Run: `pytest tests/path/test.py::test_name -v`
Expected: FAIL with "function not defined"

**Step 3: Write minimal implementation**

```python
def function(input):
    return expected
```

**Step 4: Run test to verify it passes**

Run: `pytest tests/path/test.py::test_name -v`
Expected: PASS

**Step 5: Suggest commit**

Suggest: `git add tests/path/test.py src/path/file.py && git commit -m "feat: add specific feature"`
## Remember

- Exact file paths always
- Complete code in plan (not "add validation")
- Exact commands with expected output
- Reference relevant skills with @ syntax
- DRY, YAGNI, TDD, suggest frequent commits

## Execution Handoff

After presenting the plan, offer execution choice:

**"Plan complete. Three execution options:**

**1. Subagent-Driven (this session)** - I dispatch a fresh subagent per task and review between tasks; fast iteration. (Recommended for complex, multi-step plans)

**2. Parallel Session (separate)** - Open a new session with executing-plans; batch execution with checkpoints. (You will need to provide the plan to the new session.)

**3. Direct Execution (this session)** - I implement the plan directly in this session. (Recommended for simple, linear, or low-risk plans)

**Which approach?"**

**If Subagent-Driven chosen:**
- Use the `subagent-driven-development` skill.
- Stay in this session.
- Fresh subagent per task + code review.

**If Parallel Session chosen:**
- Guide them to open a new session in the worktree.
- The new session should use the `executing-plans` skill.

**If Direct Execution chosen:**
- Implement the tasks directly.
- Use `todowrite` to track progress.
- Request review only if needed or at the end.