# Task Decomposer Skill

## Overview

The task_decomposer skill breaks approved specifications or implementation plans into atomic, Claude Code-executable tasks for hackathon development. It translates high-level architectural plans into concrete, testable, and sequenced task lists that follow Spec-Driven Development (SDD) principles.
## Skill Metadata

- **Name**: task_decomposer
- **Description**: Breaks approved specifications or plans into atomic, Claude Code-executable tasks
- **Expected Input**: High-level execution or implementation plan
- **Expected Output**: Ordered list of atomic tasks with dependencies and acceptance criteria
- **Usage Example**: Used to convert a Phase I plan into actionable tasks for implementation
## Constraints

- No implementation code generation
- No dependencies on external task management systems
- Single-file implementation
- Output must conform to the Spec-Kit Plus tasks.md format
## Decomposition Rules

### Rule 1: Atomic Task Definition

Each task must represent a single, independently executable unit of work:

**Atomic Task Criteria**:

- ✓ One clear deliverable or outcome
- ✓ Can be completed in one Claude Code session (typically 15-30 min)
- ✓ Has specific, testable acceptance criteria
- ✓ Does not depend on future tasks (only past completed tasks)
- ✗ Multi-step workflows without clear boundaries
- ✗ Vague objectives without measurable outcomes
- ✗ Dependencies on unscheduled or incomplete work
**Example Atomic Task**:

```text
Subject: Create Task class with basic attributes
Description: Implement a Task class with id, title, description, and completed
status. Include __init__ method and __str__ representation for debugging.
Store tasks in-memory using a Python dictionary.
Acceptance Criteria:
- Task class can be instantiated with required parameters
- Task attributes are accessible and modifiable
- __str__ returns readable task representation
- Can store multiple tasks in dictionary by id
```
**Non-Atomic Task (Bad Example)**:

```text
Subject: Build todo app
Description: Create a complete todo application
```
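The atomic example above can be sketched in Python. This is a minimal illustration of the deliverable that task describes, not output the skill itself generates; attribute names follow the example's wording:

```python
class Task:
    """A single todo item, stored in-memory per the atomic example above."""

    def __init__(self, id: int, title: str, description: str = "",
                 completed: bool = False):
        self.id = id
        self.title = title
        self.description = description
        self.completed = completed

    def __str__(self) -> str:
        # Readable representation for debugging, per the acceptance criteria.
        status = "x" if self.completed else " "
        return f"[{status}] #{self.id} {self.title}"


# In-memory storage keyed by task id, as the acceptance criteria require.
tasks: dict[int, Task] = {}
tasks[1] = Task(1, "Buy groceries")
print(tasks[1])  # [ ] #1 Buy groceries
```

Note how every acceptance criterion maps to something directly checkable: instantiation, attribute access, the `__str__` format, and dictionary storage by id.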
### Rule 2: Task Sequencing and Dependencies

Tasks must be ordered logically with clear dependency chains:

**Sequencing Rules**:

- ✓ Task 1: Foundation/data structures (no dependencies)
- ✓ Task 2: Core functions (depends on Task 1)
- ✓ Task 3: CLI interface (depends on Tasks 1-2)
- ✓ Task 4: Integration/testing (depends on all previous)
- ✓ Use explicit "blockedBy" references for dependencies
- ✗ Circular dependencies between tasks
- ✗ Tasks that depend on future work
- ✗ Parallel tasks with hidden dependencies
**Dependency Examples**:

```text
# Good - Clear dependency chain
Task 1: Design Task data model
Task 2: Implement add_task function (blockedBy: Task 1)
Task 3: Implement delete_task function (blockedBy: Task 1)
Task 4: Create CLI menu (blockedBy: Task 2, Task 3)

# Bad - Unclear dependencies
Task 1: Some work
Task 2: More work
Task 3: Even more work
```
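A dependency chain like the good example above can be checked mechanically. As a sketch, Python's standard-library `graphlib` produces a valid execution order from each task's blockedBy list and raises on circular dependencies (the task names are the illustrative ones from the example):

```python
from graphlib import TopologicalSorter, CycleError

# Each task mapped to the tasks that block it (its predecessors).
blocked_by = {
    "Task 1: Design Task data model": [],
    "Task 2: Implement add_task": ["Task 1: Design Task data model"],
    "Task 3: Implement delete_task": ["Task 1: Design Task data model"],
    "Task 4: Create CLI menu": ["Task 2: Implement add_task",
                                "Task 3: Implement delete_task"],
}

try:
    # static_order() yields tasks with all blockers listed first.
    order = list(TopologicalSorter(blocked_by).static_order())
    print(order)
except CycleError as exc:
    # exc.args[1] holds the cycle, e.g. [A, B, A]
    print(f"Circular dependency detected: {exc.args[1]}")
```

Task 1 always comes first and Task 4 last; Tasks 2 and 3 may appear in either order since they are independent of each other.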
### Rule 3: Test-Driven Task Structure

Every implementation task must have a corresponding test task:

**TDD Task Pattern**: For every feature or function, create:

1. RED task: Write tests that fail (specifies behavior)
2. GREEN task: Implement minimal code to pass tests
3. Optional REFACTOR task: Clean up implementation

**Test-First Enforcement**:

- ✓ Test tasks come before implementation tasks
- ✓ Acceptance criteria are testable scenarios
- ✓ Includes both happy path and error cases
- ✓ Considers edge cases and boundary conditions
- ✗ Implementation tasks without test coverage
- ✗ Tests written after implementation is complete
**Example TDD Sequence**:

```text
Task 1: Write tests for add_task function (RED)
- Test adding valid task succeeds
- Test adding task with empty title fails
- Test adding duplicate task fails

Task 2: Implement add_task function (GREEN)
- Depends on: Task 1
- Implementation should pass all tests from Task 1

Task 3: Refactor add_task for clarity (REFACTOR)
- Depends on: Task 2
- Optional: Improve code structure while keeping tests green
```
### Rule 4: Acceptance Criteria Quality

Each task must have clear, verifiable acceptance criteria:

**Acceptance Criteria Rules**:
- ✓ Specific and measurable outcomes
- ✓ Includes test commands or validation steps
- ✓ Defines both success and error scenarios
- ✓ Uses Given-When-Then format where appropriate
- ✓ Can be verified by a reviewer without implementation knowledge
- ✗ Subjective or ambiguous language ("should work", "looks good")
- ✗ Missing error case handling
- ✗ Criteria that require implementation details to verify
**Good Acceptance Criteria**:

```text
Acceptance Criteria:
- Given: Task list is empty
  When: User adds task with title "Buy groceries"
  Then: Task list contains 1 task with title "Buy groceries" and completed=False
- Given: Task list has 3 tasks
  When: User marks task #2 as complete
  Then: Task #2.completed is True, other tasks remain unchanged
- Run pytest tests/test_todo.py - all tests pass
```
**Bad Acceptance Criteria**:

```text
Acceptance Criteria:
- Code should be clean
- Functions should work properly
- Look at the implementation to verify
```
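What makes the good criteria good is that each Given-When-Then clause translates directly into an executable check. As a sketch, here is the first good criterion run as code; the minimal `add_task` is only scaffolding so the assertions execute, with its signature assumed from this document's examples:

```python
# Scaffolding only: a minimal add_task so the criterion below is runnable.
tasks: dict[int, dict] = {}


def add_task(title: str, description: str = "") -> dict:
    if not title:
        raise ValueError("title must be a non-empty string")
    task = {"id": len(tasks) + 1, "title": title,
            "description": description, "completed": False}
    tasks[task["id"]] = task
    return task


# Given: Task list is empty
assert tasks == {}
# When: User adds task with title "Buy groceries"
added = add_task("Buy groceries")
# Then: Task list contains 1 task with that title and completed=False
assert len(tasks) == 1
assert added["title"] == "Buy groceries" and added["completed"] is False
```

The bad criteria ("code should be clean") admit no such translation, which is exactly why a reviewer cannot verify them.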
### Rule 5: Task Granularity Sizing

Tasks must be appropriately sized for Claude Code execution:

**Sizing Guidelines**:

- Target completion time: 15-30 minutes per task
- Code scope: 1-3 functions or 20-50 lines of code
- Complexity: single concern or responsibility
- If a task feels too large, decompose further
- If a task feels trivial (<5 min), batch it with related work

**Sizing Examples**:

```text
Too Large: "Build complete todo app with all features"
Appropriate: "Implement add_task function with data validation"
Too Small: "Add docstring to function" (batch with implementation)
```
## Task Output Format

The skill generates tasks in Spec-Kit Plus format:

```markdown
# [Feature Name] - Implementation Tasks

**Feature**: [Brief feature description]
**Phase**: [Phase number/name]
**Target**: [Expected completion timeframe]

## Task 1: [Task Subject]

**Status**: pending
**Blocks**: Task 2, Task 3

[Detailed description with context, requirements, and approach]

**Acceptance Criteria**:
- [Criterion 1 with verification method]
- [Criterion 2 with verification method]
- [Criterion 3 with verification method]

---

## Task 2: [Task Subject]

**Status**: pending
**Blocked By**: Task 1
**Blocks**: Task 4

[Detailed description]

**Acceptance Criteria**:
- [Criterion 1]
- [Criterion 2]

---

## Task 3: [Task Subject]

**Status**: pending
**Blocked By**: Task 1

[Detailed description]

**Acceptance Criteria**:
- [Criterion 1]
- [Criterion 2]
```
## Usage Instructions

### Command Line Interface (CLI)

```bash
# Decompose a plan into tasks
claude --skill task_decomposer "Break down Phase I plan into executable tasks"

# Generate tasks from specification
claude --skill task_decomposer "Convert spec.md into tasks.md format"

# Include TDD pattern
claude --skill task_decomposer "Create tasks for user authentication with TDD"

# Specify complexity level
claude --skill task_decomposer "Decompose complex plan with detailed dependencies"
```
### Interactive Mode

1. Provide an approved specification (spec.md) or implementation plan (plan.md)
2. Identify the target feature or component to decompose
3. The skill applies Rules 1-5 to break the work into atomic tasks
4. Review generated tasks for logical sequence and dependencies
5. Verify each task has clear acceptance criteria
6. Save tasks to specs/<feature>/tasks.md
7. Use the TaskCreate tool to track tasks in the system
### Integration with Spec-Kit Plus

```bash
# Full SDD workflow
claude --skill spec_writer "Create specification for Phase I CLI"
claude --skill task_decomposer "Convert spec.md to tasks.md for Phase I"
claude --skill constitution_guard "Validate tasks.md against constitution"

# Then implement using Claude Code with the generated tasks
```
## Input Validation & Constraints

### Required Input Format

- **Type**: Specification (spec.md) or implementation plan (plan.md)
- **Format**: Markdown following the Spec-Kit Plus structure
- **Minimum content**: Feature overview, requirements, and acceptance criteria
- **Completeness**: Must be approved/peer-reviewed before decomposition
### Input Schema Validation

The skill validates input against these criteria:

- Must have clear feature boundaries and scope
- Must include functional requirements
- Must have defined acceptance criteria
- Should identify the technical approach or constraints
- Preferably includes architectural decisions (from plan.md)
### Invalid Input Handling

- **No spec/plan**: Returns the error "Must provide specification or plan to decompose"
- **Unclear scope**: Requests clarification on feature boundaries
- **Missing requirements**: Asks for functional requirements before decomposition
- **Vague acceptance criteria**: Warns that tasks cannot be validated without clear success criteria
### Output Format Options

- **Default**: Spec-Kit Plus tasks.md format
- **Alternative**: JSON format for programmatic processing
- **Include levels**: Optional task metadata (estimates, priorities, tags)
### Version Compatibility

- **Claude Code Version**: Compatible with v2.0 and above
- **Spec-Kit Plus**: Compatible with v1.0 and above
- **Task Management**: Compatible with the TaskCreate/TaskUpdate tools
- **Last tested**: 2026-01-27
## Decomposition Patterns

### Pattern 1: Data Model First (Recommended)

For features involving data structures:

```text
Task 1: Design data model/schema
Task 2: Implement core CRUD operations (Create, Read)
Task 3: Implement Update and Delete operations
Task 4: Add validation and error handling
Task 5: Create CLI interface
Task 6: Integration testing
```
### Pattern 2: TDD Red-Green-Refactor

Strict TDD approach:

```text
Task 1: Write failing tests for Feature X (RED)
Task 2: Implement minimal code to pass tests (GREEN)
Task 3: Refactor implementation (REFACTOR)
Task 4: Repeat for next feature component
```
### Pattern 3: Vertical Slice (MVP)

End-to-end functionality early:

```text
Task 1: Implement minimal E2E for core feature
Task 2: Enhance data model
Task 3: Add error handling
Task 4: Improve CLI/UI
Task 5: Add remaining edge cases
```
### Pattern 4: Dependency Chain

For complex features with infrastructure needs:

```text
Task 1: Set up project structure and dependencies
Task 2: Implement utility functions/helpers
Task 3: Build core business logic
Task 4: Add API/endpoints
Task 5: Create user interface
Task 6: Testing and documentation
```
## Task Metadata Standards

### Required Metadata for Each Task

```json
{
  "subject": "Brief title (imperative form)",
  "activeForm": "Present continuous for in-progress display",
  "description": "Detailed requirements and context",
  "status": "pending|in_progress|completed",
  "acceptanceCriteria": ["Specific, testable outcomes"]
}
```
### Optional Metadata

```json
{
  "estimatedTime": "15-30 minutes (for planning)",
  "priority": "P1|P2|P3",
  "tags": ["test", "implementation", "refactor"],
  "complexity": "low|medium|high",
  "blocks": ["Task 2", "Task 3"],
  "blockedBy": ["Task 1"]
}
```
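Checking a task against the required schema above is straightforward. A minimal sketch, using the field names from this section (the validation logic itself is illustrative, not the skill's actual implementation):

```python
REQUIRED_FIELDS = {"subject", "activeForm", "description", "status",
                   "acceptanceCriteria"}
VALID_STATUSES = {"pending", "in_progress", "completed"}


def validate_task(task: dict) -> list[str]:
    """Return a list of problems; an empty list means the metadata is valid."""
    problems = [f"missing field: {f}"
                for f in sorted(REQUIRED_FIELDS - task.keys())]
    if task.get("status") not in VALID_STATUSES:
        problems.append(f"invalid status: {task.get('status')!r}")
    if not task.get("acceptanceCriteria"):
        problems.append("acceptanceCriteria must be a non-empty list")
    return problems


example = {
    "subject": "Create Task class with basic attributes",
    "activeForm": "Creating Task class",
    "description": "Implement Task with id, title, description, completed.",
    "status": "pending",
    "acceptanceCriteria": ["Task class can be instantiated"],
}
assert validate_task(example) == []
```

The same function flags a task missing fields or carrying an unknown status, which makes it easy to gate decomposed output before it reaches tasks.md.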
## Validation and Error Handling

### Self-Validation Checks

Before outputting tasks, the skill verifies:

- **Completeness**: Every spec requirement maps to at least one task
- **Sequence**: No circular dependencies; clear execution order
- **Test Coverage**: Implementation tasks have corresponding test tasks
- **Atomicity**: No task exceeds the 30-minute estimate
- **Acceptance Criteria**: Every task has verifiable success criteria
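The Test Coverage check above can be sketched as a simple scan over the ordered task list. This deliberately naive heuristic uses the optional `tags` metadata from this document; a real check would scope the test/implementation pairing per feature rather than globally:

```python
def check_tdd_order(tasks: list[dict]) -> list[str]:
    """Flag implementation tasks that appear before any test task."""
    violations, seen_test = [], False
    for task in tasks:
        tags = set(task.get("tags", []))
        if "test" in tags:
            seen_test = True
        elif "implementation" in tags and not seen_test:
            # Implementation scheduled with no preceding RED task.
            violations.append(task["subject"])
    return violations


good = [{"subject": "Write tests for add_task", "tags": ["test"]},
        {"subject": "Implement add_task", "tags": ["implementation"]}]
assert check_tdd_order(good) == []
assert check_tdd_order(list(reversed(good))) == ["Implement add_task"]
```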
### Task Quality Gates

- **P0 (Critical)**: Must pass - TDD order, atomicity, dependencies
- **P1 (Important)**: Should pass - clear acceptance criteria, sizing
- **P2 (Nice-to-have)**: Optional metadata (estimates, tags)
### Error Recovery

If decomposition fails, the skill will:

- Identify which Rule (1-5) was violated
- Provide specific examples from the input that caused the failure
- Suggest how to restructure the plan or spec
- Offer to decompose smaller sections incrementally
## Examples

### Example 1: Phase I Todo Spec → Tasks

**Input**: Specification for an in-memory Python console todo app with features: add, delete, update, view, mark complete

**Output**:
```markdown
# Phase I Todo - Implementation Tasks

**Feature**: In-memory Python Console Todo App
**Phase**: Phase I
**Target**: 2-3 hours

## Task 1: Design Task data model

**Status**: pending
**Blocks**: Task 2, Task 3, Task 4, Task 5

Create the core Task class to represent todo items. Include attributes: id (int), title (str), description (str, optional), completed (bool). Implement __init__ and __str__ methods.

**Acceptance Criteria**:
- Task class can be instantiated with title parameter
- id, description, completed have sensible defaults
- __str__ returns a readable format showing title and completion status
- Can create Task objects and access all attributes

---

## Task 2: Write tests for CRUD operations (RED)

**Status**: pending
**Blocked By**: Task 1
**Blocks**: Task 6, Task 7, Task 8, Task 9

Write comprehensive tests for Create, Read, Update, Delete operations before implementation. Include tests for success cases and error handling.

**Acceptance Criteria**:
- Tests for adding tasks (valid and invalid input)
- Tests for viewing/reading tasks
- Tests for updating tasks
- Tests for deleting tasks
- Tests for marking tasks complete
- All tests are written but FAIL (RED phase)
- Run pytest and confirm tests fail appropriately

---

## Task 3: Implement task storage (dictionary)

**Status**: pending
**Blocked By**: Task 1
**Blocks**: Task 6, Task 7, Task 8, Task 9

Create in-memory storage using a Python dictionary to store tasks with task.id as key.

**Acceptance Criteria**:
- Global or class-level storage dict exists
- Can store Task objects with integer keys
- Can retrieve tasks by ID
- Storage persists during application runtime
- Test accessing storage directly (integration test)

---

## Task 4: Implement add_task function (GREEN)

**Status**: pending
**Blocked By**: Task 2, Task 3
**Blocks**: Task 10

Create function to add new tasks to storage. Include input validation for title (required, non-empty).

**Acceptance Criteria**:
- add_task(title, description="") creates and stores new Task
- Returns the created Task object or its ID
- Validates title is non-empty string
- Raises ValueError for invalid input
- Tests from Task 2 now PASS (GREEN phase) - run pytest to confirm

---

## Task 5: Implement view_tasks function

**Status**: pending
**Blocked By**: Task 2, Task 3
**Blocks**: Task 10

Create function to retrieve and display all tasks. Support filtering by completion status.

**Acceptance Criteria**:
- view_tasks() returns list of all tasks
- view_tasks(completed=True) returns only completed tasks
- view_tasks(completed=False) returns only incomplete tasks
- Returns empty list when no tasks exist
- Tests from Task 2 PASS

---

## Task 6: Implement update_task function

**Status**: pending
**Blocked By**: Task 2, Task 4
**Blocks**: Task 10

Create function to update existing task attributes (title, description).

**Acceptance Criteria**:
- update_task(task_id, title=None, description=None) modifies task
- Updates only provided fields
- Validates title if provided (non-empty)
- Raises ValueError for invalid task_id
- Raises ValueError for invalid title
- Tests from Task 2 PASS

---

## Task 7: Implement delete_task function

**Status**: pending
**Blocked By**: Task 2, Task 4

Create function to remove tasks from storage.

**Acceptance Criteria**:
- delete_task(task_id) removes task from storage
- Returns True on success
- Raises ValueError for invalid task_id
- Storage no longer contains deleted task
- Tests from Task 2 PASS

---

## Task 8: Implement mark_complete function

**Status**: pending
**Blocked By**: Task 2, Task 4
**Blocks**: Task 10

Create function to toggle task completion status.

**Acceptance Criteria**:
- mark_complete(task_id) toggles completed status
- Returns updated Task object
- Raises ValueError for invalid task_id
- Can mark complete → incomplete and incomplete → complete
- Tests from Task 2 PASS

---

## Task 9: Implement CLI menu interface

**Status**: pending
**Blocked By**: Task 4, Task 5, Task 6, Task 7, Task 8

Create command-line interface with menu options for all operations.

**Acceptance Criteria**:
- Displays menu with options: 1-Add, 2-View, 3-Update, 4-Delete, 5-Mark Complete, 6-Exit
- Each option triggers appropriate function
- Input validation for menu choices
- Loop continues until user selects Exit
- Clear screen or formatting between operations
- Manual test: run app and verify each menu option works

---

## Task 10: Create integration test and run full demo

**Status**: pending
**Blocked By**: Task 9

Create comprehensive integration test and demonstrate full application functionality.

**Acceptance Criteria**:
- Integration test exercises all functions: add, view, update, delete, mark complete
- Test uses CLI interface programmatically or tests core functions in sequence
- All unit tests from Task 2 PASS
- Integration test PASS
- Manual demo: run application and perform complete workflow
- Screenshots or console output captured

---

Total Tasks: 10
Estimated Time: 2.5-3 hours
```
### Example 2: Simple Task Decomposition

**Input**: "Create a Python function to calculate fibonacci numbers"

**Output**:

```markdown
## Task 1: Write tests for fibonacci function

**Description**: Before implementation, write tests that define expected behavior. Include tests for base cases (n=0, n=1), positive integers, and error cases (negative numbers, non-integers).

**Acceptance Criteria**:
- Test fibonacci(0) returns 0
- Test fibonacci(1) returns 1
- Test fibonacci(5) returns 5
- Test fibonacci(10) returns 55
- Test fibonacci(-1) raises ValueError
- Test fibonacci(3.5) raises ValueError
- Run pytest - all tests fail (RED phase)

---

## Task 2: Implement fibonacci function

**Blocked By**: Task 1
**Description**: Implement recursive or iterative fibonacci calculation that passes all tests from Task 1. Include input validation.

**Acceptance Criteria**:
- Function calculates fibonacci numbers correctly
- Validates input (non-negative integer)
- Raises appropriate errors for invalid input
- Run pytest - all tests pass (GREEN phase)
- Code is minimal and focused

---

## Task 3: Refactor for performance (Optional)

**Blocked By**: Task 2
**Description**: If needed, optimize the implementation (e.g., add memoization) while keeping tests green.

**Acceptance Criteria**:
- All tests from Task 1 still pass
- Performance improvement measured (if applicable)
- Code is clean and well-documented
```
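For reference, the GREEN task above could be satisfied by an iterative implementation like this sketch, which meets every acceptance criterion listed (memoization would be the optional REFACTOR step):

```python
def fibonacci(n: int) -> int:
    """Return the n-th Fibonacci number, with fibonacci(0) == 0."""
    # bool is a subclass of int, so reject it explicitly along with
    # non-integers and negative values, per the error-case criteria.
    if isinstance(n, bool) or not isinstance(n, int) or n < 0:
        raise ValueError("n must be a non-negative integer")
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a


assert fibonacci(5) == 5
assert fibonacci(10) == 55
```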
### Example 3: Detecting Poor Input

**Input**: "Build the app"

**Output**:

```text
Error: Input is too vague for task decomposition

Issues found:
- No feature boundaries defined
- Missing requirements or acceptance criteria
- Cannot determine atomic tasks
- Unclear what "app" refers to

Resolution:
1. Create specification first using:
   claude --skill spec_writer "Define requirements for todo app"
2. Create implementation plan: run '/sp.plan' with the spec
3. Then use task_decomposer on the resulting plan

Cannot decompose without clear specification or plan.
```
## Integration with Claude Code Tools

### Workflow Integration

```bash
# 1. Create spec (output: specs/phase-i/spec.md)
claude --skill spec_writer "Create Phase I specification"

# 2. Create plan (output: specs/phase-i/plan.md)
# Human/architect creates or reviews architectural plan

# 3. Decompose into tasks (output: specs/phase-i/tasks.md)
claude --skill task_decomposer "Convert plan.md to executable tasks"

# 4. Validate against constitution
claude --skill constitution_guard "Validate tasks.md"

# 5. Create tasks in system
# Use TaskCreate for each task or batch import

# 6. Implement with Claude Code
# Work through tasks sequentially using the generated tasks.md as a guide

# 7. Update task status
# Use TaskUpdate to mark tasks in_progress and completed
```
### TaskCreate Integration

When tasks are decomposed, they can be created directly:

```python
# Example: after decomposition, automatically create each task
for task in decomposed_tasks:
    TaskCreate(
        subject=task["subject"],
        activeForm=task["activeForm"],
        description=task["description"],
        metadata={"phase": "phase-i", "feature": "todo-cli"},
    )
```
## Best Practices

### For Plan Authors

- **Provide clear specifications**: The better the input spec/plan, the better the task decomposition
- **Define acceptance criteria**: Clear success criteria enable better task definitions
- **Identify constraints**: Technical constraints help sequence tasks appropriately
- **Specify architecture**: High-level architecture guides task organization
### For Task Decomposition

- **Start with the data model**: Most applications benefit from defining data structures first
- **Apply TDD rigorously**: Writing tests before implementation catches issues early
- **Consider dependencies**: Think about what each task needs before it can be completed
- **Batch tiny tasks**: Group trivial changes (<5 min) with related work
- **Validate atomicity**: Each task should have ONE clear deliverable
### For Task Execution

- **Follow the order**: Don't skip ahead to blocked tasks
- **Test continuously**: Run tests after every implementation task
- **Update status**: Keep task status current (pending → in_progress → completed)
- **Document learnings**: Update tasks.md if new tasks or dependencies are discovered
## Common Decomposition Patterns
| Feature Type | Recommended Pattern | Example Tasks |
|---|---|---|
| Data Model | Model First | 1. Design schema<br>2. Implement model<br>3. Validation |
| API Endpoint | TDD | 1. Write endpoint tests<br>2. Implement handler<br>3. Integration test |
| CLI Tool | Vertical Slice | 1. E2E happy path<br>2. Error handling<br>3. Polish |
| UI Component | Component-First | 1. Component structure<br>2. Styling<br>3. Interaction |
| Database Change | Migration-First | 1. Migration<br>2. Model updates<br>3. Test updates |
## Troubleshooting

### Problem: Too Many Tasks

**Solution**: Batch related small tasks:

```text
# Instead of:
Task 1: Add docstring to function A
Task 2: Add docstring to function B
Task 3: Add docstring to function C

# Combine:
Task 1: Add docstrings to functions A, B, and C
```
### Problem: Tasks Still Too Large

**Solution**: Decompose further:

```text
# If:
Task 1: Implement complete user authentication

# Decompose to:
Task 1: Create User model
Task 2: Implement password hashing utility
Task 3: Create login function
Task 4: Create register function
Task 5: Add authentication middleware
```
### Problem: Unclear Dependencies

**Solution**: Use temporary sequencing:

```text
Task 1: Research API requirements
Task 2: Design API interface (blockedBy: Task 1)
Task 3: Implement API client (blockedBy: Task 2)
```
### Problem: Can't Test Before Implementing

**Solution**: Adjust TDD for reality:

```text
Task 1: Spike/research approach (no tests)
Task 2: Write tests for discovered approach
Task 3: Implement based on tests
```
## Integration with Hackathon II Phases

### Phase I (Console App)

- **Input**: Simple spec with 5 basic features
- **Output**: 8-12 tasks focused on data model and CRUD
- **Pattern**: Data Model First + TDD
### Phase II (Full-Stack Web)

- **Input**: Spec with Next.js frontend, FastAPI backend, Neon DB
- **Output**: 15-25 tasks across frontend, backend, and integration
- **Pattern**: Vertical Slice + parallel tracks (frontend/backend)
### Phase III (AI Chatbot)

- **Input**: Spec with OpenAI Agents SDK, natural language processing
- **Output**: 20-30 tasks including model integration and conversation flow
- **Pattern**: Component-First + integration testing
### Phase IV (Kubernetes Deployment)

- **Input**: Spec with Docker, Minikube, Helm charts
- **Output**: 12-18 tasks for containerization and deployment
- **Pattern**: Infrastructure-First + configuration management
### Phase V (Cloud Deployment)

- **Input**: Spec with Kafka, Dapr, DigitalOcean DOKS
- **Output**: 25-35 tasks for cloud-native architecture
- **Pattern**: Service-First + event-driven patterns
---

**Skill Version**: 1.0.0
**Last Updated**: 2026-01-27
**Compatibility**: Claude Code v2.0+, Spec-Kit Plus v1.0+
**Project**: Hackathon II - Evolution of Todo
## Maintenance Notes

### For Skill Updates

- When new Claude Code features emerge, update the task patterns
- Maintain compatibility with the TaskCreate/TaskUpdate tool signatures
- Test decomposition with sample specs from each hackathon phase
- Update patterns based on real usage feedback
### For Users

- Review generated tasks before implementation
- Adjust task estimates based on your experience level
- Don't hesitate to decompose tasks further if needed
- Document the tasks.md location for team visibility

*End of Skill Definition*