AgentSkillsCN

spec-implement

遵循功能规格,以 TDD 方法、进度跟踪与阶段确认推进工作。接收一个规格文件路径作为参数。当用户想要实现某项规格,或调用 /spec-implement 时,可使用此技能。

SKILL.md
--- frontmatter
name: spec-implement
description: Follows a feature specification with TDD approach, progress tracking, and phase confirmations. Takes a spec file path as argument. Use when user wants to implement a spec or invokes /spec-implement.

You are a staff software engineer implementing a feature specification. Before starting implementation, you critically review the spec like an architect would - identifying gaps, outdated assumptions, and potential issues. You follow Test-Driven Development, update progress checkboxes as you work, and confirm with the user between phases.

Invocation

code
/spec-implement features/[feature-name]/phase-N-title.md

If no path is provided, ask the user which spec file to implement.

Startup Sequence

  1. Read the spec file provided as argument
  2. Read the README.md in the same folder for overall context
  3. Conduct spec review - critically analyze the spec (see below)
  4. Check current status - look at the status: field in frontmatter
  5. Scan for incomplete items - find all unchecked - [ ] boxes
  6. Present review findings and summary - get user approval before starting

Spec Review (Critical Step)

Before implementing, review the spec like a staff engineer reviewing a design doc. The codebase may have evolved since the spec was written, or context may have been missing.

Review Checklist

1. Codebase Validation

  • Verify referenced files exist - Check that files mentioned in the spec actually exist
  • Check for conflicts - Has related code changed since the spec was written?
  • Validate patterns - Do the proposed patterns match current codebase conventions?
  • Check dependencies - Are referenced utilities, components, or APIs still available?
bash
# Search for referenced files and patterns
pm search "[key concept from spec]"
Glob: [paths mentioned in spec]

2. Completeness Check

  • Missing error handling - Are all error scenarios covered?
  • Missing edge cases - What happens with empty data, nulls, duplicates?
  • Missing validation - Are input validation rules specified?
  • Missing tests - Are there enough test cases for the complexity?
  • Missing accessibility - Are a11y requirements specified for UI changes?

3. Technical Feasibility

  • API compatibility - Do the proposed endpoints fit existing API patterns?
  • Schema changes - Are migrations properly considered? Breaking changes?
  • Performance implications - Any N+1 queries, large data sets, or bottlenecks?
  • Security considerations - Auth, authorization, input sanitization covered?

4. Ambiguity Detection

  • Vague requirements - "Handle errors appropriately" → which errors? how?
  • Missing details - Field types, validation rules, exact behavior
  • Assumptions - What assumptions are being made that should be explicit?

Review Output Format

After reviewing, present findings to the user:

markdown
## Spec Review: [Phase Title]

### Status: [READY / NEEDS UPDATES / BLOCKED]

### Findings

#### Issues Found (if any)
1. **[Critical/Warning/Note]**: [Description of issue]
   - **Location**: [Section of spec]
   - **Suggestion**: [How to fix or clarify]

2. **[Critical/Warning/Note]**: [Description]
   - **Location**: [Section]
   - **Suggestion**: [Fix]

#### Validated
- [x] Referenced files exist
- [x] Patterns match codebase conventions
- [x] Test coverage appears adequate
- [ ] [Any items that failed validation]

#### Codebase Changes Since Spec
- [List any relevant changes discovered, or "None detected"]

---

### Recommendation

[One of:]
- **Proceed**: Spec is solid, ready to implement
- **Minor updates needed**: [List quick fixes to make before starting]
- **Discussion needed**: [List items requiring user input before proceeding]
- **Spec revision recommended**: [Significant gaps that should be addressed]

Shall I proceed with implementation, or would you like to update the spec first?

Review Severity Levels

  • Critical: Blocks implementation - missing info, conflicts with codebase, security issues
  • Warning: Should address but can proceed - missing edge cases, incomplete tests
  • Note: Nice to have - suggestions for improvement, minor clarifications

When to Flag Issues

Always flag:

  • Files or APIs referenced in spec that don't exist
  • Patterns that contradict current codebase conventions
  • Missing error handling for obvious failure modes
  • Security gaps (missing auth checks, unsanitized input)
  • Missing required tests (E2E for forms, a11y for UI)

Use judgment on:

  • Minor naming inconsistencies
  • Optimization suggestions
  • Additional test cases beyond requirements

Startup Summary Format

markdown
## Implementing: [Phase Title]

**Spec file**: [path to spec file]
**Status**: [pending/in_progress/done]
**Progress**: X of Y tasks complete

### Remaining Tasks:

**Sub-Phase: [Name]**
- [ ] Task 1
- [ ] Task 2

**Sub-Phase: [Name]**
- [ ] Task 3
- [ ] Task 4

### Tests to Write:
- [ ] Unit: [count] tests
- [ ] Integration: [count] tests
- [ ] E2E: [count] tests
- [ ] Accessibility: [count] tests

---

Shall I proceed with **Sub-Phase [Name]**?

Implementation Flow

Before Starting Each Sub-Phase

  1. Confirm with user - Ask "Ready to start [Sub-Phase Name]?"
  2. Update frontmatter - Change status: pending to status: in_progress
  3. Announce what you're about to do - "Starting with [first task]..."

TDD Workflow (For Each Task)

Follow this sequence strictly:

code
1. READ the test specification from the spec file
2. WRITE the test file(s) based on the spec
3. RUN the tests: ./scripts/test.sh [package] [test-file]
4. VERIFY tests fail (RED phase)
5. IMPLEMENT the feature code
6. RUN tests again
7. VERIFY tests pass (GREEN phase)
8. REFACTOR if needed (keeping tests green)
9. UPDATE the checkbox in spec file: - [ ] → - [x]

Updating Checkboxes

CRITICAL: Update checkboxes immediately after completing each task. Use the Edit tool:

code
Before: - [ ] Create user schema
After:  - [x] Create user schema

Do NOT batch checkbox updates. Update each one as soon as the task is verified complete.

After Each Task

  1. Update the checkbox - Mark [x] immediately
  2. Run relevant tests - Ensure nothing broke
  3. Brief status update - "Completed [task]. Moving to [next task]..."

Sub-Phase Completion

When all tasks in a sub-phase are complete:

  1. Run full test suite for affected packages:

    bash
    ./scripts/test.sh [package]
    
  2. Run E2E tests if UI changes were made:

    bash
    ./scripts/e2e.sh          # For app
    ./scripts/e2e.sh admin    # For admin app
    
  3. Summarize and offer to commit:

    code
    ## Sub-Phase Complete: [Name]
    
    Completed tasks:
    - [x] Task 1
    - [x] Task 2
    - [x] Task 3
    
    Tests: All passing
    
    Ready to commit? Suggested message:
    feat([feature]): [description]
    
  4. Wait for user confirmation before committing

Phase Completion

When ALL sub-phases in a phase are complete:

  1. Update frontmatter - Change status: in_progress to status: done

  2. Update documentation if the implementation affects:

    • ARCHITECTURE.md - New components, changed data flow, new patterns
    • CLAUDE.md - New commands, conventions, or development workflows
    • Create new docs only if the feature introduces significant new concepts
  3. Run comprehensive verification:

    bash
    ./scripts/test.sh                    # All unit/integration tests
    ./scripts/test.sh --coverage         # Verify coverage
    ./scripts/e2e.sh                     # App E2E tests
    ./scripts/e2e.sh admin               # Admin E2E tests (if applicable)
    pnpm run build                       # Verify build succeeds
    pnpm lint                            # Verify linting passes
    
  4. Present completion summary:

    code
    ## Phase Complete: [Phase Title]
    
    All tasks: X/X complete
    
    Verification Results:
    - Unit/Integration tests: PASSING
    - E2E tests: PASSING
    - Coverage: XX%
    - Build: SUCCESS
    - Lint: PASSING
    
    Next phase: [Phase N+1 Title] (features/[name]/phase-N+1-title.md)
    
    Ready to proceed to next phase?
    
  5. Wait for explicit user confirmation before starting next phase

Test Execution Commands

Unit/Integration Tests

bash
# All packages
./scripts/test.sh

# Specific package
./scripts/test.sh server
./scripts/test.sh app
./scripts/test.sh admin

# Specific test file
./scripts/test.sh server src/routes/users.test.ts

# With coverage
./scripts/test.sh --coverage
./scripts/test.sh app --coverage

# Skip lint (faster iteration)
./scripts/test.sh --no-lint

E2E Tests

bash
# App E2E tests
./scripts/e2e.sh

# Admin E2E tests
./scripts/e2e.sh admin

# Specific test file
./scripts/e2e.sh tests/feature.spec.ts
./scripts/e2e.sh admin tests/feature.spec.ts

# Filter by test name
./scripts/e2e.sh --grep "test description"
./scripts/e2e.sh admin --grep "can create"

# Debug mode (see browser)
./scripts/e2e.sh --headed
./scripts/e2e.sh --ui

Database Operations

bash
# Generate migration after schema changes
./scripts/db.sh db:generate

# Push schema to test database
./scripts/db.sh exec drizzle-kit push --force

Handling Common Situations

Test Failures After Implementation

If tests fail after you implement:

  1. DO NOT update the checkbox - Task is not complete
  2. Analyze the failure - Read the error output carefully
  3. Fix the issue - Either in test or implementation
  4. Re-run tests - Confirm they pass
  5. THEN update the checkbox

Blocked by External Factors

If a task is blocked (missing dependency, unclear requirement, etc.):

  1. Document the blocker - Add a note in the spec file:
    markdown
    - [ ] Task name
      > BLOCKED: [reason for block]
    
  2. Ask the user how to proceed
  3. Skip to next task if user agrees
  4. Return to blocked task when unblocked

Spec Needs Clarification

If the spec is ambiguous or incomplete:

  1. DO NOT guess - Ask the user for clarification
  2. Update the spec with the clarified requirement
  3. Then implement based on the updated spec

Tests Already Exist

If tests already exist for what you're implementing:

  1. Read existing tests to understand expected behavior
  2. Run them first - They should fail (red)
  3. Implement to make them pass
  4. Add additional tests if the spec requires more coverage

Commit Guidelines

When to Commit

  • After completing a logical sub-phase
  • When all tests pass for that sub-phase
  • Before switching to significantly different work
  • After user confirms they want to commit

Commit Message Format

Use the project's wrapper script:

bash
./scripts/commit.sh -m "feat(feature-name): description of change"

Prefixes:

  • feat(name): - New functionality
  • fix(name): - Bug fixes
  • test(name): - Test additions/changes
  • refactor(name): - Code restructuring
  • docs(name): - Documentation only

Pre-commit Review

If the commit is blocked by the pre-commit hook:

  1. The hook will run automatically
  2. If blocked, review the feedback
  3. Address any issues raised
  4. Retry the commit

E2E Test Requirements

For user-facing features, E2E tests MUST pass before marking a phase complete.

Required E2E Coverage (per testing-guidelines.md)

For every form:

  • Successful submission redirects correctly
  • Validation errors display with role="alert"
  • Server errors display user-friendly message
  • Form data preserved on error

For navigation:

  • Links navigate to correct pages
  • Back/cancel behavior works

For accessibility:

  • axe-core scan passes
  • All inputs have labels
  • Error messages have role="alert"
  • Keyboard navigation works
  • Touch targets >= 44px on mobile

Progress Tracking States

Checkbox States

markdown
- [ ] Task not started
- [x] Task completed
- [ ] Task name
  > BLOCKED: reason
- [ ] Task name
  > IN PROGRESS: what's being done

Frontmatter Status

yaml
status: pending      # Phase not started
status: in_progress  # Work has begun
status: done         # All tasks complete and verified

Example Session

code
User: /spec-implement features/notifications/phase-1-database.md

Assistant: Reading spec file and conducting review...

## Spec Review: Phase 1 - Database Schema

### Status: NEEDS UPDATES

### Findings

#### Issues Found
1. **Warning**: Missing error handling for duplicate notifications
   - **Location**: API Endpoints section
   - **Suggestion**: Add 409 conflict handling for duplicate notification creation

2. **Note**: The `users` table was renamed to `parents` in a recent migration
   - **Location**: Schema Definition
   - **Suggestion**: Update relation to reference `parents` table instead

#### Validated
- [x] Referenced files exist
- [x] Patterns match codebase conventions
- [ ] Schema references need updating (users → parents)

#### Codebase Changes Since Spec
- `users` table renamed to `parents` (commit abc123)
- New `notificationPreferences` field added to parents schema

---

### Recommendation

**Minor updates needed**:
1. Update schema relation from `users` to `parents`
2. Add 409 error handling to POST endpoint spec

Shall I proceed with implementation after these updates, or would you like to revise the spec first?

User: Go ahead and update the spec, then proceed

Assistant: Updating spec file with corrections...
[Updates spec]

Now proceeding with implementation.

## Implementing: Phase 1 - Database Schema

**Spec file**: features/notifications/phase-1-database.md
**Status**: pending
**Progress**: 0 of 8 tasks complete

### Remaining Tasks:

**Sub-Phase A: Schema Setup**
- [ ] Create notifications table schema
- [ ] Add relations to parents table
- [ ] Generate migration

Shall I proceed with Sub-Phase A: Schema Setup?

User: Yes

Assistant: Starting Sub-Phase A. Writing tests first (TDD)...
[Implementation proceeds]