Session Debrief
Review this session for mistakes, corrections, and learnings. Generate concrete rule updates.
[GOAL]
Identify what went wrong, what was missing, and what went well in this session. Convert findings into specific, actionable rule updates across CLAUDE.md, Chief-of-Staff.md, memory files, and skills.
[PROCESS]
Step 0: Self-Review (scan the conversation)
Scan the full conversation for each category below. List every instance found, not just the first.
0a. Corrections - Where the user said "no", "that's wrong", "I meant...", "not that", or redirected you.
- What you did vs what they wanted
- Rule: when X, do Y instead of Z
0b. Failed approaches - Things tried 2+ times before working, tool calls that errored, dead ends.
- What failed and why
- Rule: avoid X because Y; instead do Z
0c. Repeated instructions - Things the user had to say more than once, or context they had to re-provide.
- What should have been known from existing docs
- Rule: always check X before doing Y
0d. Frustration signals - Pushback, exasperation, "I already said...", course corrections, user taking over.
- What triggered it
- Rule: don't X without asking first
0e. What worked well - Approaches the user praised, things that went smoothly, good patterns to reinforce.
- What made it work
- Rule: continue doing X when Y
If no instances found for a category, say "None detected" and move on.
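For instance (a made-up case for illustration), a correction logged under 0a can already sketch the rule as the one-liner you would add later:

```diff
+ When asked to change one setting, edit that key; don't regenerate the whole config file.
```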
Step 1: Reflect on missing context
What context was missing that would have helped from the start?
- Commands or patterns discovered mid-session
- Environment/configuration quirks encountered
- Code style patterns that weren't documented
- Testing approaches that worked
- Gotchas or warnings encountered
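For example, if a prerequisite only surfaced mid-session (the Supabase tunnel case used under [IMPORTANT] below), the missing-context entry can already be phrased as the eventual one-liner:

```diff
+ When working with Supabase MCP, always start tunnels first.
```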
Step 2: Find target files
Identify where each finding belongs:
| Target | When to use |
|---|---|
| CLAUDE.md (project) | Team-shared patterns, project-specific rules |
| ~/.claude/CLAUDE.md (global) | Cross-project behaviors, persona rules |
| Cockpit/Chief-of-Staff.md | Facts, preferences, project context used daily |
| memory/topics/*.md | Detailed context for specific domains |
| skills/*.md | Skill-specific improvements |
| rules/*.md | Coding guideline updates |
Read each target file before proposing edits.
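For example (both findings invented): a build command that exists only in this repo targets the project CLAUDE.md, a cross-project habit such as "ask before force-pushing" targets ~/.claude/CLAUDE.md, and a detailed write-up of a tricky migration belongs under memory/topics/ rather than in either CLAUDE.md.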
Step 3: Draft updates
For each finding, draft a concrete update:
### Update: <filepath>
**Finding:** <what happened>
**Rule:** <when X, do Y instead of Z>
```diff
+ <the addition - one line per concept, keep it brief>
```
Keep it concise - one line per concept. These files are part of the prompt, so brevity matters.
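A filled-in example (the command names are hypothetical): if tests kept failing under `npm test` until `pnpm test` was used, the CLAUDE.md diff would be nothing more than:

```diff
+ Run tests with `pnpm test`; `npm test` fails in this workspace.
```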
Avoid:
- Vague self-improvement ("be more careful")
- Obvious information
- One-off fixes unlikely to recur
- Duplicating rules already present
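To make the contrast concrete (file name and detail invented), here is a vague rule replaced by a specific one:

```diff
- Be more careful when editing config files.
+ Back up wrangler.toml before editing; the deploy script regenerates it.
```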
Step 4: Present summary
Show the user:
## Session Debrief

### Corrections (X found)
- [list each with one-line rule]

### Failed Approaches (X found)
- [list each]

### Missing Context (X found)
- [list each]

### What Worked Well (X found)
- [list each]

### Proposed Updates (X total)
[Step 3 output]
Step 5: Apply with approval
Ask {USER_NAME} which changes to apply. Only edit files they approve.
After applying, confirm what was changed and where.
[IMPORTANT]
- Be specific - "when working with Supabase MCP, always start tunnels first", not "remember to prepare".
- Be honest - If the session went perfectly, say so. Don't manufacture findings.
- No self-flagellation - State facts, propose fixes. Skip the apologies.
- Cite the moment - Reference what the user said or what failed, so they can verify.
- Deduplicate - Check existing rules before adding new ones.
- One line per rule - These go into prompt context. Every word costs tokens.