# Ask a Co-Worker
Get a fresh perspective from a different AI model when you're stuck, looping, or want to validate an approach. Sometimes all it takes is a different way of looking at the problem.
## When to Use This Skill

**Auto-invoke** (Claude should proactively use this when):

- You've attempted the same approach 3+ times and it keeps failing
- You're going in circles — reverting changes, re-trying the same fix
- An error message is opaque and your debugging isn't converging
- You need a fundamentally different architectural perspective
- You've exhausted your ideas and are about to tell the user "I'm stuck"

**Manual invoke** (user triggers with `/ask-coworker`):

- User explicitly wants a second opinion
- User wants to compare approaches across models
- Brainstorming — getting multiple angles on a design decision
## Available Co-Workers

Always use the best coding models. The point is expert-level second opinions, not saving tokens.

### Codex Models (Responses API) — Best for Code

These are OpenAI's dedicated coding models. Use `ask-codex.py` (they require the Responses API, not Chat Completions).

| Model | Best For |
|---|---|
| `codex-mini-latest` | Default for code. Fast, strong at debugging and code generation. Based on o4-mini. |
| `gpt-5.2-codex` | Heaviest coding model. Use for complex architectural questions. |

### Chat Models (`llm` CLI)

General-purpose models available via `llm`. Use for architecture, design, and non-code questions.

| Model | Alias | Best For |
|---|---|---|
| `o3` | — | Complex reasoning, multi-step debugging |
| `o4-mini` | — | Fast reasoning, good cost/quality balance |
| `gpt-4.1` | `4.1` | General coding and architecture |
### Model Selection

- Code questions → `codex-mini-latest` (default) or `gpt-5.2-codex` (complex)
- Architecture/design → `o3` for deep reasoning or `gpt-4.1` for quick takes
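The selection rules above can be sketched as a tiny shell helper. This is purely illustrative: `pick_model` is a hypothetical function, not part of `ask-codex.py` or `llm`; it just maps a question category to one of the model names listed above.

```shell
# Hypothetical helper: map a question category to a co-worker model name.
# Not part of either tool -- a sketch of the selection rules above.
pick_model() {
  case "$1" in
    code)   echo "codex-mini-latest" ;;  # default for code questions
    hard)   echo "gpt-5.2-codex" ;;      # complex architectural code questions
    design) echo "o3" ;;                 # deep reasoning on design decisions
    quick)  echo "gpt-4.1" ;;            # fast general takes
    *)      echo "codex-mini-latest" ;;  # fall back to the code default
  esac
}

pick_model design   # prints: o3
```

It could then be used as, e.g., `llm -m "$(pick_model design)" "your question"`.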
## How to Ask

**Two tools available:**

**`ask-codex.py`** — for Codex models (Responses API):

```bash
python C:/Users/robert/project/ask-coworker/ask-codex.py "Your question here"
python C:/Users/robert/project/ask-coworker/ask-codex.py -m gpt-5.2-codex "Your question here"
python C:/Users/robert/project/ask-coworker/ask-codex.py -s "You are a TypeScript expert" "Your question here"
```

**`llm`** — for chat/reasoning models (on PATH):

```bash
llm -m o3 "Your question here"
llm -m gpt-4.1 "Your question here"
llm -m gpt-4.1 -s "You are a senior backend engineer" "Your question here"
```
## Rules for Asking Good Questions

**DO:**

- Include the error message — paste the exact error, not a summary
- Include the relevant code snippet — 10-30 lines of context, not entire files
- State what you've tried — "I tried X and Y, both failed because Z"
- Ask a specific question — "Why might this deadlock?" not "Help"
- Include the tech stack — "Node 20, TypeScript, PostgreSQL 16, Express"

**DON'T:**

- Paste entire files (token waste, dilutes the question)
- Ask vague questions ("What's wrong with this code?")
- Use this as a first resort — try solving it yourself first
- Blindly trust the response — validate before applying
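Putting the DO rules together, a well-formed question might look like this. The scenario (a Node/Express connection-pool timeout) is a hypothetical example, and the actual invocation is left commented out; substitute your own error, code, and attempts.

```shell
# A question assembled per the DO rules. The scenario below is hypothetical --
# substitute your own error message, snippet, attempts, and stack.
QUESTION=$(cat <<'EOF'
Why does this Express handler exhaust the connection pool under load?

ERROR: timeout exceeded when trying to connect (pool size 10)

Relevant code:
app.get('/report', async (req, res) => {
  const client = await pool.connect();
  const rows = await client.query('SELECT * FROM reports');
  res.json(rows);   // client.release() is never called on this path
});

What I've tried:
- Raised the pool size from 10 to 50: failure is delayed, not fixed
- Added a per-query timeout: requests fail faster but still pile up

Tech stack: Node 20, TypeScript, PostgreSQL 16, Express
EOF
)

# Then send it (uncomment to actually ask):
# python C:/Users/robert/project/ask-coworker/ask-codex.py "$QUESTION"
echo "$QUESTION" | head -n 1
```

Note it follows every DO rule: exact error, a focused snippet, what was tried and why it failed, the stack, and one specific question.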
## Question Templates

### Debugging a persistent error (use Codex)

```bash
python C:/Users/robert/project/ask-coworker/ask-codex.py -s "You are a debugging expert. Be concise and specific." "
I'm getting this error repeatedly and can't figure out why:

ERROR: [paste exact error]

Relevant code:
[10-30 lines of relevant code]

What I've tried:
- [approach 1 and why it failed]
- [approach 2 and why it failed]

Tech stack: [languages, frameworks, versions]

What am I missing?
"
```

### Architecture/design decision (use o3)

```bash
llm -m o3 "
I need to decide between these approaches for [problem]:

Option A: [description]
Option B: [description]

Context: [constraints, scale, team size, timeline]

Give pros/cons and a clear recommendation.
"
```

### Alternative approach brainstorm

```bash
python C:/Users/robert/project/ask-coworker/ask-codex.py -m gpt-5.2-codex -s "Suggest 3 completely different approaches. Be creative." "
Current approach: [what you're doing]
Problem: [why it's not working]
Constraints: [what must be preserved]
"
```
## Interpreting Responses

After receiving a co-worker's response:

- Don't blindly apply suggestions — evaluate them critically
- Look for the insight, not the exact code — the value is the different perspective
- If the suggestion doesn't work, that's fine — it may still have shifted your thinking
- Report back to the user — summarize what the co-worker suggested and your assessment:
  - "I asked Codex about the deadlock. It suggested [X]. I think that's worth trying because [Y]."
  - "o3 suggested [approach], but I don't think it applies here because [reason]. However, it did make me realize [insight]."
## Checking Available Models

```bash
# List all installed models
llm models list

# Check which keys are configured
llm keys list
```
## Troubleshooting

| Issue | Fix |
|---|---|
| "API key not found" | Run `llm keys path` to find `keys.json`, verify keys exist |
| Timeout on long responses | Add the `--no-stream` flag for `llm`, or increase the timeout in `ask-codex.py` |
| Want to review past queries | `llm logs list` shows conversation history (`llm` only, not Codex) |
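Before relying on the skill mid-task, a quick preflight can confirm the prerequisites are in place. This is a sketch: the `check` helper is hypothetical, and the script path is the one assumed throughout this document.

```shell
# Hypothetical preflight: report whether each prerequisite check succeeds,
# without aborting on the first failure.
check() {
  if "$@" >/dev/null 2>&1; then
    echo "ok: $*"
  else
    echo "MISSING: $*"
  fi
}

check command -v llm                                              # llm on PATH?
check command -v python                                           # python on PATH?
check test -f C:/Users/robert/project/ask-coworker/ask-codex.py   # script present?
```

Anything reported as `MISSING` maps to a row in the troubleshooting table above or to a tool that still needs installing.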