AgentSkillsCN

ask-coworker

When you hit a wall or find yourself looping, ask a different large language model for a fresh perspective

SKILL.md
---
name: ask-coworker
description: Ask a different LLM for a fresh perspective when stuck or looping
argument-hint: "[question or 'help me with...']"
---

Ask a Co-Worker

Get a fresh perspective from a different AI model when you're stuck or looping, or when you want to validate an approach. Sometimes all it takes is a different way of looking at the problem.

When to Use This Skill

Auto-invoke (Claude should proactively use this when):

  • You've attempted the same approach 3+ times and it keeps failing
  • You're going in circles — reverting changes, re-trying the same fix
  • An error message is opaque and your debugging isn't converging
  • You need a fundamentally different architectural perspective
  • You've exhausted your ideas and are about to tell the user "I'm stuck"

Manual invoke (user triggers with /ask-coworker):

  • User explicitly wants a second opinion
  • User wants to compare approaches across models
  • Brainstorming — getting multiple angles on a design decision

Available Co-Workers

Always use the best coding models. The point is expert-level second opinions, not saving tokens.

Codex Models (Responses API) — Best for Code

These are OpenAI's dedicated coding models. Use ask-codex.py (they require the Responses API, not Chat Completions).

| Model | Best For |
|---|---|
| codex-mini-latest | Default for code. Fast, strong at debugging and code generation. Based on o4-mini. |
| gpt-5.2-codex | Heaviest coding model. Use for complex architectural questions. |
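The split matters because Codex models only accept the Responses API. As a hedged illustration (the real ask-codex.py is not shown in this skill, so the argument names and defaults below are assumptions inferred from the flags used later), a minimal version of such a script using the official OpenAI Python SDK might look like:

```python
# Hypothetical sketch of ask-codex.py -- the real script is not included in
# this skill, so treat names and defaults here as assumptions.
import argparse


def parse_args(argv=None):
    parser = argparse.ArgumentParser(
        description="Ask a Codex model via the OpenAI Responses API"
    )
    parser.add_argument("question", help="The question to send")
    parser.add_argument("-m", "--model", default="codex-mini-latest",
                        help="Codex model to query")
    parser.add_argument("-s", "--system", default=None,
                        help="Optional system prompt (sent as 'instructions')")
    return parser.parse_args(argv)


def ask(args):
    # Imported lazily so argument parsing works even without the SDK installed.
    from openai import OpenAI  # assumes the official `openai` package

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    # Codex models require responses.create, not chat.completions.create.
    response = client.responses.create(
        model=args.model,
        instructions=args.system,
        input=args.question,
    )
    return response.output_text


if __name__ == "__main__":
    print(ask(parse_args()))
```

The lazy import is deliberate: `-h` and flag errors should work even on a machine where the SDK is missing.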

Chat Models (llm CLI)

General-purpose models available via llm. Use for architecture, design, and non-code questions.

| Model | Alias | Best For |
|---|---|---|
| o3 | | Complex reasoning, multi-step debugging |
| o4-mini | | Fast reasoning, good cost/quality balance |
| gpt-4.1 | 4.1 | General coding and architecture |

Model Selection

  • Code questions → codex-mini-latest (default) or gpt-5.2-codex (complex)
  • Architecture/design → o3 for deep reasoning or gpt-4.1 for quick takes
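The two rules above can be captured in a tiny dispatcher. This is an illustrative helper only, not part of the skill; the function name, the `kind` values, and the `complex_task` flag are all made up for this sketch:

```python
def pick_coworker(kind, complex_task=False):
    """Map a question kind to a (tool, model) pair per the selection rules.

    Illustrative only: the 'kind' values and complex_task flag are
    assumptions, not part of the actual skill.
    """
    if kind == "code":
        # Complex architectural code questions go to the heaviest model.
        return ("ask-codex.py",
                "gpt-5.2-codex" if complex_task else "codex-mini-latest")
    if kind == "architecture":
        # Deep reasoning (o3) vs. quick takes (gpt-4.1).
        return ("llm", "o3" if complex_task else "gpt-4.1")
    raise ValueError(f"unknown question kind: {kind!r}")
```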

How to Ask

Two tools are available:

ask-codex.py — for Codex models (Responses API):

```bash
python C:/Users/robert/project/ask-coworker/ask-codex.py "Your question here"
python C:/Users/robert/project/ask-coworker/ask-codex.py -m gpt-5.2-codex "Your question here"
python C:/Users/robert/project/ask-coworker/ask-codex.py -s "You are a TypeScript expert" "Your question here"
```

llm — for chat/reasoning models (on PATH):

```bash
llm -m o3 "Your question here"
llm -m gpt-4.1 "Your question here"
llm -m gpt-4.1 -s "You are a senior backend engineer" "Your question here"
```

Rules for Asking Good Questions

DO:

  1. Include the error message — paste the exact error, not a summary
  2. Include the relevant code snippet — 10-30 lines of context, not entire files
  3. State what you've tried — "I tried X and Y, both failed because Z"
  4. Ask a specific question — "Why might this deadlock?" not "Help"
  5. Include the tech stack — "Node 20, TypeScript, PostgreSQL 16, Express"

DON'T:

  • Paste entire files (token waste, dilutes the question)
  • Ask vague questions ("What's wrong with this code?")
  • Use this as a first resort — try solving it yourself first
  • Blindly trust the response — validate before applying
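One way to keep these rules honest is to assemble the question from required parts, so nothing gets skipped and oversized snippets are rejected. A hypothetical helper (the function and its 30-line cutoff are illustrative, not part of the skill):

```python
def build_question(error, snippet, tried, stack, question):
    """Assemble a co-worker prompt that satisfies the DO rules above.

    Hypothetical helper: raises if any section is missing or the code
    snippet is bloated, mirroring the DO/DON'T guidance.
    """
    if not all([error, snippet, tried, stack, question]):
        raise ValueError(
            "every section is required: error, snippet, tried, stack, question"
        )
    if len(snippet.splitlines()) > 30:
        raise ValueError("snippet too long: keep it to 10-30 lines, not entire files")
    tried_lines = "\n".join(f"- {attempt}" for attempt in tried)
    return (
        "I'm getting this error repeatedly and can't figure out why:\n\n"
        f"ERROR: {error}\n\n"
        f"Relevant code:\n{snippet}\n\n"
        f"What I've tried:\n{tried_lines}\n\n"
        f"Tech stack: {stack}\n"
        f"{question}"
    )
```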

Question Templates

Debugging a persistent error (use Codex)

```bash
python C:/Users/robert/project/ask-coworker/ask-codex.py -s "You are a debugging expert. Be concise and specific." "
I'm getting this error repeatedly and can't figure out why:

ERROR: [paste exact error]

Relevant code:
[10-30 lines of relevant code]

What I've tried:
- [approach 1 and why it failed]
- [approach 2 and why it failed]

Tech stack: [languages, frameworks, versions]
What am I missing?
"
```

Architecture/design decision (use o3)

```bash
llm -m o3 "
I need to decide between these approaches for [problem]:

Option A: [description]
Option B: [description]

Context: [constraints, scale, team size, timeline]
Give pros/cons and a clear recommendation.
"
```

Alternative approach brainstorm

```bash
python C:/Users/robert/project/ask-coworker/ask-codex.py -m gpt-5.2-codex -s "Suggest 3 completely different approaches. Be creative." "
Current approach: [what you're doing]
Problem: [why it's not working]
Constraints: [what must be preserved]
"
```

Interpreting Responses

After receiving a co-worker's response:

  1. Don't blindly apply suggestions — evaluate them critically
  2. Look for the insight, not the exact code — the value is the different perspective
  3. If the suggestion doesn't work, that's fine — it may still have shifted your thinking
  4. Report back to the user — summarize what the co-worker suggested and your assessment:
    • "I asked Codex about the deadlock. It suggested [X]. I think that's worth trying because [Y]."
    • "o3 suggested [approach], but I don't think it applies here because [reason]. However, it did make me realize [insight]."

Checking Available Models

```bash
# List all installed models
llm models list

# Check which keys are configured
llm keys list
```

Troubleshooting

| Issue | Fix |
|---|---|
| "API key not found" | Run `llm keys path` to find keys.json, verify keys exist |
| Timeout on long responses | Add the `--no-stream` flag for `llm`, or increase the timeout in ask-codex.py |
| Want to review past queries | `llm logs list` shows conversation history (`llm` only, not Codex) |
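When calling `llm` from a script rather than interactively, the timeout fix above can be enforced with a hard subprocess timeout. A sketch, assuming `llm` is on PATH; the wrapper functions are illustrative, not part of the skill:

```python
import subprocess


def llm_command(prompt, model="o3", no_stream=True):
    """Build the llm CLI invocation; --no-stream avoids streaming stalls."""
    cmd = ["llm", "-m", model]
    if no_stream:
        cmd.append("--no-stream")
    cmd.append(prompt)
    return cmd


def ask_llm(prompt, model="o3", timeout=120):
    """Run llm with a hard timeout; returns None if the model takes too long.

    Illustrative wrapper, not part of the skill; assumes `llm` is on PATH.
    """
    try:
        result = subprocess.run(
            llm_command(prompt, model),
            capture_output=True,
            text=True,
            timeout=timeout,
            check=True,
        )
        return result.stdout.strip()
    except subprocess.TimeoutExpired:
        return None  # caller decides whether to retry with a longer timeout
```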