
juliaz-multi-agent-optimize

Optimize coordination between Julia's agents — orchestrator, frontend, OpenClaw, bridge, cowork-mcp. Trigger this skill when improving inter-agent communication efficiency, reducing latency, fixing routing issues, or optimizing message flow. Also trigger it for "agents not talking", "message not arriving", "routing broken", "too slow", "bridge bottleneck", or any multi-agent coordination question.

SKILL.md
---
name: juliaz-multi-agent-optimize
description: "Optimize coordination between Julia's agents — orchestrator, frontend, OpenClaw, bridge, cowork-mcp. Trigger when improving inter-agent communication, reducing latency, fixing routing issues, or optimizing the message flow. Also trigger for: 'agents not talking', 'message not arriving', 'routing broken', 'too slow', 'bridge bottleneck', or any multi-agent coordination question."
---

juliaz-multi-agent-optimize — Multi-Agent Coordination

Optimize how Julia's agents work together. Adapted from agent-orchestration-multi-agent-optimize.

Use this skill when

  • Improving coordination between orchestrator, frontend, OpenClaw, or bridge
  • Fixing routing issues (messages not arriving, wrong agent handling request)
  • Reducing latency in the message flow
  • Designing new inter-agent communication paths

Do not use this skill when

  • Optimizing a single agent's prompt or tool (use juliaz-agent-improve or juliaz-tool-builder)
  • Debugging infrastructure (use juliaz-debug)
  • Building a new agent (use juliaz-agent-builder)

juliaz Message Flow Architecture

```
                     ┌─────────────────┐
                     │   Bridge :3001  │
                     │  (message hub)  │
                     └──┬──────────┬───┘
                        │          │
              GET /consume    POST /incoming
              POST /reply     GET /pending-reply
                        │          │
            ┌───────────┴──┐  ┌────┴──────────┐
            │ Orchestrator │  │   OpenClaw    │
            │ (Julia brain)│  │ (Telegram GW) │
            └──────────────┘  └───────────────┘
                     │
              POST /task
                     │
              ┌──────┴──────┐
              │ Cowork-MCP  │
              │ :3003       │
              └─────────────┘
```

Key Principles

1. Minimal Inter-Agent Communication Overhead

Every hop adds latency. The bridge path (POST → queue → poll → process → POST → poll) adds 5-10 seconds. Only route through the bridge when necessary (actions requiring orchestrator tools). Keep conversational responses direct.
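The direct-vs-bridge decision can be sketched as a tiny router. This is a minimal sketch: the keyword heuristic and the `ACTION_HINTS` list are purely illustrative assumptions — the real classifier may well be model-based.

```typescript
type Route = "direct" | "bridge";

// Illustrative keywords suggesting the user wants an action (orchestrator
// tools), not just a conversational answer. Assumed, not the real list.
const ACTION_HINTS = ["schedule", "send", "create", "delete", "run"];

function chooseRoute(message: string): Route {
  const needsTools = ACTION_HINTS.some((hint) =>
    message.toLowerCase().includes(hint),
  );
  // Conversational replies stay direct (1-3s, streaming); only actions pay
  // the 5-10s bridge round trip (POST → queue → poll → process → POST → poll).
  return needsTools ? "bridge" : "direct";
}
```

The point of the sketch is the shape of the decision, not the classifier: anything that does not need orchestrator tools should never enter the queue.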

2. Single Orchestrator Brain

Don't duplicate Julia into multiple independent agents with overlapping tools. The orchestrator is the brain. Other interfaces (frontend, Telegram via OpenClaw) should route action requests to it.

3. Reuse Existing Endpoints

The bridge already has POST /incoming, GET /consume, POST /reply, GET /pending-reply/:chatId. Don't create new protocols — use these.
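The four existing endpoints can be captured as plain request descriptors. The host, port default, and payload shapes below are assumptions for illustration, not the bridge's actual schema:

```typescript
const BRIDGE = "http://localhost:3001"; // assumed host; port per the diagram

interface BridgeRequest {
  method: "GET" | "POST";
  url: string;
  body?: unknown;
}

// OpenClaw (or the frontend) pushes an inbound message onto the queue.
function incoming(chatId: string, text: string): BridgeRequest {
  return { method: "POST", url: `${BRIDGE}/incoming`, body: { chatId, text } };
}

// The orchestrator drains the queue.
function consume(): BridgeRequest {
  return { method: "GET", url: `${BRIDGE}/consume` };
}

// The orchestrator posts its answer for a given chat.
function reply(chatId: string, text: string): BridgeRequest {
  return { method: "POST", url: `${BRIDGE}/reply`, body: { chatId, text } };
}

// OpenClaw or the frontend polls for that answer.
function pendingReply(chatId: string): BridgeRequest {
  return { method: "GET", url: `${BRIDGE}/pending-reply/${chatId}` };
}
```

Any new inter-agent path should compose these four calls rather than introduce a fifth endpoint.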

4. Clear Message Routing via chatId Convention

| chatId Pattern | Source | Handler |
|---|---|---|
| Numeric (e.g., `8519931474`) | Telegram via OpenClaw | Orchestrator processes, replies via bridge |
| `web-<timestamp>` | Frontend dashboard | Orchestrator processes, frontend polls for reply |
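A minimal classifier for this convention might look like the following (names are illustrative; the real handlers live in the orchestrator):

```typescript
type Source = "telegram" | "web" | "unknown";

function classifyChatId(chatId: string): Source {
  if (/^\d+$/.test(chatId)) return "telegram"; // numeric → Telegram via OpenClaw
  if (chatId.startsWith("web-")) return "web"; // web-<timestamp> → frontend dashboard
  return "unknown"; // fail loudly rather than misroute
}
```

Returning an explicit `"unknown"` keeps the patterns non-overlapping: a new interface must claim its own prefix instead of silently falling into an existing bucket.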

Optimization Checklist

When adding a new inter-agent path:

  • Does it reuse existing bridge endpoints?
  • Is the chatId convention clear and non-overlapping?
  • Does the receiver know how to identify the source (username, chatId prefix)?
  • Is there a timeout with a clear error message?
  • Does the existing flow (Telegram → bridge → orchestrator → bridge → Telegram) still work?
  • Is the response consumed after delivery (no stale replies)?
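The timeout and stale-reply items on the checklist can be sketched as a polling helper. The injected fetcher and the 200/204 response convention are assumptions made for the sketch (and to keep it testable without a live bridge):

```typescript
// Injected so the poller can be exercised without a running bridge.
type Fetcher = (url: string) => Promise<{ status: number; reply?: string }>;

async function waitForReply(
  chatId: string,
  fetcher: Fetcher,
  timeoutMs = 45_000, // latency budget: anything past 45s must fail
  intervalMs = 1_000,
): Promise<string> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const res = await fetcher(`http://localhost:3001/pending-reply/${chatId}`);
    if (res.status === 200 && res.reply !== undefined) {
      // Assumes the bridge consumes the reply on delivery, so a second
      // poll never sees a stale answer.
      return res.reply;
    }
    await new Promise((r) => setTimeout(r, intervalMs));
  }
  // Clear error naming the chatId and the likely place to look.
  throw new Error(
    `No reply for chatId ${chatId} within ${timeoutMs / 1000}s; check the orchestrator /consume loop`,
  );
}
```

A caller would pass a thin wrapper around `fetch` as the `fetcher`; tests can pass a stub.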

Latency Budget

| Path | Expected Latency | Acceptable |
|---|---|---|
| Frontend → direct LLM response | 1-3s (streaming) | Yes |
| Frontend → bridge → orchestrator → bridge → frontend | 5-15s | Yes, for actions |
| Telegram → OpenClaw → bridge → orchestrator → bridge → OpenClaw → Telegram | 5-15s | Yes |
| Any path | >45s | No — must time out |

Cost Awareness

| Model | Cost | Used by |
|---|---|---|
| Claude Haiku | Low | Orchestrator (primary) |
| GPT-4o | Medium | Orchestrator (fallback), Frontend (default) |
| Claude Sonnet | Higher | Frontend (selectable), Cowork-MCP delegation |

When the frontend routes a request through the orchestrator, it pays for a second LLM call (the orchestrator processing the request). This is acceptable for actions but wasteful for simple questions — hence the hybrid approach.