Agent Patterns
Orchestrator, fan-out, validation chain, specialist routing, progressive refinement, and watchdog. Six ways to wire sub-agents in Claude Code.
Problem: Sub-agents work, but you keep reaching for the same shape every time. A handful of parallel calls, fingers crossed, manual glue on whatever comes back. There's a better way to organise agent work, and the shape you pick decides whether the output holds up or falls apart.
Six orchestration patterns keep showing up across real Claude Code work. Each suits a different kind of job. Choose badly and tokens leak, files collide, or answers drift. Choose well and a pile of agents can chew through hard work with almost no supervision.
The Orchestrator Pattern
One central thread runs the show and hands work out to specialists. It never touches code itself. Its whole job is planning, delegating, reviewing, and routing.
When to use: Features that cross frontend, backend, and database. Anything where someone needs to hold the whole picture in their head.
How it works in Claude Code: The primary chat becomes the conductor. It reads the spec, drafts a plan, and then spawns specialists through the Task tool. Results come back, get reviewed, and the next move gets decided on the spot.
You are the orchestrator. For this feature request, create a plan that:
1. Breaks the work into domain-specific tasks
2. Identifies dependencies between tasks
3. Dispatches each task to a sub-agent with explicit file scope
4. Reviews outputs before marking complete
Do NOT write implementation code yourself. Coordinate only.
CLAUDE.md is the natural home for this wiring. Routing rules live in that file, and the conductor treats them like a contract. In a solid setup the whole task dependency graph gets worked out first, before any sub-agent writes a single line.
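That dependency graph can be sketched as plain data. A minimal sketch using Python's standard-library `graphlib`, with hypothetical task names standing in for the orchestrator's real plan:

```python
from graphlib import TopologicalSorter

# Hypothetical task graph: each task maps to the tasks it depends on.
# The orchestrator resolves this ordering before any sub-agent runs.
task_deps = {
    "design-schema": set(),
    "build-api": {"design-schema"},
    "build-ui": {"design-schema"},
    "integration-tests": {"build-api", "build-ui"},
}

sorter = TopologicalSorter(task_deps)
dispatch_order = list(sorter.static_order())
# Tasks with no unmet dependencies come first; dependents follow.
```

Tasks with no edge between them ("build-api" and "build-ui" here) are exactly the ones the conductor can hand out in parallel.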
When NOT to use: Small jobs that a single agent can finish in one pass. Adding a utility, fixing a typo. The bookkeeping of orchestration is more expensive than any value it brings back.
The Fan-Out / Fan-In Pattern
Many agents go out in parallel. All of them report back. One thread pulls the results into a single answer.
When to use: Research sweeps. Multi-file analysis. Anything where each agent works on its own inputs with no overlap. Code reviews across independent modules. Gathering intel from different corners of a codebase before any decision gets made.
How it works in Claude Code: Kick off parallel sub-agents by issuing several Task tool calls in the same message. Each one opens its own files or chases its own concern. Once every agent returns, the central thread puts the pieces together.
Complete these 4 tasks using parallel sub-agents:
1. Read src/api/ and list all endpoints missing input validation
2. Read src/auth/ and identify any hardcoded secrets or weak patterns
3. Read src/db/ and check for missing indexes on frequently queried columns
4. Read src/utils/ and flag any functions with no error handling
After all agents report back, synthesize findings into a prioritized action list.
The fan-out stage is quick because agents share no state. The fan-in stage is where the money is: the orchestrator catches links between findings that no solo agent could see. A weak auth path plus a gap in input validation may add up to a live exploit, even if neither report reads that way on its own.
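The shape of fan-out then fan-in is the same as any map-then-merge. A minimal sketch with `concurrent.futures`, where `audit` is a hypothetical stand-in for a sub-agent call:

```python
from concurrent.futures import ThreadPoolExecutor

def audit(scope: str) -> dict:
    # A real agent would read files under `scope`; here we fake a report.
    return {"scope": scope, "findings": [f"issue in {scope}"]}

scopes = ["src/api/", "src/auth/", "src/db/", "src/utils/"]

# Fan-out: all four run concurrently, sharing no state.
with ThreadPoolExecutor() as pool:
    reports = list(pool.map(audit, scopes))

# Fan-in: one thread merges the independent reports into a single list.
action_list = [f for r in reports for f in r["findings"]]
```

The merge step is deliberately sequential: one place where cross-report links can be spotted.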
When NOT to use: Any job where several agents need to edit overlapping files. Concurrent writes on shared code walk straight into merge conflicts. When the work isn't genuinely independent, a sequential chain is the right call.
The Validation Chain Pattern
A builder agent writes the code. A different validator agent inspects it. Their roles never bleed into each other.
When to use: Production changes. Security-sensitive work. Anywhere a wrong answer costs real money. Reach for this pattern when shipping a bug is simply not on the table.
How it works in Claude Code: Two tasks wired by a dependency. The builder ships code first. Once that's done the validator opens the diff, runs the tests, and reports what it finds without modifying a single file.
TaskCreate(
subject="Build payment webhook handler",
description="Create Stripe webhook handler in src/api/webhooks/stripe.ts.
Handle checkout.session.completed, payment_intent.failed events.
Verify webhook signatures. Include error handling."
)
TaskCreate(
subject="Validate payment webhook handler",
description="Read src/api/webhooks/stripe.ts. Verify:
- Webhook signature verification exists
- Both event types handled with proper responses
- Error handling covers malformed payloads
- No hardcoded secrets
Report issues only. Do NOT modify any files."
)
TaskUpdate(taskId="2", addBlockedBy=["1"])
The addBlockedBy link is what keeps the validator honest. It holds until the builder is done, and the validator walks in cold. No shared assumptions. No inherited blind spots. That's why the check actually catches things.
If the validator spots a problem, it opens a fix task routed to a builder. Then a new validator queues behind that fix. The loop tightens until the output is right.
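The loop itself is simple to state in code. A minimal sketch with hypothetical `builder` and `validator` stand-ins and a bounded retry count, so the cycle cannot spin forever:

```python
def builder(spec: str, feedback: list[str]) -> str:
    # A real builder agent writes code; this one just records each fix.
    return spec + "".join(f" +fix:{f}" for f in feedback)

def validator(artifact: str) -> list[str]:
    # A real validator reads the diff and runs tests; this one checks
    # for a required marker and reports the gap when it is missing.
    return [] if "+fix:signature-check" in artifact else ["signature-check"]

artifact = builder("webhook handler", [])
for _ in range(3):  # bounded retries: fail loudly instead of looping
    issues = validator(artifact)
    if not issues:
        break
    artifact = builder(artifact, issues)
```

The cap on retries matters in practice: a builder and validator that disagree forever burn tokens with nothing to show for it.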
When NOT to use: Throwaway prototyping where speed outranks correctness. A dedicated validator roughly doubles compute per task. When the code is scratch, one agent gets you there faster, for fewer tokens.
The Specialist Routing Pattern
Route each task to the domain expert that knows the job. Frontend work lands with a frontend specialist. Migrations go to the database expert. Each agent holds its own conventions, instructions, and tool limits.
When to use: Big repos with several clear domains. Teams with settled conventions per layer. Anywhere generic instructions keep giving you mushy, inconsistent output across the codebase.
How it works in Claude Code: Stand up specialist agents in .claude/agents/, or pin routing rules into your CLAUDE.md. The orchestrator inspects the task, tags the domain, and hands it to the right expert.
<!-- In CLAUDE.md -->
## Agent Routing Table
| Task Domain | Route To | File Scope |
| ----------- | ------------------- | -------------------- |
| React/UI | frontend-specialist | src/components/ |
| API routes | backend-engineer | src/api/, src/lib/ |
| Database | database-specialist | src/db/, migrations/ |
| Security | security-auditor | Any (read-only) |
| Tests | quality-engineer | tests/, __tests__/ |
Every specialist defined in .claude/agents/ inherits your CLAUDE.md automatically. Coding standards come along for the ride. The agent's own file adds the rest: framework picks, naming rules, error handling shaped to that layer.
Growth is cheap. New domain means a new agent file and a new row in the routing table. Nothing gets rewritten. The custom agents guide walks through defining specialists with YAML frontmatter.
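The routing table is just data, which is why growth stays cheap. A minimal sketch of the lookup, using the specialist names from the table above and a hypothetical generalist fallback:

```python
# Domain -> (agent name, file scope). "**" marks an unrestricted scope.
ROUTES = {
    "react":    ("frontend-specialist", ["src/components/"]),
    "api":      ("backend-engineer",    ["src/api/", "src/lib/"]),
    "database": ("database-specialist", ["src/db/", "migrations/"]),
    "security": ("security-auditor",    ["**"]),  # read-only, any file
    "tests":    ("quality-engineer",    ["tests/", "__tests__/"]),
}

def route(domain: str) -> tuple[str, list[str]]:
    # Unknown domains fall back to a generalist rather than failing.
    return ROUTES.get(domain, ("generalist", ["**"]))
```

Adding a domain is one new entry; nothing else changes.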
When NOT to use: Tiny codebases where a single agent already has enough context on the whole thing. Routing overhead buys nothing when there's only one destination anyway.
The Progressive Refinement Pattern
Begin with a loose draft. Send it through several rounds of edit. Each round pays attention to a single quality axis.
When to use: Content work, tangled architecture, any job where getting it right in one shot is unrealistic. Works cleanly on blog posts, API schemas, or config files that need to satisfy several constraints at once.
How it works in Claude Code: Queue agents in a sequence where each one reads the previous output and sharpens one aspect of it.
Phase 1 - Draft: "Generate the initial API schema for a task management
system. Include all entities, relationships, and basic validation rules."
Phase 2 - Security review: "Review this schema. Add authentication
requirements, permission checks, and input sanitization rules.
Don't change the core structure."
Phase 3 - Performance review: "Review the schema for performance.
Add indexes, identify N+1 query risks, suggest denormalization
where read performance matters."
Phase 4 - Final validation: "Verify the schema is consistent.
Check that all referenced entities exist, foreign keys are valid,
and naming conventions are uniform."
Each phase has a narrow focus. The draft agent cares about completeness. Security adds guardrails. Performance tunes. Validation checks coherence. No agent gets asked to juggle every concern at once.
It mirrors how real developers work. Code never lands production-ready on the first pass. You draft, you fix correctness, you tune performance, you polish. This pattern hands each pass to an agent shaped for it.
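Structurally, the pattern is a pipeline: each phase takes the previous output and improves one axis. A minimal sketch, with the string transforms standing in for real agent calls:

```python
def draft(_: str) -> str:
    return "schema: tasks, users"

def security_pass(doc: str) -> str:
    return doc + " + auth rules"

def performance_pass(doc: str) -> str:
    return doc + " + indexes"

def validate(doc: str) -> str:
    # Final phase checks coherence instead of adding anything new.
    assert "auth rules" in doc and "indexes" in doc
    return doc

phases = [draft, security_pass, performance_pass, validate]
result = ""
for phase in phases:  # each phase reads the previous phase's output
    result = phase(result)
```

Reordering or dropping a phase is a one-line change, which makes the pipeline easy to tune per job.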
When NOT to use: Any job that has to land in a single shot, where layered improvement isn't on offer. Also pointless on simple work where the first draft already clears the bar.
The Watchdog Pattern
A background agent sits and watches for a condition. When it fires, the watchdog reports or acts. It runs next to your main work, never in the way.
When to use: Long sessions where drift creeps in. Keeping an eye on context health, spotting regressions during a refactor, or watching builds go red while your attention is somewhere else.
How it works in Claude Code: Background a watchdog with Ctrl+B and get back to what you were doing. The agent checks its trigger on a cadence. When something trips, results surface straight into your task list.
Background task: Monitor the test suite while I refactor the auth module.
Every time I complete a change, run the test suite for src/auth/.
If any test fails, immediately create a task with:
- Which test failed
- The assertion error
- Which file I likely broke based on the test name
A good example lives in the context recovery hook. It's a watchdog wired into the infrastructure itself, tracking context usage and firing recovery the moment the window starts to fill. Agent-health and progress hooks work the same way.
Watchdogs pay off most when paired with async workflows, where backgrounded agents run on their own and pop results in front of you when you're ready.
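The mechanics reduce to a background loop: sleep, check a trigger, report when it trips. A minimal sketch with `threading`, where `tests_failing` is a hypothetical stand-in for a real check such as running the suite for src/auth/:

```python
import threading

reports: list[str] = []
state = {"tests_failing": False}
stop = threading.Event()

def watchdog(interval: float) -> None:
    while not stop.wait(interval):  # sleep, wake, check, repeat
        if state["tests_failing"]:
            reports.append("test failure in src/auth/")
            stop.set()  # fire once, then stand down

t = threading.Thread(target=watchdog, args=(0.01,), daemon=True)
t.start()
state["tests_failing"] = True  # the main work trips the condition
t.join(timeout=1)
```

The polling interval is the knob from the "when not to use" trade-off: the shorter it is, the more the watching itself costs.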
When NOT to use: Short sessions where the cost of watching is more than the value of the thing being watched. A two-minute task plus a watchdog polling every thirty seconds is a wash.
Combining Patterns
Real projects almost never stay inside a single pattern. A non-trivial feature often ends up wired like this:
- Orchestrator reads the spec and drafts the plan
- Specialist routing hands tasks to domain experts
- Fan-out runs the independent pieces in parallel
- Validation chains vet each specialist's work
- Progressive refinement polishes the merged result
- Watchdog keeps eyes on the test suite the whole time
The skill isn't memorising the list. It's reading the task in front of you and matching it to the right shape. Reach for the plainest pattern that can carry the load. Add more structure only when that one breaks down.
Next steps:
- Agent Fundamentals, for how sub-agents and slash commands sit together
- Sub-Agent Best Practices, for parallel versus sequential routing calls
- Team Orchestration, for the complete builder and validator loop
- Task Distribution, for keeping multi-agent work organised
- Custom Agents, for writing your own specialist definitions
Stop configuring. Start building.