Build This Now
speedy_devv · koen_salo

Agent Teams Prompt Templates

Ten tested Agent Teams prompts for Claude Code. Code review, debugging, feature builds, architecture calls, and campaign research. Paste and go.

Problem: Agent Teams is enabled, and "spin up a team to help on my project" gives you a mess. The gap between a tight team and a token fire comes down to how the prompt is shaped. A productive team has specific roles, clear file boundaries, and a defined finish line. A bad one has three reviewers doing overlapping work.

Quick Win: Try the parallel code review prompt first (pattern #1 below). It is the most broadly useful Agent Teams pattern and runs on any codebase. Three reviewers, three lenses, one synthesised review. You will see the output in minutes, and it catches things a single reviewer would have missed.

This is a companion to the Agent Teams overview. Start there if you have not set up your first team. For controls and configuration, jump to Advanced Controls. The ten prompts below cover the workflows where parallel execution with active coordination beats serial work.

Code Team Patterns

1. Parallel Code Review

Create an agent team to review PR #142. Spawn three reviewers:
- One focused on security implications
- One checking performance impact
- One validating test coverage
Have them each review and report findings. Use delegate mode so the
lead synthesizes a final review without doing its own analysis.

Why it works: One reviewer drifts toward one kind of issue at a time. Splitting the criteria into independent domains means security, performance, and test coverage each get a full pass at once. The lead stitches everything into a review that catches problems no single reviewer would. Three-reviewer teams consistently surface issues that single-pass reviews drop. Expect roughly 2-3x the token cost of a single-session review. Worth it for the coverage.

Delegate mode matters. Without it, the lead tends to run its own review and awkwardly mash it into the teammates' results. With delegate mode on, the lead focuses entirely on coordination and synthesis.

2. Debugging with Competing Hypotheses

Users report the app exits after one message instead of staying connected.
Spawn 5 agent teammates to investigate different hypotheses. Have them talk
to each other to try to disprove each other's theories, like a scientific
debate. Update the findings doc with whatever consensus emerges.

Why it works: A debate structure beats anchoring bias. Sequential investigation gets stuck on the first plausible theory and ends up trying to confirm it. Multiple independent investigators actively trying to disprove each other means the theory that survives is closer to the real root cause.

This pattern also surfaces unexpected links. When teammate #3 finds a memory leak and teammate #1 was chasing timeout behaviour, they can connect the dots directly. No lead in the middle. That direct channel is what separates Agent Teams from subagent patterns.

3. Full-Stack Feature Implementation

Create an agent team to implement the user notifications system.
Spawn four teammates:
- Backend: Create the notification service, database schema, and API endpoints
- Frontend: Build the notification bell component, dropdown, and read/unread states
- Tests: Write integration tests for the full notification flow
- Docs: Update the API documentation and add usage examples

Assign each teammate clear file boundaries. Backend owns src/api/notifications/
and src/db/migrations/. Frontend owns src/components/notifications/.
Tests own tests/notifications/. No file overlap.

Why it works: File-level boundaries kill merge conflicts. Each teammate knows which directories they own, and the shared task list keeps everyone on the same page. The moment the backend teammate lands the API contract, the frontend teammate picks it up. They're both watching the same list.

Without explicit boundaries, two teammates will edit the same file and crash into each other. Directory-level ownership is the single most important detail in an implementation prompt. This pattern maps directly to the wave execution model in the workflow guide, where upstream contracts feed into parallel agent spawn prompts.
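Directory-level ownership is also easy to sanity-check before you paste the prompt. A minimal sketch (the ownership map mirrors the prompt above; adjust the paths to your own repo) that flags any teammate claims where one directory nests inside another:

```python
from pathlib import PurePosixPath

# Ownership map from the prompt above (illustrative; adjust to your repo).
OWNERSHIP = {
    "backend": ["src/api/notifications/", "src/db/migrations/"],
    "frontend": ["src/components/notifications/"],
    "tests": ["tests/notifications/"],
}

def overlapping_claims(ownership):
    """Return pairs of teammates whose claimed directories collide or nest."""
    claims = [
        (teammate, PurePosixPath(d))
        for teammate, dirs in ownership.items()
        for d in dirs
    ]
    conflicts = []
    for i, (owner_a, dir_a) in enumerate(claims):
        for owner_b, dir_b in claims[i + 1:]:
            if owner_a == owner_b:
                continue
            # Conflict: identical paths, or one directory contains the other.
            if dir_a == dir_b or dir_a in dir_b.parents or dir_b in dir_a.parents:
                conflicts.append((owner_a, str(dir_a), owner_b, str(dir_b)))
    return conflicts

print(overlapping_claims(OWNERSHIP))  # [] means the boundaries are clean
```

An empty list means no teammate can stomp on another's files; anything else is a boundary to fix before spawning the team.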

4. Architecture Decision Record

Create an agent team to evaluate database options for our new analytics feature.
Spawn three teammates, each advocating for a different approach:
- Teammate 1: Argue for PostgreSQL with materialized views
- Teammate 2: Argue for ClickHouse as a dedicated analytics store
- Teammate 3: Argue for keeping everything in the existing MongoDB

Have them challenge each other's arguments. Focus on: query performance
at 10M+ rows, operational complexity, migration effort, and cost.
The lead should synthesize a decision document with the strongest arguments
from each side.

Why it works: Deliberation beats a single agent weighing options on its own. Each teammate commits to one position and goes looking for cracks in the others. The lead writes up only the arguments that survive the challenge.

This one is especially useful for decisions where every option has real trade-offs and no obvious winner. A single session tends to pick one early and rationalise it into the answer. The adversarial structure forces genuine evaluation of every alternative.

5. Bottleneck Analysis

Create an agent team to identify performance bottlenecks in the application.
Spawn three teammates:
- One profiling API response times across all endpoints
- One analyzing database query performance and indexing
- One reviewing frontend bundle size and rendering performance

Have them share findings when they discover something that affects
another teammate's domain (e.g., slow API caused by missing DB index).

Why it works: Cross-domain communication is where Agent Teams beat subagents. When the database analyst spots a missing index that explains the API teammate's slow endpoint, they pass it on directly. Subagents can't do that; they only report back to the main session and never talk to each other.

A performance hunt also benefits from the shared task list. As each teammate logs issues with severity ratings, the lead watches the picture form in real time and redirects effort toward the worst offenders.
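The triage the lead does over those severity ratings amounts to a simple sort. A hypothetical sketch (the findings and severity scale are made up for illustration, not Claude Code's task-list schema):

```python
# Rank order for a hypothetical severity scale: lower number = worse.
SEVERITY_RANK = {"critical": 0, "high": 1, "medium": 2, "low": 3}

# Illustrative findings a team might log to its shared task list.
findings = [
    {"issue": "unbounded re-render on dashboard", "severity": "high"},
    {"issue": "missing index on orders.user_id",  "severity": "critical"},
    {"issue": "oversized vendor bundle",          "severity": "medium"},
]

def worst_first(items):
    """Sort findings so the lead sees the worst offenders at the top."""
    return sorted(items, key=lambda f: SEVERITY_RANK[f["severity"]])

for f in worst_first(findings):
    print(f["severity"], "-", f["issue"])
```

The missing index surfaces first, which is exactly the signal the lead needs to redirect the API teammate's effort.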

6. Inventory Classification

Create an agent team to classify our product catalog. We have 500 items
that need categorization, tagging, and description updates.
Spawn 4 teammates, each handling a segment:
- Teammate 1: Items 1-125
- Teammate 2: Items 126-250
- Teammate 3: Items 251-375
- Teammate 4: Items 376-500

Use the classification schema in docs/taxonomy.md. Have teammates
flag edge cases for the lead to review.

Why it works: Data-parallel work scales linearly with teammates. Each works through their slice independently, flagging ambiguous items for a human pass. Four teammates processing 125 items each lands roughly 4x faster than one session processing 500.

The same pattern fits any bulk operation. Tagging support tickets, categorising doc pages, normalising database records, chewing through CSV files. The key is splitting by data boundaries, not by function.
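The item-range split in the prompt above is simple arithmetic, and it generalises to counts that don't divide evenly. A sketch, assuming contiguously numbered items:

```python
def split_into_segments(items, teammates):
    """Split a work list into near-equal contiguous segments, one per teammate."""
    base, extra = divmod(len(items), teammates)
    segments, start = [], 0
    for i in range(teammates):
        size = base + (1 if i < extra else 0)  # front-load any remainder
        segments.append(items[start:start + size])
        start += size
    return segments

catalog = list(range(1, 501))  # stand-in for 500 product IDs
segments = split_into_segments(catalog, 4)
print([(s[0], s[-1]) for s in segments])  # [(1, 125), (126, 250), (251, 375), (376, 500)]
```

Generate the ranges first, then paste them into the prompt, and every item lands in exactly one teammate's slice.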

Non-Code Team Patterns

Agent Teams are not only for code. Anything that benefits from parallel perspectives and tight coordination is on the table. The prompts below cover research, content, and campaign strategy.

7. Campaign Research Sprint

Create an agent team to research the launch strategy for [product].
Spawn three teammates:
- Competitor analyst: study competitor ad copy, positioning, and pricing
- Voice of customer researcher: mine reviews, Reddit threads, and forums
  for pain points and language customers actually use
- Positioning stress-tester: take findings from both teammates and
  pressure-test our current positioning against what they discover

Have them share findings and challenge each other. The lead synthesizes
a strategy document with positioning recommendations.

Why it works: The competitor researcher finds market gaps. The voice-of-customer teammate checks whether real buyers actually care about those gaps. The positioning stress-tester takes both inputs and tries to break your message with them. Three lenses, one synthesis, each teammate's output feeding the others.

Compare this to three separate research sessions. You'd end up with three independent reports and then spend time cross-referencing them by hand. Agent Teams do the cross-referencing automatically through inter-agent messaging.

8. Landing Page Build with Adversarial Review

Create an agent team to build the landing page for [offer].
Spawn three teammates:
- Copywriter: develop messaging, headlines, and body copy
- CRO specialist: design conversion structure, CTA placement, and flow
- Skeptical buyer: review everything as a resistant prospect, flag
  weak claims, missing proof, and friction points

Require plan approval before any implementation.

Why it works: Plan approval catches bad directions before they burn cycles. The adversarial reviewer finds holes the builder-focused teammates glide past. Real buyers are skeptical. Your team should be too.

Plan approval matters most here because landing page copy is expensive to rewrite. Catching a weak value proposition at the outline stage takes minutes. Catching it after a full build takes hours.

9. Ad Creative Exploration

Spawn 4 teammates to explore different hook angles for [product].
Each teammate develops one direction with headline variations,
supporting copy, and a rationale for why the angle works.
Have them debate which direction is strongest.
Update findings doc with consensus and runner-up options.

Why it works: One agent exploring alone anchors on the first decent idea. Four agents actively trying to outperform each other produce battle-tested creative. The debate structure means the winning angle survived a real challenge, not a single session's internal monologue.

This pattern produces angles no single session would have explored. When teammate #2 pushes back on teammate #1's approach, teammate #1 often refines their angle into something stronger rather than dropping it. Competitive pressure raises the quality floor.

10. Content Production Pipeline

Create a team for this week's content calendar.
Spawn three teammates:
- Researcher: identify search intent gaps and competitive opportunities
- Writer: draft content based on research findings
- Quality reviewer: run each piece through clarity, proof, and SEO checks

Chain tasks so the researcher finishes before the writer starts,
and the reviewer checks each piece before marking it complete.

Why it works: Parallel research and sequential quality gates. The researcher and writer can overlap on different pieces while the reviewer catches issues before anything ships. Built-in QA without a separate review process.

Task chaining is the key detail. Without it, all three teammates start at the same time and the writer drafts blind without research to draw from. Explicit task dependencies through the shared task list enforce the right execution order. For more on chaining tasks across agents, see async workflows.
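The execution order that task chaining enforces is, underneath, a topological sort over dependencies. An illustrative sketch using Kahn's algorithm (the task names are hypothetical, not Claude Code's internal schema):

```python
from collections import deque

# Hypothetical task list for one content piece; names are illustrative.
TASKS = {
    "research": [],           # researcher starts immediately
    "draft": ["research"],    # writer waits for research
    "review": ["draft"],      # reviewer gates completion
}

def execution_order(tasks):
    """Kahn's algorithm: release a task only once its dependencies finish."""
    remaining = {name: set(deps) for name, deps in tasks.items()}
    ready = deque(sorted(n for n, d in remaining.items() if not d))
    order = []
    while ready:
        task = ready.popleft()
        order.append(task)
        for name, deps in remaining.items():
            if task in deps:
                deps.remove(task)
                if not deps:
                    ready.append(name)
    if len(order) != len(tasks):
        raise ValueError("cycle detected in task dependencies")
    return order

print(execution_order(TASKS))  # ['research', 'draft', 'review']
```

Without the dependency edges, all three names would be "ready" at once, which is precisely the writer-drafting-blind failure mode described above.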

A Three-Week On-Ramp

New to Agent Teams? Start simple and build up. Jumping straight into a five-teammate implementation prompt is a recipe for confusion. This three-week progression builds intuition for when teams add value and when they add overhead.

Week 1: Research and Review

Pick a PR that needs review. Enable Agent Teams, then run:

Create an agent team to review PR #142. Spawn three reviewers:
- One focused on security implications
- One checking performance impact
- One validating test coverage
Have them each review and report findings.

Three reviewers, three lenses, one review. You will watch teammates work through the task list, trade findings, and deliver results. Low risk, high learning. Worst case, you get an incomplete review you finish manually.

Week 2: Debugging with Debate

Grab a bug report and run the competing hypotheses pattern:

Users report intermittent 500 errors on the checkout endpoint.
Spawn 3 teammates to investigate different hypotheses:
- One checking database connection pooling
- One investigating race conditions in the payment flow
- One analyzing server resource limits
Have them share findings and challenge each other's theories.

This teaches you how inter-agent communication actually works in practice. Watch how teammates share evidence, how they push back on weak theories, and how consensus forms. The shared task list is where most of this coordination becomes visible.

Week 3: Implementation

Once the coordination patterns feel natural, try a feature build with clear file boundaries:

Create an agent team to build the webhook system.
Assign directory-level ownership to prevent conflicts.
Use delegate mode for the lead.

By week three you will have a feel for when teams pay for themselves and when a single session or subagent approach is the better call. Most developers find that teams work best for tasks needing three or more independent work streams with at least some cross-domain communication.

What Actually Works

After dozens of Agent Teams sessions, these patterns hold up across every workflow above:

  • Be specific about roles: "one on security, one on performance" beats "reviewers." Vague roles produce vague work.
  • Define file boundaries: Directory-level ownership kills merge conflicts. Non-negotiable for implementation tasks.
  • Include success criteria: "Report findings" or "update the decision doc" gives each teammate a finish line.
  • Use delegate mode for pure coordination: Keeps the lead from doing the work itself. The lead's job is synthesis, not production.
  • Require plan approval for risky work: Catches bad directions before they burn tokens. Critical for creative and implementation tasks.
  • Let teammates argue: Friction beats agreement. Debate patterns consistently outperform consensus-seeking ones.
  • Keep team size to 3-5: More teammates means more coordination overhead and higher token costs. Past five, communication volume eats the parallelism gain.
  • Match the pattern to the task: Data-parallel work (classification, processing) splits by data boundaries. Functional work (feature implementation) splits by domain. Evaluative work (architecture decisions, creative) splits by perspective.
  • Speed up the lead with fast mode: Turn fast mode on for the lead for snappier coordination while teammates run at standard speed to keep costs down.
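The 3-5 cap follows from simple combinatorics. A rough back-of-envelope sketch, not measured Claude Code behaviour: direct teammate-to-teammate channels grow quadratically while parallel work streams grow only linearly.

```python
def pairwise_channels(teammates):
    """Potential direct teammate-to-teammate channels: n choose 2."""
    return teammates * (teammates - 1) // 2

for n in (3, 5, 8):
    print(f"{n} teammates -> {pairwise_channels(n)} channels")
# 3 -> 3, 5 -> 10, 8 -> 28. Work streams scale with n;
# coordination chatter scales with n squared.
```

At three teammates the chatter is negligible; at eight, coordination messages can outnumber the work itself.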

For best practices, troubleshooting, and known limitations, see Agent Teams Best Practices. For display modes, token cost management, and quality gate hooks, see Advanced Controls.

These prompts work as-is for any Claude Code user with Agent Teams enabled. Start with the code review prompt this week. The overhead is low, and every prompt above is a tested starting point for a workflow you already run.

More in this guide

  • Agent Fundamentals
    Five ways to build specialized agents in Claude Code, from sub-agents to .claude/agents/ definitions to perspective prompts.
  • Agent Patterns
    Orchestrator, fan-out, validation chain, specialist routing, progressive refinement, and watchdog. Six ways to wire sub-agents in Claude Code.
  • Agent Teams Best Practices
    Battle-tested patterns for Claude Code agent teams. Troubleshooting, limitations, plan mode quirks, and fixes shipped from v2.1.33 through v2.1.45.
  • Agent Teams Controls
    Stop your agent team lead from grabbing implementation work. Configure delegate mode, plan approval, hooks, and CLAUDE.md for teams.
  • Agent Teams Workflow
    The full Claude Code agent teams workflow. Structured planning, contract chains, and wave execution that ships production code from parallel agents.

Stop configuring. Start building.

SaaS builder templates with AI orchestration.

Get Build This Now
