Ten tested Agent Teams prompts for Claude Code. Parallel code review, debugging, feature builds, architecture calls, and campaign research. Paste and go.
Problem: Agent Teams is enabled, and "spin up a team to help on my project" gives you a mess. The gap between a tight team and a token fire comes down to how the prompt is shaped. A productive team has specific roles, clear file boundaries, and a defined finish line. A bad one has three reviewers doing overlapping work.
Quick Win: Try the parallel code review prompt first (pattern #1 below). It is the most broadly useful Agent Teams pattern and runs on any codebase. Three reviewers, three lenses, one synthesised review. You will see output in minutes, and it will catch issues a single reviewer would miss.
This is a companion to the Agent Teams overview. Start there if you have not set up your first team. For controls and configuration, jump to Advanced Controls. The ten prompts below cover the workflows where parallel execution with active coordination beats serial work.
Create an agent team to review PR #142. Spawn three reviewers:
- One focused on security implications
- One checking performance impact
- One validating test coverage
Have them each review and report findings. Use delegate mode so the lead synthesizes a final review without doing its own analysis.
Why it works: One reviewer drifts toward one kind of issue at a time. Splitting the criteria into independent domains means security, performance, and test coverage each get a full pass at once. The lead stitches everything into a review that catches problems no single reviewer would. Three-reviewer teams consistently surface issues that single-pass reviews drop. Expect roughly 2-3x the token cost of a single-session review. Worth it for the coverage.
Delegate mode matters. Without it, the lead tends to run its own review and awkwardly mash it into the teammates' results. With delegate mode on, the lead focuses entirely on coordination and synthesis.
Users report the app exits after one message instead of staying connected.
Spawn 5 agent teammates to investigate different hypotheses. Have them talk to each other to try to disprove each other's theories, like a scientific debate. Update the findings doc with whatever consensus emerges.
Why it works: A debate structure beats anchoring bias. Sequential investigation gets stuck on the first plausible theory and ends up trying to confirm it. Multiple independent investigators actively trying to disprove each other means the theory that survives is closer to the real root cause.
This pattern also surfaces unexpected links. When teammate #3 finds a memory leak and teammate #1 was chasing timeout behaviour, they can connect the dots directly. No lead in the middle. That direct channel is what separates Agent Teams from subagent patterns.
Create an agent team to implement the user notifications system.
Spawn four teammates:
- Backend: Create the notification service, database schema, and API endpoints
- Frontend: Build the notification bell component, dropdown, and read/unread states
- Tests: Write integration tests for the full notification flow
- Docs: Update the API documentation and add usage examples
Assign each teammate clear file boundaries. Backend owns src/api/notifications/ and src/db/migrations/. Frontend owns src/components/notifications/. Tests own tests/notifications/. No file overlap.
Why it works: File-level boundaries kill merge conflicts. Each teammate knows which directories they own, and the shared task list keeps everyone on the same page. The moment the backend teammate lands the API contract, the frontend teammate picks it up. They're both watching the same list.
Without explicit boundaries, two teammates will edit the same file and crash into each other. Directory-level ownership is the single most important detail in an implementation prompt. This pattern maps directly to the wave execution model in the workflow guide, where upstream contracts feed into parallel agent spawn prompts.
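Before handing boundaries to a team, it is worth confirming that no two claimed directories nest inside each other. A minimal Python sketch of that sanity check; the `OWNERSHIP` map mirrors the prompt above, and the function name is illustrative, not part of Claude Code:

```python
from pathlib import PurePosixPath

# Illustrative ownership map mirroring the implementation prompt above.
OWNERSHIP = {
    "backend": ["src/api/notifications/", "src/db/migrations/"],
    "frontend": ["src/components/notifications/"],
    "tests": ["tests/notifications/"],
}

def overlapping_claims(ownership):
    """Return pairs of teammates whose claimed directories are equal or nested."""
    claims = [(who, PurePosixPath(d)) for who, dirs in ownership.items() for d in dirs]
    conflicts = []
    for i, (a, da) in enumerate(claims):
        for b, db in claims[i + 1:]:
            if a != b and (da == db or da in db.parents or db in da.parents):
                conflicts.append((a, str(da), b, str(db)))
    return conflicts

assert overlapping_claims(OWNERSHIP) == []  # the prompt's boundaries are clean
```

If the check returns anything, tighten the directories in the prompt before spawning the team.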
Create an agent team to evaluate database options for our new analytics feature.
Spawn three teammates, each advocating for a different approach:
- Teammate 1: Argue for PostgreSQL with materialized views
- Teammate 2: Argue for ClickHouse as a dedicated analytics store
- Teammate 3: Argue for keeping everything in the existing MongoDB
Have them challenge each other's arguments. Focus on: query performance at 10M+ rows, operational complexity, migration effort, and cost.
The lead should synthesize a decision document with the strongest arguments from each side.
Why it works: Deliberation beats a single agent weighing options on its own. Each teammate commits to one position and goes looking for cracks in the others. The lead writes up only the arguments that survive the challenge.
This one is especially useful for decisions where every option has real trade-offs and no obvious winner. A single session tends to pick one early and rationalise it into the answer. The adversarial structure forces genuine evaluation of every alternative.
Create an agent team to identify performance bottlenecks in the application.
Spawn three teammates:
- One profiling API response times across all endpoints
- One analyzing database query performance and indexing
- One reviewing frontend bundle size and rendering performance
Have them share findings when they discover something that affects another teammate's domain (e.g., slow API caused by missing DB index).
Why it works: Cross-domain communication is where Agent Teams beat subagents. When the database analyst spots a missing index that explains the API teammate's slow endpoint, they pass it on directly. Subagents can't do that: they only report back to the main session and never talk to each other.
A performance hunt also benefits from the shared task list. As each teammate logs issues with severity ratings, the lead watches the picture form in real time and redirects effort toward the worst offenders.
Create an agent team to classify our product catalog. We have 500 items that need categorization, tagging, and description updates.
Spawn 4 teammates, each handling a segment:
- Teammate 1: Items 1-125
- Teammate 2: Items 126-250
- Teammate 3: Items 251-375
- Teammate 4: Items 376-500
Use the classification schema in docs/taxonomy.md. Have teammates flag edge cases for the lead to review.
Why it works: Data-parallel work scales linearly with teammates. Each works through their slice independently, flagging ambiguous items for a human pass. Four teammates processing 125 items each lands roughly 4x faster than one session processing 500.
The same pattern fits any bulk operation. Tagging support tickets, categorising doc pages, normalising database records, chewing through CSV files. The key is splitting by data boundaries, not by function.
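The segment arithmetic is easy to script when you are writing one of these prompts. A minimal sketch that splits N items into contiguous, near-equal ranges; nothing here assumes anything about Agent Teams itself:

```python
def split_segments(n_items, n_teammates):
    """Split items 1..n_items into contiguous, near-equal (start, end) ranges."""
    base, extra = divmod(n_items, n_teammates)
    segments, start = [], 1
    for i in range(n_teammates):
        size = base + (1 if i < extra else 0)  # spread any remainder evenly
        segments.append((start, start + size - 1))
        start += size
    return segments

# 500 items across 4 teammates -> the ranges used in the prompt above.
print(split_segments(500, 4))  # [(1, 125), (126, 250), (251, 375), (376, 500)]
```

Paste the resulting ranges straight into the teammate assignments.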
Agent Teams are not only for code. Anything that benefits from parallel perspectives and tight coordination is on the table. The prompts below cover research, content, and campaign strategy.
Create an agent team to research the launch strategy for [product].
Spawn three teammates:
- Competitor analyst: study competitor ad copy, positioning, and pricing
- Voice of customer researcher: mine reviews, Reddit threads, and forums for pain points and language customers actually use
- Positioning stress-tester: take findings from both teammates and pressure-test our current positioning against what they discover
Have them share findings and challenge each other. The lead synthesizes a strategy document with positioning recommendations.
Why it works: The competitor researcher finds market gaps. The voice-of-customer teammate checks whether real buyers actually care about those gaps. The positioning stress-tester takes both inputs and tries to break your message with them. Three lenses, one synthesis, each teammate's output feeding the others.
Compare this to three separate research sessions. You'd end up with three independent reports and then spend time cross-referencing them by hand. Agent Teams do the cross-referencing automatically through inter-agent messaging.
Create an agent team to build the landing page for [offer].
Spawn three teammates:
- Copywriter: develop messaging, headlines, and body copy
- CRO specialist: design conversion structure, CTA placement, and flow
- Skeptical buyer: review everything as a resistant prospect, flag weak claims, missing proof, and friction points
Require plan approval before any implementation.
Why it works: Plan approval catches bad directions before they burn cycles. The adversarial reviewer finds holes the builder-focused teammates glide past. Real buyers are skeptical. Your team should be too.
Plan approval matters most here because landing page copy is expensive to rewrite. Catching a weak value proposition at the outline stage takes minutes. Catching it after a full build takes hours.
Spawn 4 teammates to explore different hook angles for [product].
Each teammate develops one direction with headline variations, supporting copy, and a rationale for why the angle works.
Have them debate which direction is strongest.
Update findings doc with consensus and runner-up options.
Why it works: One agent exploring alone anchors on the first decent idea. Four agents actively trying to outperform each other produce battle-tested creative. The debate structure means the winning angle survived a real challenge, not a single session's internal monologue.
This pattern produces angles no single session would have explored. When teammate #2 pushes back on teammate #1's approach, teammate #1 often refines their angle into something stronger rather than dropping it. Competitive pressure raises the quality floor.
Create a team for this week's content calendar.
Spawn three teammates:
- Researcher: identify search intent gaps and competitive opportunities
- Writer: draft content based on research findings
- Quality reviewer: run each piece through clarity, proof, and SEO checks
Chain tasks so the researcher finishes before the writer starts, and the reviewer checks each piece before marking it complete.
Why it works: Parallel research and sequential quality gates. The researcher and writer can overlap on different pieces while the reviewer catches issues before anything ships. Built-in QA without a separate review process.
Task chaining is the key detail. Without it, all three teammates start at the same time and the writer drafts blind without research to draw from. Explicit task dependencies through the shared task list enforce the right execution order. For more on chaining tasks across agents, see async workflows.
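The chain is just a dependency graph, and the execution order the task list enforces is a topological sort of it. A minimal Python sketch using the standard library; the task names are illustrative, and Agent Teams manages its own task list rather than running this code:

```python
from graphlib import TopologicalSorter

# Each task maps to the set of tasks it must wait on (illustrative names).
tasks = {
    "research": set(),
    "draft": {"research"},
    "review": {"draft"},
}

# static_order() yields tasks only after all their dependencies.
order = list(TopologicalSorter(tasks).static_order())
print(order)  # ['research', 'draft', 'review']
```

The same mental model explains why an unordered prompt fails: with no edges in the graph, every task is eligible immediately and all three teammates start at once.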
New to Agent Teams? Start simple and build up. Jumping straight into a five-teammate implementation prompt is a recipe for confusion. This three-week progression builds intuition for when teams add value and when they add overhead.
Pick a PR that needs review. Enable Agent Teams, then run:
Create an agent team to review PR #142. Spawn three reviewers:
- One focused on security implications
- One checking performance impact
- One validating test coverage
Have them each review and report findings.
Three reviewers, three lenses, one review. You will watch teammates work through the task list, trade findings, and deliver results. Low risk, high learning. Worst case, you get an incomplete review you finish manually.
Grab a bug report and run the competing hypotheses pattern:
Users report intermittent 500 errors on the checkout endpoint.
Spawn 3 teammates to investigate different hypotheses:
- One checking database connection pooling
- One investigating race conditions in the payment flow
- One analyzing server resource limits
Have them share findings and challenge each other's theories.
This teaches you how inter-agent communication actually works in practice. Watch how teammates share evidence, how they push back on weak theories, and how consensus forms. The shared task list is where most of this coordination becomes visible.
Once the coordination patterns feel natural, try a feature build with clear file boundaries:
Create an agent team to build the webhook system.
Assign directory-level ownership to prevent conflicts.
Use delegate mode for the lead.
By week three you will have a feel for when teams pay for themselves and when a single session or subagent approach is the better call. Most developers find that teams work best for tasks needing three or more independent work streams with at least some cross-domain communication.
Keep team size to 3-5: More teammates means more coordination overhead and higher token costs. Past five, communication volume eats the parallelism gain.
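A back-of-envelope way to see why five is a sensible ceiling: potential peer-to-peer channels grow quadratically with team size. This is a rough model of coordination load, not an Agent Teams internal:

```python
def channels(n):
    """Potential pairwise communication channels among n agents: n choose 2."""
    return n * (n - 1) // 2

for n in (3, 5, 8):
    print(f"{n} teammates -> {channels(n)} channels")
# 3 -> 3, 5 -> 10, 8 -> 28: past five, channels grow much faster than hands.
```

Each extra teammate adds one worker but n-1 new possible conversations, which is where the "communication volume eats the parallelism gain" effect comes from.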
Match the pattern to the task: Data-parallel work (classification, processing) splits by data boundaries. Functional work (feature implementation) splits by domain. Evaluative work (architecture decisions, creative) splits by perspective.
Speed up the lead with fast mode: Turn fast mode on for the lead for snappier coordination while teammates run at standard speed to keep costs down.
For best practices, troubleshooting, and known limitations, see Agent Teams Best Practices. For display modes, token cost management, and quality gate hooks, see Advanced Controls.
These prompts work as-is for any Claude Code user with Agent Teams enabled. Start with the code review prompt this week. The overhead is low, and every prompt above is a tested starting point for a workflow you already run.