Use Claude Opus 4.7 well in Claude Code: first turns, effort settings, adaptive thinking, tool prompting, subagents, session resets, and token control.
Most people upgrade to Opus 4.7 the lazy way. They change the model ID and keep working exactly like they did on Opus 4.6.
That leaves a lot on the table.
Anthropic's own guidance for Opus 4.7 is subtle but important: the model thinks more at higher effort, is more selective about tool calls and subagents, reads instructions more literally, and performs better when you treat it like a capable engineer you are delegating to rather than a chatty pair programmer you steer every thirty seconds.
This page is the practical version of that advice. It combines Anthropic's launch guidance, the Claude Code docs, and the patterns that matter in real engineering workflows.
If you want one immediately better habit with Opus 4.7, use this:
Here is the task, the constraint set, the files that matter, and the definition of done.
Do the full job, validate before you report back, and call out missing information instead of guessing.
That single shift matters because Opus 4.7 performs best when the first turn gives it enough room to think, plan, and execute without needing five corrective follow-ups.
The opposite style, steering with short corrective messages every few minutes, is expensive on Opus 4.7: every new user turn adds reasoning overhead and pulls the model into a more interactive loop than hard work needs.
The better pattern is:
state the job clearly in the first turn
include the real constraints and acceptance criteria
let the model carry the work further before you interrupt
review the result at a meaningful checkpoint instead of micromanaging every step
Bad first turn:
Help me fix auth.
Good first turn:
Fix the OAuth redirect loop where successful login returns users to /login instead of /dashboard.
Constraints:
- keep the existing session format
- do not change provider configuration
- update tests if needed
Relevant areas:
- src/lib/auth.ts
- src/middleware.ts
- app/login/*
Definition of done:
- login succeeds
- user lands on /dashboard
- no redirect loop
- tests pass
This is not prompt-engineering theater. It is just a better handoff.
Anthropic's best-practices post on Opus 4.7 keeps coming back to this: if the job is real, give the model the full brief up front.
The first turn should usually include:
the actual task
what success looks like
what must not change
which files, services, or directories matter
any existing references or patterns to match
how the result should be validated
You are trying to eliminate two failure modes:
under-specification: the model has to guess what you meant
turn-by-turn patching: the model keeps paying reasoning cost to incorporate corrections that should have been in the original brief
Good structure:
Task:
[what to build, fix, review, or investigate]
Constraints:
- [what must stay true]
- [what must be avoided]
Relevant context:
- [files, routes, services, tickets, docs]
Definition of done:
- [observable outcome]
- [verification step]
This pattern works for coding, review, security, docs, and multimodal work.
Opus 4.7 added a new xhigh effort tier and Claude Code moved the default there for a reason.
xhigh is the best default for most intelligence-sensitive coding work because it captures most of the upside of deeper reasoning without the worst "runaway thought" behavior that max can trigger on longer tasks.
Opus 4.7 uses adaptive thinking, which means the model decides when to think harder and when to move quickly. That is usually good. It is still steerable.
When you want more thought:
This problem is subtle. Think carefully and step by step before acting.
Verify assumptions before you edit anything.
When you want less thought:
Prioritize a direct answer over deep reasoning.
Be concise and only inspect additional files if necessary.
Use this sparingly. Do not stack twelve meta-instructions. One or two lines are enough.
Anthropic explicitly says Opus 4.7 uses tools less often by default and reasons more before acting. That is usually an improvement. It also means the model may inspect less than you expect unless you tell it otherwise.
If you want aggressive investigation, say so.
Instead of:
Review this service for bugs.
Use:
Review this service for bugs.
Read the relevant implementation files before concluding.
Use search and file reads aggressively where needed.
Do not rely on assumptions if you can verify them from the codebase.
This matters for:
code review
debugging
security review
large codebase investigation
source-backed writing
The model is not "bad at tools" now. It is simply more selective. Give it the policy you want.
Anthropic also says Opus 4.7 spawns fewer subagents by default. Again, that is usually rational. It is not always what you want.
If the job benefits from parallelism, say so in the first turn.
Example:
Use subagents when the work naturally splits.
Spawn multiple subagents in the same turn when fanning out across independent files or domains.
Do not spawn a subagent for work you can complete directly in one response.
Good times to force parallelism:
review several independent files
compare several docs or logs
audit different domains separately: frontend, backend, database
read the codebase in parallel before implementation
Bad times to force parallelism:
a single-file fix
tightly coupled edits
tasks where the output of step B depends on step A
Opus 4.7 is more judicious by default. That is fine. You still need to specify your orchestration policy when the workflow depends on it.
One of Boris Cherny's best launch-day tips for Opus 4.7 was operational, not model-level: if the model is going to run longer and more autonomously, the old permission workflow becomes the bottleneck.
If the task is clear and the environment is trusted, auto mode removes most of the "approve, approve, approve" loop while keeping background safety checks in place.
Good fit:
long refactors in a familiar repo
implementation work with a clear definition of done
investigations where you trust the overall direction
Bad fit:
unknown environments
production-sensitive work
vague tasks where the model still needs heavy steering
For the deeper mechanics, see Claude Code Auto Mode and the official permission mode docs.
Boris also called out a new /fewer-permission-prompts skill. The idea is simple: scan what Claude has repeatedly been blocked on, then turn the obviously safe, repetitive commands into explicit permission rules instead of clicking through them forever.
That is a much better Opus 4.7 workflow than either extreme:
manually approving the same harmless command 20 times
jumping straight to full bypass mode
The right target is not "no safety." It is "fewer pointless interruptions."
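Permission rules of this kind live in Claude Code's settings file. A minimal sketch of a project-level .claude/settings.json, where the specific command patterns are hypothetical and should be replaced with whatever your sessions are actually getting blocked on:

```json
{
  "permissions": {
    "allow": [
      "Bash(npm run lint)",
      "Bash(npm run test:*)",
      "Read(./docs/**)"
    ],
    "deny": [
      "Bash(rm -rf:*)"
    ]
  }
}
```

The allow list removes the repetitive prompts for known-safe commands; the deny list keeps a hard stop on the genuinely dangerous ones, which is exactly the "fewer pointless interruptions, not no safety" target.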
Claude Code shipped recaps in 2.1.108, right before Opus 4.7. Boris highlighted them for a reason: if you come back to a long-running session after ten minutes or two hours, a short recap is better than trying to reconstruct state from scrollback.
Recaps are especially useful when:
you background a task
you return to a session later in the day
a long run touched many files or phases
you want the "what happened / what's next" summary fast
Think of recaps as session re-entry, not just summarization.
Boris also called out the new focus mode in the CLI. The timing makes sense: once you trust Opus 4.7 to investigate, edit, and verify more independently, transcript detail can become visual noise.
Focus mode is useful when:
you care about the end result more than the live transcript
the model is running a long sequence of commands correctly
you want to review the final state, not watch every intermediate action
That is a workflow change worth making. Stronger models shift the bottleneck from "can it do the work?" to "how much of the work do I actually need to watch?"
This point came through clearly in Boris's thread and it lines up with Anthropic's own guidance: if you want the biggest jump from Opus 4.7, give it a way to check whether it actually succeeded.
One of the best traits in Opus 4.7 is that it is more willing to verify its own work. Help it.
Add explicit validation language:
Before you report done:
- verify assumptions you relied on
- run the relevant tests
- check the final changed files for consistency
- list remaining risk, if any
This is especially important for:
migrations
auth changes
concurrency fixes
security review
document-based analysis
The model is more self-checking than earlier versions. You still want "done" to mean something concrete.
Concrete examples of a verification harness:
backend work: a test command, curl check, or typed build
frontend work: a browser check, screenshot diff, or lint/build pass
refactors: compile + test + grep for the old API surface
docs or content: a checklist of claims to verify against source material
Without a verification path, longer autonomy mostly means longer unverified execution. With one, it becomes useful autonomy.
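As a concrete sketch of the refactor case above, the "grep for the old API surface" check can be a small script the model runs before reporting done. The old_api_call/new_api_call names and the /tmp paths here are made up for illustration; in a real repo you would point this at src/ and add your build and test commands before the grep:

```shell
#!/usr/bin/env bash
# Sketch of a refactor verification step: fail if any reference to the
# old API surface survived the rename. All names and paths are illustrative.
mkdir -p /tmp/refactor_check/src
printf 'result = new_api_call()\n' > /tmp/refactor_check/src/main.py

# In a real harness, the typed build and test suite would run first, e.g.:
#   npx tsc --noEmit && npm test

if grep -rn "old_api_call" /tmp/refactor_check/src; then
  echo "FAIL: old API surface still present"
  exit 1
fi
echo "OK: no references to old_api_call"
```

Because the script exits nonzero on failure, it doubles as the model's "done" check: it cannot truthfully report success while the old API is still referenced.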
Implement [task].
Constraints:
- [must preserve]
- [must avoid]
Relevant files:
- [file/path]
- [file/path]
Working style:
- think carefully before acting
- verify assumptions from code, not guesses
- use subagents only when the work naturally splits
Definition of done:
- [observable outcome]
- [test or verification]
Investigate [problem].
Use tools and file reads aggressively where needed.
Do not guess if the codebase can answer the question.
I want:
- root cause
- files involved
- likely fix
- edge cases or risks
Review these materials and produce a decision memo.
Requirements:
- separate facts from interpretation
- call out ambiguity explicitly
- list what evidence is missing
- cite the exact source section when possible