Build This Now
Build This Now
What Is Claude Code?Claude Code InstallationClaude Code Native InstallerYour First Claude Code Project
Claude Code Best PracticesClaude Code on a VPSGit IntegrationClaude Code ReviewClaude Code WorktreesClaude Code Remote ControlClaude Code ChannelsClaude Code Scheduled TasksClaude Code PermissionsClaude Code Auto ModeFeedback LoopsTodo WorkflowsClaude Code TasksProject TemplatesClaude Code Pricing and Token Usage
Get Build This Now
speedy_devvkoen_salo
Blog/Handbook/Workflow/Claude Code Auto Mode

Claude Code Auto Mode

How auto mode evaluates each tool call with a background AI reviewer, what it blocks by default, and the setup that actually makes it usable.

Problem: Every Claude Code user burns out on permission prompts. You're three files into a refactor, Claude needs npm test, and a modal drops in front of your work. Approve. File read. Approve. Migration write. Approve. After thirty prompts you're not reading them anymore. You're just clicking.

The other option was --dangerously-skip-permissions. That flag pulls every safety rail out. Fine inside a container. On your laptop, with SSH keys and .env files and git credentials sitting right there? Not an option any adult should pick.

Auto mode is the middle path. It shipped March 24, 2026, and it works by running a second AI in the background. Every tool call Claude wants to make gets inspected first. Risky calls get blocked and Claude gets told why. Safe ones run with no prompt at all. The reviewer sits between Claude and your filesystem, and it makes the call you would have made, faster than you could click.

What Auto Mode Really Is

Auto mode is a new permission mode. It slots between default (you review everything) and bypassPermissions (nothing is reviewed). Turn it on and Claude stops showing prompts. Before each tool call actually runs, a separate classifier model looks at the conversation so far and the pending action, then decides pass or block.

Three risk categories drive the decision:

  1. Scope escalation: is the action beyond what you actually asked for?
  2. Untrusted infrastructure: is the target something the classifier has no reason to trust?
  3. Prompt injection: does the action look like it came from hostile content Claude read in a file or webpage?

Pass and the action fires. Block and Claude gets the reason back so it can try a different approach. Your hands stay on the keyboard. The reviewer stays on watch.

How the Classifier Works

Every classifier call runs on Claude Sonnet 4.6, no matter which model your session uses. Input is your user messages plus the pending tool calls. Claude's own prose and prior tool results are stripped out on purpose. Because tool output never lands in the classifier's context, nothing malicious inside a file or page can reach in and flip the decision.

Your CLAUDE.md does get passed through. Project rules feed into what the classifier accepts and refuses. Static allow and deny lists match tool names and arguments like a grep pass. The classifier reads prose and reasons about intent, so it handles cases pattern matching cannot.

Evaluation Order

Each tool call walks a fixed ladder. First match wins:

StepCheckResult
1Matches your allow or deny rulesResolves immediately
2Read-only action or file edit in working directoryAuto-approved
3Everything elseGoes to classifier
4Classifier blocksClaude retries with alternative approach

Your settings.json rules still run first. Bash(npm test) in the allow list executes without the classifier ever waking up. Bash(rm -rf *) in the deny list gets killed before the classifier sees it.

Broad Allow Rules Get Dropped

Here is the gotcha most people miss: the moment you flip into auto mode, Claude Code removes your broad allow rules that hand out arbitrary execution. Anything like Bash(*), Bash(python*), Bash(node*), and every Agent allow rule gets pulled for the duration.

Reason is blunt. If Bash(*) stayed live, the exact commands most likely to hurt you would auto-approve before the classifier ever got a look. The whole feature would be defeated.

Tight rules stay put. Bash(git status) and Bash(npm test) carry over fine. Dropped rules come back when you leave auto mode.

What Gets Blocked vs Allowed

A trust boundary runs through the classifier's view of your system. Your local working directory is trusted. If you're inside a git repo, the configured remotes for that repo are trusted. Anything outside that perimeter counts as external until an admin tells it otherwise.

Blocked by Default

CategoryExamples
Remote code execution`curl
Data exfiltrationSending sensitive data to external endpoints
Production operationsDeploys, migrations, database operations
Mass destructionBulk deletion on cloud storage, rm -rf on pre-existing files
Permission escalationGranting IAM or repo permissions
Infrastructure changesModifying shared infrastructure
Destructive git operationsForce push, pushing directly to main

Allowed by Default

CategoryExamples
Local file operationsReading, writing, editing files in your working directory
Declared dependenciesInstalling packages already in your lock files or manifests
Credential usageReading .env and sending credentials to their matching API
Read-only networkHTTP GET requests, fetching documentation
Branch operationsPushing to your current branch or one Claude created

Pull the full default ruleset the way the classifier reads it:

claude auto-mode defaults

Routine team work sometimes trips the classifier. Pushing to your org's repo, writing to a company bucket. The classifier has no idea those are yours. Admins fix this by configuring trusted infrastructure under the autoMode.environment setting.

How to Enable Auto Mode

Prerequisites

Three things need to be true:

  • Claude Code Team plan (Enterprise and API support rolling out shortly)
  • Claude Sonnet 4.6 or Claude Opus 4.6 (not available on Haiku, claude-3 models, or third-party providers like Bedrock or Vertex)
  • Admin enablement: an admin must enable auto mode in Claude Code admin settings before users can turn it on

CLI

Launch a session that can cycle into auto mode:

claude --enable-auto-mode

Shift+Tab walks through the modes: default ->acceptEdits ->plan ->auto. Your current mode shows in the status bar.

Or jump straight there at launch:

claude --permission-mode auto

VS Code

  1. Open the Claude Code extension settings
  2. Enable Allow dangerously skip permissions (this unlocks auto mode in the UI)
  3. Click the mode indicator at the bottom of the prompt box
  4. Select Auto from the dropdown

Desktop App

  1. Enable auto mode in Desktop settings
  2. Use the mode selector next to the send button
  3. Select Auto from the dropdown

Desktop ships with auto mode off. You turn it on in settings before it ever shows up as an option.

Non-Interactive Mode

Scripted runs work fine with the -p flag:

claude -p "refactor the auth module" --permission-mode auto

One catch. If the fallback kicks in (covered below), non-interactive sessions quit. There is no one at the keyboard to answer a prompt.

Subagents Under Auto Mode

Teams running agent loops and multi-agent setups need to know how this flows through to subagents. Three checkpoints:

At spawn: before the subagent starts, the classifier reads its delegated task description. "Delete all remote branches matching this pattern" gets refused at the door. The subagent never runs.

During execution: inside the child session, auto mode applies the same allow and block rules as the parent. Any permissionMode the subagent set in its own frontmatter is ignored. Every tool call goes through the classifier on its own.

On return: once the subagent finishes, the classifier reads its full action log. A child that looked fine at spawn might have been hijacked mid-run by something it read. If the return pass flags a problem, a security warning gets stuck on the front of the subagent's output so the parent can react.

Three checkpoints means subagents do not need your blind trust. The classifier treats them the same way it treats the main session.

Auto Mode Next to the Other Modes

Here is the full spread across every permission mode Claude Code ships:

defaultacceptEditsplanautodontAskbypassPermissions
Permission promptsFile edits and commandsCommands onlySame as defaultNone (unless fallback)None (blocked unless pre-allowed)None
Safety checksYou review each actionYou review commandsYou review commandsClassifier reviews commandsYour pre-approved rules onlyNone
Token usageStandardStandardStandardHigher (classifier calls)StandardStandard
Best forSensitive workCode iterationCodebase explorationLong-running tasksCI/CD pipelinesIsolated containers only
Risk levelLowestLowLowMediumDepends on rulesHighest

The trade is simple. You pay more tokens and wait a bit longer per checked action. You lose the stream of prompts that turns any long session into a clicking exercise.

When to Pick It

Good fit when:

  • Long tasks where constant approvals break concentration
  • You trust the overall direction but want a net under the rough edges
  • Agent loops with no human nearby to confirm every step
  • You want a safer choice than bypassPermissions outside of a container

Bad fit when:

  • Production infrastructure is in scope (this mode blocks those actions anyway, for good reason)
  • Unfamiliar code where you want eyes on every step
  • Deterministic auditable control matters (reach for dontAsk with explicit allow rules)
  • Cost is tight (classifier calls cost tokens)

The Fallback

False positives should not sink your session, so the fallback catches them. If the classifier blocks 3 in a row or 20 total inside one session, auto mode pauses and Claude Code goes back to asking for approval by hand.

Neither threshold can be tuned.

When it fires:

  • CLI: a note appears in the status area. Approve the next manual prompt and the block counters reset, so you can stay in auto mode after.
  • Non-interactive mode (-p flag): the session exits. No one is there to answer.

Repeat blocks come from one of two places. The task genuinely wants something the classifier is built to stop, or the classifier is missing context about infrastructure you actually own. Use /feedback when it feels like a false positive. If it keeps missing that your repos and services are trusted, get an admin to configure trusted infrastructure in managed settings.

Defense in Depth

One layer is never the whole story. Auto mode gives you more protection than bypassPermissions and less than reviewing every call by hand. The strongest setup stacks:

Layer 1: Permission rules. Allow and deny lists in settings.json resolve before the classifier runs. Use them for hard, deterministic control.

Layer 2: Auto mode classifier. Catches everything the rules do not. Reasons about context, not just text patterns.

Layer 3: Hooks. PreToolUse hooks run custom logic ahead of the permission system. The Permission Hook ships an LLM-powered auto-approver with a three-tier flow (fast approve, fast deny, LLM analysis). Hooks and auto mode coexist: hooks run first and can approve, deny, or escalate before the classifier sees the call.

Layer 4: Sandboxing. OS-level sandboxing walls off filesystem and network access at the kernel. Even when the classifier misses, the sandbox keeps shell commands inside the box you drew. This matters because the classifier reads intent while the sandbox enforces hard walls.

Layer 5: Self-validating agents and stop hooks. These keep agents on task and inside scope, adding another verification pass on top of the permission story.

Every layer fills the gap the others leave. That is defense in depth.

Limitations Worth Knowing

This shipped as a research preview. Be honest about what that word means:

  • No safety guarantee. Ambiguous user intent or missing environment context can cause the classifier to miss a risky action. The reverse happens too (false positives on benign ones).
  • It costs more. Classifier calls count against your token usage. Each checked action sends a slice of the transcript plus the pending call. Most of the extra cost comes from shell commands and network operations, because read-only actions and local file edits skip the classifier entirely.
  • Latency is real. Every check adds a round trip before the action runs. Sequences of fast shell commands feel slower.
  • Narrow availability. Team plan only right now (research preview). Enterprise and API support is rolling out shortly. Sonnet 4.6 or Opus 4.6 required. No Haiku, no claude-3, no third-party providers.
  • Not a substitute for review on sensitive ops. Trust it with work where the direction is solid. For anything touching production, credentials, or shared infrastructure, human review is still the right call.

Calibration improves with data. /feedback is how false positives and missed blocks get reported. Every one of those reports tunes the system.

What's Next

Team-plan users get a new daily workflow out of this. The old trade between safety and speed has a third option now.

For a full security posture around auto mode:

  • Write permission rules for deterministic control on specific tools
  • Configure hooks for custom permission logic past what the classifier handles
  • Turn on sandboxing for OS-level enforcement as a hard backstop
  • Read the settings reference for every permission-related option
  • Explore autonomous agent loops to get the most out of reduced prompting on long runs

The permission prompt is no longer the bottleneck. The classifier is on it. Get back to building.

More in this guide

  • Agent Fundamentals
    Five ways to build specialized agents in Claude Code, from sub-agents to .claude/agents/ definitions to perspective prompts.
  • Agent Patterns
    Orchestrator, fan-out, validation chain, specialist routing, progressive refinement, and watchdog. Six ways to wire sub-agents in Claude Code.
  • Agent Teams Best Practices
    Battle-tested patterns for Claude Code agent teams. Troubleshooting, limitations, plan mode quirks, and fixes shipped from v2.1.33 through v2.1.45.
  • Agent Teams Controls
    Stop your agent team lead from grabbing implementation work. Configure delegate mode, plan approval, hooks, and CLAUDE.md for teams.
  • Agent Teams Prompt Templates
    Ten tested Agent Teams prompts for Claude Code. Code review, debugging, feature builds, architecture calls, and campaign research. Paste and go.

Stop configuring. Start building.

SaaS builder templates with AI orchestration.

Get Build This Now

Claude Code Permissions

Five permission modes, one keystroke to cycle them, and a clean way to match the mode to the task you are on. Here is the full rule syntax and when to use each.

Feedback Loops

Claude Code iterates inside your terminal: write, run, read errors, fix, repeat. One prompt covers the whole loop.

On this page

What Auto Mode Really Is
How the Classifier Works
Evaluation Order
Broad Allow Rules Get Dropped
What Gets Blocked vs Allowed
Blocked by Default
Allowed by Default
How to Enable Auto Mode
Prerequisites
CLI
VS Code
Desktop App
Non-Interactive Mode
Subagents Under Auto Mode
Auto Mode Next to the Other Modes
When to Pick It
The Fallback
Defense in Depth
Limitations Worth Knowing
What's Next

Stop configuring. Start building.

SaaS builder templates with AI orchestration.

Get Build This Now