speedy_devv · koen_salo

Claude Code Monitor Tool

The Monitor tool makes a Claude Code session event-driven: a running background process can wake Claude the moment something breaks.

Problem: Until this month, anything running in the background was opaque to Claude Code. A command kicked off with run_in_background returned exactly one ping at the end. Nothing along the way. The workaround was /loop or a scheduled task, which meant firing a fresh prompt every few minutes and paying for a full API call just to ask whether anything had actually moved.

Quick Win: Point Claude at your dev server and let it listen. One sentence does it:

Start my dev server and monitor it for errors

Claude boots the server, wraps its output in a background watcher, and stays quiet until something actually breaks. No tokens burned while nothing is happening.
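Under the hood, the watcher amounts to a filtered pipe over the server's output. A minimal sketch of the shape (printf stands in for the real dev server process):

```shell
# Simulated dev-server output; only error lines survive the filter.
# --line-buffered makes grep flush each match immediately instead of
# holding it in a pipe buffer.
printf 'ready on :3000\ncompiled ok\nError: build failed\n' \
  | grep --line-buffered -i 'error'
```

Only the `Error: build failed` line would reach the session; the rest is silence.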

What Changed

Anthropic shipped the Monitor tool on April 9, 2026. Claude Code PM Noah Zweben announced it this way: "Claude can now follow logs for errors, poll PRs via script, and more. Big token saver and great way to move away from polling in the agent loop."

The underlying flip is from a clock trigger to an event trigger. The old model had Claude peeking at state on a timer. The new model has Claude sitting with the output, reacting when something shows up. Think of it as the same jump you make when you stop hitting a database every few seconds and start subscribing to its change stream. The timer model burns compute on nothing. The stream model reacts the instant a change arrives.

Mechanically, the tool is plain. Claude runs a shell command and treats its stdout as a flow of events. Each line wakes the main session. A silent command costs nothing. As soon as your filter matches a line, that line drops into the conversation and Claude starts working on it. The underlying process just keeps running in parallel.
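The one-line-equals-one-event contract means any command that prints selectively can drive a monitor. A toy illustration with awk filtering a fake event stream (the tags are made up):

```shell
# Each printed line would wake the session as one event; filtered-out
# lines cost nothing. awk passes through only lines tagged ERR.
printf 'OK boot\nERR disk full\nOK heartbeat\n' | awk '$1 == "ERR"'
```

Here a single event, `ERR disk full`, comes out; the OK heartbeats never wake Claude.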

How It Works

Every monitor takes four parameters:

Parameter    What It Controls
---------    ----------------
description  Short label shown in every notification ("errors in deploy.log")
command      Shell script whose stdout is the event stream
timeout_ms   Kill after this many ms (default 300,000 / 5 min; max 3,600,000 / 1 hr)
persistent   Runs for the full session; stop manually with TaskStop

command is where the logic lives. One line of stdout equals one notification. If several lines come out inside a 200 millisecond window, they get grouped into a single alert, so the multi-line output from one real event still reads as one thing. Stderr is redirected to a file you can read later and does not wake Claude.
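Because only stdout wakes the session, a command with mixed output has to split its streams deliberately. The same discipline in plain shell (the log path is arbitrary):

```shell
# stdout carries the event; stderr is diverted to a file for later
# inspection instead of landing in the conversation.
{ echo 'event: cache miss'; echo 'debug noise' >&2; } 2>>/tmp/monitor-stderr.log
```

Only `event: cache miss` would surface as a notification; the debug noise waits in the file.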

Reach for persistent: true when the monitor is supposed to live as long as your session. Dev server watchers, log tailers, PR monitors. For anything bounded, like a single deploy or one test run, pass timeout_ms and the monitor dies when the window closes.
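Putting the four parameters together, a persistent dev-server watcher might be described like this. The exact invocation syntax is not documented here, so treat this as an illustration of the fields from the table above, not a documented call shape; `persistent: true` means no `timeout_ms` is needed:

```json
{
  "description": "errors from dev server",
  "command": "npm run dev 2>>/tmp/dev.err | grep --line-buffered -i 'error'",
  "persistent": true
}
```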

Two Shapes of Monitors

Stream filters watch continuous output and surface matching lines:

# Watch application logs for errors
tail -f /var/log/app.log | grep --line-buffered "ERROR"
 
# Watch file system changes
inotifywait -m --format '%e %f' /watched/dir
 
# Node script that emits events from a WebSocket
node watch-for-events.js

Poll-and-if filters check a source on an interval and emit when conditions change:

# Poll GitHub PR for new comments every 30 seconds
last=$(date -u +%Y-%m-%dT%H:%M:%SZ)
while true; do
  now=$(date -u +%Y-%m-%dT%H:%M:%SZ)
  gh api "repos/owner/repo/issues/123/comments?since=$last" \
    --jq '.[] | "\(.user.login): \(.body)"' || true
  last=$now; sleep 30
done

Either shape follows the same contract. Stdout is a stream of events. Silence means nothing to report. Claude keeps working on whatever you asked for next while the monitor ticks along in the background.

The Token Math

Picture /loop checking a test suite every 2 minutes during a 10-minute run. That works out to 5 full API calls. Context reload, prompt processing, response generation, every time. You pay for 5 round trips and 4 of them return nothing useful.

A monitor inverts that equation. Claude tails the test runner's filtered output. When test #23 fails at minute 4, the failure line arrives in the session immediately. Claude can start working on the fix while tests 24 through 47 are still churning. Nothing wasted. Nothing delayed. The longer the workflow, the bigger the saving. Anything long-running wins here, including CI runs that last hours, deploy jobs, and overnight builds.

Practical Use Cases

Dev server error catching. Watch a Next.js or Vite dev server and get a ping the instant a build error or crash loop appears. Scrolling through old terminal output looking for the break point goes away.

Test suite triage. A failing test surfaces the instant it fails. Claude can open a fix while the rest of the suite is still running.

Deploy pipeline watching. Attach a filter to your CI/CD stream and wake Claude on failures, on warnings, or when a given deployment stage finishes.

PR review polling. Sit on a GitHub pull request and react to new comments, review requests, or status checks as they land. Each new comment becomes something Claude can work with right away.

Log monitoring. Point a filter at your production or staging logs. Whenever a line matches the pattern, it lands in the session as an event and Claude can respond right away.

Three Rules for Good Monitors

  1. Never pipe through grep without --line-buffered. Without that flag, pipe buffering can sit on events for minutes. By far the most frequent trip-up.

  2. Swallow transient failures inside poll loops. Tack || true onto each API call so one bad network round trip does not take the whole monitor down with it.

  3. Keep stdout tight. Every line you emit turns into a message inside the conversation. A watcher that emits too many events gets auto-stopped by Claude Code. Always filter raw logs before you pipe them in. Never dump a full stream unfiltered.

# Good: selective filter
tail -f app.log | grep --line-buffered "ERROR\|WARN\|FATAL"
 
# Bad: firehose that will auto-stop
tail -f app.log
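Rule 2 in action: under set -e (or in strict scripts generally), a failing call without || true ends the loop early, while the hardened version keeps polling. Here `false` stands in for a flaky API call:

```shell
set -e
for attempt in 1 2 3; do
  false || true   # a failing poll, swallowed so the loop survives
  echo "poll $attempt done"
done
```

Without the `|| true`, the script would die on the first failed poll and the monitor would silently stop reporting.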

Poll intervals matter too. 30 seconds or more for anything remote, because rate limits apply. 0.5 to 1 second works fine for local checks.

Monitor vs. Hooks vs. Scheduled Tasks

There are now three layers of automation inside Claude Code, and each one listens for a different kind of trigger.

Tool             Trigger      Best For
----             -------      --------
Hooks            Tool events  Validation before/after tool calls
Scheduled Tasks  Time         Recurring work on a fixed cadence
Monitor          Events       Reacting to real-time output

Hooks wake up off Claude's own actions, things like a file edit or a commit. Scheduled tasks wake up off the clock. Monitor wakes up off the outside world. The strongest setups layer all three. Guardrails sit on hooks, periodic maintenance sits on scheduled tasks, and live observability of everything else sits on monitor.

Monitor vs. run_in_background

People mix these up. Both run things in the background. The difference is the feedback model.

run_in_background is fire-and-forget. You get exactly one notification when the command exits. Nothing in between. Good for "run this build and tell me when it is done."

A monitor is a live stream instead. A notification fires for every matching event along the way. Good for "run this build and tell me the moment something goes wrong." Claude stays reactive while the process is still running, not just after it exits.

Next Steps

  • Learn how hooks complement monitors by validating Claude's own actions
  • Explore autonomous agent loops where monitors provide the feedback signal
  • Read about context management to keep long monitored sessions efficient
  • See scheduled tasks for time-based automation that pairs with event-driven monitoring
  • Try feedback loops to tighten the cycle between writing code and catching issues

The Monitor tool sits in the missing slot between "Claude runs things" and "Claude watches things." Background work used to be a sealed box that only cracked open at the very end. Today Claude sits alongside the process, reacting to whatever deserves attention and staying hands-off on whatever doesn't. Pair programming of a genuinely different shape.
