Context Backup Hooks for Claude Code
A StatusLine-driven backup system that writes structured snapshots every 10K tokens so auto-compaction cannot eat your session detail.
Problem: Auto-compaction is a one-shot event. Four hours into a build, context passes 83%, and the summariser fires. The next turn, Claude has the gist of what you did. The exact error strings, function signatures, and the two reasons you threw out the first approach are gone.
The summary gets the outline right. The details don't survive.
Quick Win: Save a structured backup starting at 50K tokens used, then write a new one every 10K tokens after that. On top of that, percentage triggers at 30%, 15%, and 5% free act as a backstop for smaller windows. When compaction finally fires, you've got a markdown file with every user request, every file edit, and every decision you care about.
StatusLine Is the Only Hook With Live Token Data
Most Claude Code hooks don't see context metrics. PreToolUse, PostToolUse, Stop - none of them know how full the window is.
StatusLine is the exception. Every turn it receives a JSON payload that includes context_window.remaining_percentage, so you get live numbers on how much room is left.
```json
{
  "session_id": "abc123...",
  "context_window": {
    "remaining_percentage": 35.5,
    "context_window_size": 200000
  }
}
```
No other mechanism in Claude Code gives you real-time visibility. Without it, you're flying blind until compaction hits.
The Buffer Calculation
Here's the part that trips people up. The remaining_percentage field includes a fixed 33K-token autocompact buffer that you can't actually use. The implementation accounts for this with a token-based calculation rather than a percentage:
```js
const AUTOCOMPACT_BUFFER_TOKENS = 33000;
const autocompactBufferPct = (AUTOCOMPACT_BUFFER_TOKENS / windowSize) * 100;
const freeUntilCompact = Math.max(0, pctRemainTotal - autocompactBufferPct);
```
On a 200K window, that 33K buffer is 16.5%. On a 1M window, it's only 3.3%. A fixed token count keeps the math correct across every window size.
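To make those numbers concrete, here is a self-contained sketch of the same calculation wrapped in a function (the `freeUntilCompact` helper is an illustrative wrapper, not the hook's actual code):

```javascript
const AUTOCOMPACT_BUFFER_TOKENS = 33000;

// True "free until compact" percentage, given the raw remaining
// percentage from StatusLine and the window size in tokens.
function freeUntilCompact(pctRemainTotal, windowSize) {
  const autocompactBufferPct = (AUTOCOMPACT_BUFFER_TOKENS / windowSize) * 100;
  return Math.max(0, pctRemainTotal - autocompactBufferPct);
}

// 200K window: the 33K buffer eats 16.5 percentage points.
console.log(freeUntilCompact(35.5, 200000).toFixed(1)); // 19.0
// 1M window: the same buffer costs only 3.3 points.
console.log(freeUntilCompact(35.5, 1000000).toFixed(1)); // 32.2
```

Same raw percentage, very different real headroom: on the small window you're 19% from compaction, on the large one you still have a third of the window left.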
The Dual Trigger System
Auto-compaction is reactive. It fires once you're already deep into the window, then throws away detail during summarisation.
A backup system needs to be proactive. Two trigger rails run side by side:
Token-Based Triggers (Primary)
| Trigger | When It Fires | Purpose |
|---|---|---|
| 50K tokens | After 50K total tokens used | First backup, early capture |
| Every 10K | 60K, 70K, 80K, 90K, 100K, ... | Regular updates |
This rail works the same regardless of window size. On a 1M window, the first backup fires at 5% usage. On a 200K window, it fires at 25% usage. Either way, you get early coverage.
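A minimal sketch of that rail, assuming the hook records the token count of the last backup in its state file (the function names are illustrative, not the hook's actual identifiers):

```javascript
const FIRST_BACKUP_TOKENS = 50000;
const BACKUP_INTERVAL_TOKENS = 10000;

// Next milestone: 50K for the first backup, then the next 10K
// boundary after the last backup (60K, 70K, ...).
function nextBackupAt(lastBackupAtTokens) {
  if (lastBackupAtTokens < FIRST_BACKUP_TOKENS) return FIRST_BACKUP_TOKENS;
  return (Math.floor(lastBackupAtTokens / BACKUP_INTERVAL_TOKENS) + 1) * BACKUP_INTERVAL_TOKENS;
}

function tokenTriggerFired(tokensUsed, lastBackupAtTokens) {
  return tokensUsed >= nextBackupAt(lastBackupAtTokens);
}

console.log(tokenTriggerFired(48000, 0));     // false: below 50K
console.log(tokenTriggerFired(52000, 0));     // true: first backup due
console.log(tokenTriggerFired(58000, 52000)); // false: next milestone is 60K
console.log(tokenTriggerFired(61000, 52000)); // true: crossed 60K
```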
Percentage-Based Triggers (Safety Net)
| Threshold | When It Fires | Purpose |
|---|---|---|
| 30% | ~60K tokens free until compact | Additional checkpoint |
| 15% | ~30K tokens free until compact | Getting critical |
| 5% | ~10K tokens free until compact | Last chance before compaction |
| Under 5% | Every context decrease | Continuous backup mode |
Think of this rail as the safety net. It catches cases the token rail might miss, like sessions that open with a large injected context. On 200K windows the two rails overlap. On 1M windows the token rail fires first, because hitting 30% remaining already means you've burned 670K+ tokens.
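A sketch of how the safety-net rail can be checked each turn, assuming the previous "free until compact" value is kept in the state file (names are illustrative):

```javascript
const PCT_THRESHOLDS = [30, 15, 5];

// Fires when the free-until-compact percentage has dropped across a
// threshold since the previous turn, or on any decrease once under 5%
// (continuous backup mode).
function pctTriggerFired(freeNow, freePrev) {
  if (freeNow < 5 && freeNow < freePrev) return true; // continuous mode
  return PCT_THRESHOLDS.some((t) => freePrev > t && freeNow <= t);
}

console.log(pctTriggerFired(29.5, 31.0)); // true: crossed the 30% line
console.log(pctTriggerFired(28.0, 29.5)); // false: no threshold crossed
console.log(pctTriggerFired(4.2, 4.8));   // true: under 5%, every decrease fires
```

Comparing against the previous turn's value, rather than the current one alone, is what makes each threshold fire exactly once instead of on every turn below it.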
Three-File Architecture
A production backup system wants clean separation of concerns. Three files do the work:
```
.claude/hooks/ContextRecoveryHook/
├── backup-core.mjs         # Shared backup logic
├── statusline-monitor.mjs  # Threshold detection + display
└── conv-backup.mjs         # PreCompact hook trigger
```
backup-core.mjs
This file handles everything about creating backups:
- Transcript parsing: Reads the JSONL transcript file and extracts user requests, file modifications, tasks, and Claude's key responses
- Markdown formatting: Structures the data as a readable markdown file
- File operations: Saves numbered backups with timestamps
- State management: Tracks which session is active and what the current backup path is
The key insight: backups should be structured, not raw dumps. Markdown groups information logically so you can find what you need fast when recovering.
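As a rough sketch of that idea, here is an illustrative formatter whose section names mirror the backup format shown later (the function and its field names are assumptions, not the hook's real code):

```javascript
// Turn already-parsed transcript data into the structured markdown
// layout. All parameter and field names here are illustrative.
function formatBackup({ sessionId, trigger, remainingPct, userRequests, filesModified }) {
  const lines = [
    "# Session Backup",
    `**Session ID:** ${sessionId}`,
    `**Trigger:** ${trigger}`,
    `**Context Remaining:** ${remainingPct.toFixed(1)}%`,
    `**Generated:** ${new Date().toISOString()}`,
    "",
    "## User Requests",
    ...userRequests.map((r) => `- ${r}`),
    "",
    "## Files Modified",
    ...filesModified.map((f) => `- ${f}`),
  ];
  return lines.join("\n");
}
```

Grouping by section rather than by chronology is the point: when you recover, you scan "Files Modified" in one glance instead of replaying the whole session.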
statusline-monitor.mjs
This one runs on every turn via StatusLine. Its job:
- Calculate total tokens used and the true "free until compaction" percentage
- Check token-based triggers (50K first, then every 10K)
- Check percentage thresholds as a safety net (30%, 15%, 5%)
- Trigger backup-core when any trigger fires
- Display formatted status with backup path
The backup path shows up in the statusline the moment a backup exists for the current session:
```
[!] Opus 4.6 | 65k / 1m | 6% used 65,000 | 90% free 900,000 | thinking: On
-> .claude/backups/3-backup-18th-Feb-2026-2-15pm.md
```
That second line appears as soon as you pass 50K tokens. No waiting until context gets critical.
conv-backup.mjs
PreCompact hooks fire right before compaction happens. That's your last chance to capture state. This file triggers backup-core with precompact_auto or precompact_manual as the trigger reason.
Think of it as the emergency backup. StatusLine-based thresholds are proactive. PreCompact is reactive, but still much better than losing everything.
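Claude Code sends PreCompact hooks a JSON payload on stdin whose trigger field indicates whether compaction is automatic or user-initiated. A minimal sketch of mapping that to the two reasons above (the reason strings follow this post; the mapping function is illustrative):

```javascript
// Map the PreCompact hook's stdin payload to a backup trigger reason.
// The payload's `trigger` field is "auto" or "manual"; the reason
// strings are the ones recorded in the backup header.
function precompactReason(payload) {
  return payload.trigger === "manual" ? "precompact_manual" : "precompact_auto";
}

console.log(precompactReason({ trigger: "auto" }));   // precompact_auto
console.log(precompactReason({ trigger: "manual" })); // precompact_manual
```

In conv-backup.mjs this result would be passed to backup-core along with the session id and transcript path from the same payload.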
Configuration
Two settings.json entries do the wiring:
```json
{
  "statusLine": {
    "type": "command",
    "command": "node .claude/hooks/ContextRecoveryHook/statusline-monitor.mjs"
  },
  "hooks": {
    "PreCompact": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "node .claude/hooks/ContextRecoveryHook/conv-backup.mjs",
            "async": true
          }
        ]
      }
    ]
  }
}
```
The async: true on PreCompact matters. Backups shouldn't slow the compaction process down.
Backup File Format
Filenames are numbered and use human-readable timestamps:
```
.claude/backups/1-backup-26th-Jan-2026-4-30pm.md
.claude/backups/2-backup-26th-Jan-2026-5-15pm.md
.claude/backups/3-backup-26th-Jan-2026-5-45pm.md
```
Inside each one, you get a structured summary:
```markdown
# Session Backup

**Session ID:** abc123...
**Trigger:** tokens_60k_update
**Context Remaining:** 94.0%
**Generated:** 2026-01-26T17:45:00.000Z

## User Requests
- Create two blog posts about context management
- Add the new post to blog-structure.ts
- Fix the internal linking

## Files Modified
- apps/web/src/content/blog/guide/mechanics/context-buffer-management.mdx
- apps/web/src/content/blog/tools/hooks/context-recovery-hook.mdx
- apps/web/src/content/blog/blog-structure.ts

## Tasks
### Created
- **Write Post 1: Context Buffer Management**
- **Write Post 2: Context Recovery Hook**
### Completed
- 2 tasks completed

## Skills Loaded
- content-writer
```
Not a raw transcript. A structured summary that tells you what happened, what changed, and what's still pending.
The Recovery Workflow
When compaction happens:
1. Check the statusline for the backup path: you see exactly which file has your latest backup
2. Run /clear: start a fresh session (cleaner than continuing with compacted context)
3. Load the backup: read the markdown file to restore context
4. Continue work: Claude now has structured context about what you were doing
Working with compacted context means Claude has a summary of the session but has lost the specifics. Loading a structured backup gives you those specifics back.
Why /clear Instead of Continuing?
After compaction, two kinds of context exist side by side:
- Compaction summary: Auto-generated, lossy, captures the gist
- Loaded backup: Structured, detailed, captures specifics
Keeping both can confuse things. The summary might contradict details in the backup. Starting fresh with /clear and loading only the backup gives cleaner, more reliable context.
State Tracking
StatusLine and PreCompact both update a shared state file:
```jsonc
// ~/.claude/claudefast-statusline-state.json
{
  "sessionId": "abc123...",
  "lastFreeUntilCompact": 25.5,
  "lastBackupAtTokens": 60000,
  "currentBackupPath": ".claude/backups/3-backup-18th-Feb-2026-2-15pm.md"
}
```
This lets the StatusLine monitor know:
- Which session it's tracking (to reset state when sessions change)
- What the last context percentage was (to detect percentage threshold crossings)
- How many tokens were used at the last backup (to calculate the next 10K interval)
- Where the current backup lives (to display in the statusline)
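A sketch of that state handling, assuming the path and field names shown above (the helper functions are illustrative, not the hook's real exports):

```javascript
import { readFileSync, writeFileSync } from "node:fs";
import { homedir } from "node:os";
import { join } from "node:path";

const STATE_PATH = join(homedir(), ".claude", "claudefast-statusline-state.json");

// Keep existing state for the same session; reset counters otherwise,
// so a new session starts its token and percentage tracking fresh.
function resetIfNewSession(state, sessionId) {
  if (state && state.sessionId === sessionId) return state;
  return { sessionId, lastFreeUntilCompact: 100, lastBackupAtTokens: 0, currentBackupPath: null };
}

function loadState(sessionId) {
  let saved = null;
  try {
    saved = JSON.parse(readFileSync(STATE_PATH, "utf8"));
  } catch { /* first run: no state file yet */ }
  return resetIfNewSession(saved, sessionId);
}

function saveState(state) {
  writeFileSync(STATE_PATH, JSON.stringify(state, null, 2));
}
```

The session-id check is the important part: without it, a stale lastBackupAtTokens from yesterday's session would suppress the 50K first-backup trigger in a fresh one.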
Transcript Parsing Details
The backup system parses Claude Code's JSONL transcript files to pull out meaningful data. Here's what it captures:
| Data Type | How It's Extracted |
|---|---|
| User Requests | Messages where type === "user" |
| Files Modified | Write/Edit tool calls with file_path |
| Tasks Created | TaskCreate tool calls |
| Tasks Completed | TaskUpdate with status === "completed" |
| Sub-Agent Calls | Task tool invocations |
| Skills Loaded | Skill tool calls |
| MCP Tool Usage | Tool names starting with mcp__ |
| Build/Test Runs | Bash commands containing build/test/npm/pnpm |
The parser drops the noise. Tool results, system messages, and single-character inputs get filtered out, so you're left with what actually matters for session recovery.
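Here is a sketch of one such extraction pass over the JSONL file, shown for user requests (the type and message field names follow the transcript format described above; the exact filtering rules are assumptions):

```javascript
// One JSON object per transcript line; keep only user messages with
// meaningful text, skipping blank and unparseable lines.
function extractUserRequests(jsonl) {
  const requests = [];
  for (const line of jsonl.split("\n")) {
    if (!line.trim()) continue;
    let entry;
    try { entry = JSON.parse(line); } catch { continue; } // skip partial lines
    if (entry.type !== "user") continue;
    const text = typeof entry.message?.content === "string" ? entry.message.content : "";
    // Drop noise: empty or single-character inputs like "y".
    if (text.trim().length > 1) requests.push(text.trim());
  }
  return requests;
}

const sample = [
  JSON.stringify({ type: "user", message: { content: "Fix the internal linking" } }),
  JSON.stringify({ type: "assistant", message: { content: "Done." } }),
  JSON.stringify({ type: "user", message: { content: "y" } }), // filtered out
].join("\n");

console.log(extractUserRequests(sample)); // [ 'Fix the internal linking' ]
```

The other rows in the table work the same way: one pass, one predicate per data type, each feeding its own section of the backup.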
Why This Beats Manual Tracking
You could copy important context into a file by hand as you work. You won't. You're focused on the implementation, not on documentation.
The token-based system runs on its own. Starting at 50K tokens used, you get a backup every 10K tokens without thinking about it. Cognitive load is zero.
And the backups are already structured. Not a raw paste of conversation, but organised sections you can scan in seconds.
Comparison: Auto-Compaction vs Threshold Backup
| Aspect | Auto-Compaction | Threshold Backup + /clear |
|---|---|---|
| When it happens | At ~83.5% usage | At 50K tokens, then every 10K |
| What's preserved | Lossy summary | Structured markdown with full detail |
| Control | None (hardcoded) | Configurable token + pct thresholds |
| Recovery | Continue with summary | Load specific backup file |
| Specifics retained | Only what fits summary | Everything in backup |
Auto-compaction is the default because most users never set up a backup system. But if you live in long, multi-hour sessions where precision matters, a token-based backup gives you much better recovery options. On a 1M context window, you'll end up with dozens of snapshots captured throughout the session instead of losing everything to a single compaction event.
Key Takeaways
- StatusLine is the only live context monitor - Other hooks don't get token counts
- Token-based triggers fire early - First backup at 50K used, then every 10K, regardless of window size
- Percentage thresholds are a safety net - 30%, 15%, 5% free catch edge cases on smaller windows
- Raw percentage includes a 33K buffer - Calculate true "free until compact" using token counts
- Structured backups beat raw dumps - Parse transcripts into organised markdown
- Three-file architecture - Clean separation between detection, backup logic, and triggers
- Recovery workflow: /clear + load - Cleaner than mixing compacted context with backup
Related Resources
- Context Buffer Management - Why the 33K-45K buffer exists
- Claude Code Hooks Guide - All 12 hook types explained
- Context Engineering - Strategic context usage
- Session Lifecycle Hooks - Setup and cleanup automation