A StatusLine-driven Claude Code context backup hook. Writes structured snapshots every 10K tokens so auto-compaction never eats errors, signatures, or decisions.
Problem: Auto-compaction is a one-shot event. Four hours into a build, context passes 83%, and the summariser fires. The next turn, Claude has only the gist of what you did. The exact error strings, function signatures, and the two reasons you threw out the first approach are gone.
The summary gets the outline right. The details don't survive.
Quick Win: Save a structured backup starting at 50K tokens used, then write a new one every 10K tokens after that. On top of that, percentage triggers at 30%, 15%, and 5% free act as a backstop for smaller windows. When compaction finally fires, you've got a markdown file with every user request, every file edit, and every decision you care about.
Most Claude Code hooks don't see context metrics. PreToolUse, PostToolUse, Stop - none of them know how full the window is.
StatusLine is the exception. Every turn it receives a JSON payload that includes `context_window.remaining_percentage`, so you get live numbers on how much room is left.
Here's the part that trips people up. The remaining_percentage field includes a fixed 33K-token autocompact buffer that you can't actually use. The implementation accounts for this with a token-based calculation rather than a percentage:
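A minimal sketch of that calculation in Python. The post only guarantees `context_window.remaining_percentage` in the payload, so the window-size field name here is an assumption; check your payload's actual shape before relying on it:

```python
AUTOCOMPACT_BUFFER = 33_000  # reserved by autocompact; reported as free but unusable
FIRST_BACKUP_AT = 50_000     # first snapshot once 50K tokens are used
BACKUP_INTERVAL = 10_000     # then a new one every 10K tokens

def context_tokens(payload: dict) -> tuple:
    """Return (tokens_used, usable_tokens_remaining) from a StatusLine payload.

    `context_window_size` is an assumed field name for this sketch; it
    falls back to a 200K window when the field is absent.
    """
    cw = payload["context_window"]
    size = cw.get("context_window_size", 200_000)
    raw_remaining = size * cw["remaining_percentage"] / 100
    used = int(size - raw_remaining)
    # remaining_percentage counts the 33K autocompact buffer as free space,
    # so subtract it to get the headroom you can actually spend.
    usable = max(0, int(raw_remaining - AUTOCOMPACT_BUFFER))
    return used, usable

def should_backup(used: int, last_backup_at: int) -> bool:
    """First backup at 50K tokens used, then one every 10K tokens after that."""
    return used >= FIRST_BACKUP_AT and used >= last_backup_at + BACKUP_INTERVAL
```

Because the trigger is a token count rather than a percentage, the cadence stays constant no matter how large the window is.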
This rail works the same regardless of window size. On a 1M window, the first backup fires at 5% usage. On a 200K window, it fires at 25% usage. Either way, you get early coverage.
Think of this rail as the safety net. It catches cases the token rail might miss, like sessions that open with a large prompt injection. On 200K windows the two rails overlap. On 1M windows the token rail is the one that fires first, because hitting 30% remaining already means you've burned 670K+ tokens.
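A sketch of that backstop rail, tracking which of the 30/15/5% thresholds have already fired so each one produces at most one backup per session (the function name and the mutable `already_fired` set are choices for this sketch, not the post's implementation):

```python
PERCENT_TRIGGERS = (30.0, 15.0, 5.0)  # % of the window still free, highest first

def crossed_threshold(remaining_pct: float, already_fired: set):
    """Return the first unfired threshold remaining_pct has dropped below,
    or None if no new threshold was crossed this turn."""
    for threshold in PERCENT_TRIGGERS:
        if remaining_pct <= threshold and threshold not in already_fired:
            already_fired.add(threshold)
            return threshold
    return None
```

Each StatusLine turn calls this with the live percentage; a non-None result means "write a backup now, tagged with this threshold".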
This file, backup-core, handles everything about creating backups:
- **Transcript parsing:** Reads the JSONL transcript file and extracts user requests, file modifications, tasks, and Claude's key responses
- **Markdown formatting:** Structures the data as a readable markdown file
- **File operations:** Saves numbered backups with timestamps
- **State management:** Tracks which session is active and what the current backup path is
The key insight: backups should be structured, not raw dumps. Markdown groups information logically so you can find what you need fast when recovering.
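A minimal sketch of the formatting and save steps, assuming hypothetical keys for the extracted data (`session_id`, `user_requests`, and so on are illustration names, not the post's actual schema):

```python
from datetime import datetime, timezone
from pathlib import Path

def format_backup(data: dict) -> str:
    """Render extracted session data as structured markdown sections."""
    lines = [
        "# Session Backup",
        f"**Session ID:** {data['session_id']}",
        f"**Trigger:** {data['trigger']}",
        f"**Generated:** {datetime.now(timezone.utc).isoformat()}",
        "",
        "## User Requests",
        *(f"- {req}" for req in data.get("user_requests", [])),
        "",
        "## Files Modified",
        *(f"- {path}" for path in sorted(set(data.get("files_modified", [])))),
    ]
    return "\n".join(lines)

def save_backup(markdown: str, backup_dir: Path) -> Path:
    """Write the next numbered, timestamped backup file and return its path."""
    backup_dir.mkdir(parents=True, exist_ok=True)
    seq = len(list(backup_dir.glob("backup-*.md"))) + 1
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    path = backup_dir / f"backup-{seq:03d}-{stamp}.md"
    path.write_text(markdown)
    return path
```

The fixed section order is the point: every backup file has the same shape, so you always know where to look during recovery.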
PreCompact hooks fire right before compaction happens. That's your last chance to capture state. This file triggers backup-core with `precompact_auto` or `precompact_manual` as the trigger reason.
Think of it as the emergency backup. StatusLine-based thresholds are proactive. PreCompact is reactive, but still much better than losing everything.
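Registering the PreCompact hook looks something like this in `settings.json`; the script path and flag are hypothetical placeholders for whatever invokes backup-core, while the `auto` and `manual` matchers distinguish automatic compaction from a user-run `/compact`:

```json
{
  "hooks": {
    "PreCompact": [
      {
        "matcher": "auto",
        "hooks": [
          { "type": "command", "command": "python .claude/hooks/backup_core.py --trigger precompact_auto" }
        ]
      },
      {
        "matcher": "manual",
        "hooks": [
          { "type": "command", "command": "python .claude/hooks/backup_core.py --trigger precompact_manual" }
        ]
      }
    ]
  }
}
```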
```markdown
# Session Backup

**Session ID:** abc123...
**Trigger:** tokens_60k_update
**Context Remaining:** 94.0%
**Generated:** 2026-01-26T17:45:00.000Z

## User Requests

- Create two blog posts about context management
- Add the new post to blog-structure.ts
- Fix the internal linking

## Files Modified

- apps/web/src/content/blog/guide/mechanics/context-buffer-management.mdx
- apps/web/src/content/blog/tools/hooks/context-recovery-hook.mdx
- apps/web/src/content/blog/blog-structure.ts

## Tasks

### Created

- **Write Post 1: Context Buffer Management**
- **Write Post 2: Context Recovery Hook**

### Completed

- 2 tasks completed

## Skills Loaded

- content-writer
```
Not a raw transcript. A structured summary that tells you what happened, what changed, and what's still pending.
1. **StatusLine shows backup path:** You see exactly which file has your latest backup
2. **Run /clear:** Start a fresh session (cleaner than continuing with compacted context)
3. **Load the backup:** Read the markdown file to restore context
4. **Continue work:** Claude now has structured context about what you were doing
Working with compacted context means Claude has a summary of the session but has lost the specifics. Loading a structured backup gives you those specifics back.
Keeping both can confuse things. The summary might contradict details in the backup. Starting fresh with /clear and loading only the backup gives cleaner, more reliable context.
The backup system parses Claude Code's JSONL transcript files to pull out meaningful data. Here's what it captures:
| Data Type | How It's Extracted |
| --- | --- |
| User Requests | Messages where `type === "user"` |
| Files Modified | `Write`/`Edit` tool calls with `file_path` |
| Tasks Created | `TaskCreate` tool calls |
| Tasks Completed | `TaskUpdate` with `status === "completed"` |
| Sub-Agent Calls | `Task` tool invocations |
| Skills Loaded | `Skill` tool calls |
| MCP Tool Usage | Tool names starting with `mcp__` |
| Build/Test Runs | Bash commands containing `build`/`test`/`npm`/`pnpm` |
The parser drops the noise. Tool results, system messages, and single-character inputs get filtered out, so you're left with what actually matters for session recovery.
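As a sketch, the extraction-plus-filtering pass might look like this in Python. The entry and field names below are assumptions based on the categories above; verify them against your own transcript files before relying on this:

```python
import json
from pathlib import Path

def parse_transcript(path) -> dict:
    """Extract recovery-relevant events from a Claude Code JSONL transcript.

    Only a few of the categories are shown; the same pattern extends to
    tasks, skills, and build/test commands.
    """
    out = {"user_requests": [], "files_modified": [], "mcp_tools": []}
    for line in Path(path).read_text().splitlines():
        if not line.strip():
            continue
        entry = json.loads(line)
        kind = entry.get("type")
        if kind == "user":
            content = entry.get("message", {}).get("content", "")
            # Drop the noise: tool results arrive as lists rather than
            # strings, and single-character inputs aren't worth keeping.
            if isinstance(content, str) and len(content) > 1:
                out["user_requests"].append(content)
        elif kind == "assistant":
            for block in entry.get("message", {}).get("content", []):
                if not isinstance(block, dict) or block.get("type") != "tool_use":
                    continue
                name = block.get("name", "")
                if name in ("Write", "Edit"):
                    out["files_modified"].append(block.get("input", {}).get("file_path"))
                elif name.startswith("mcp__"):
                    out["mcp_tools"].append(name)
    return out
```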
You could copy important context into a file by hand as you work. You won't. You're focused on the implementation, not on documentation.
The token-based system runs on its own. Starting at 50K tokens used, you get a backup every 10K tokens without thinking about it. Cognitive load is zero.
And the backups are already structured. Not a raw paste of conversation, but organised sections you can scan in seconds.
Auto-compaction is the default because most users never set up a backup system. But if you live in long, multi-hour sessions where precision matters, a token-based backup gives you much better recovery options. On a 1M context window, you'll end up with dozens of snapshots captured throughout the session instead of losing everything to a single compaction event.