Install a three-tier Claude Code permission hook: instant allow for safe calls, instant deny for dangerous ones, LLM check for the gray area. No skip flag.
Stop configuring. Start building.
SaaS builder templates with AI orchestration.
Problem: A file read needs your approval. Click. A shell command needs your approval. Click. Twenty clicks deep, the feature you started has fallen out of your head.
Quick Win: Run three commands and the Permission Hook takes over:

```shell
npm install -g @abdo-el-mobayad/claude-code-fast-permission-hook
cf-approve install
cf-approve config
```

Three commands. Claude now runs without pausing for approval, and the risky calls get caught before they touch your machine. No --dangerously-skip-permissions needed.
Vanilla Claude Code leaves you with two choices, neither of which is good.
Option 1: Click approve constantly. Safe, and flow-killing. A complex feature can mean 50+ permission prompts. Context goes. Momentum goes. Whatever made AI-assisted coding feel useful goes with them.
Option 2: Use --dangerously-skip-permissions. Fast and terrifying. One hallucinated rm -rf / and the machine is gone. Throwaway projects, fine. Real work, no.
A third option exists with the Permission Hook: intelligent delegation. Claude moves without being interrupted. The truly dangerous commands get caught at the door. Anything in the middle is forwarded to a quick LLM that has the context.
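The three tiers can be sketched as a single routing function. This is a minimal illustration only — the actual hook ships as a Node CLI, and the allowlist and deny patterns below are assumptions, not its real rules:

```python
import re

# Illustrative allowlist / denylist -- assumptions for this sketch,
# not the hook's actual configuration.
SAFE_TOOLS = {"Read", "Glob", "Grep"}
DENY_PATTERNS = [
    r"rm\s+-rf\s+/\s*$",
    r"git\s+push\s+--force\s+origin\s+(main|master)",
]

def route(tool: str, command: str = "") -> str:
    # Tier 1: known-safe tools are approved instantly, no model call.
    if tool in SAFE_TOOLS:
        return "allow"
    # Tier 2: hard-coded dangerous patterns are denied on the spot.
    for pattern in DENY_PATTERNS:
        if re.search(pattern, command):
            return "deny"
    # Tier 3: everything ambiguous goes to the LLM with context.
    return "ask_llm"

print(route("Read"))                           # -> allow
print(route("Bash", "rm -rf /"))               # -> deny
print(route("Bash", "docker system prune -af"))  # -> ask_llm
```

The ordering is the point: the two cheap tiers run first, so the model is only consulted for the genuinely ambiguous middle.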
Each request gets evaluated the moment it arrives.
Tier 1 - Fast Approve (No AI Needed)
Safe tools pass straight through. Zero latency. Zero cost. Claude keeps moving.
Tier 2 - Fast Deny (No AI Needed)
Dangerous operations die on the spot:
```shell
# These never execute, period
rm -rf /                      # System destruction
git push --force origin main  # Protected branch overwrite
mkfs /dev/sda                 # Disk formatting
:(){ :|:& };:                 # Fork bomb
```

No LLM call. Hard-coded rules stand between you and the worst-case command.
Tier 3 - LLM Analysis (Cached)
Anything ambiguous routes to a small, cheap model (GPT-4o-mini via OpenRouter) that decides with the surrounding context in mind:
```json
{
  "tool": "Bash",
  "command": "docker system prune -af",
  "working_directory": "/home/user/project",
  "recent_context": "User asked to clean up Docker resources"
}
```

The model reads what you were trying to do and decides accordingly. Each ruling is cached, so the same command later is instant.
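The caching behavior can be sketched in a few lines. Again purely illustrative — this assumes a hash-keyed store with the shipped 168-hour TTL, and none of these names come from the hook itself:

```python
import hashlib
import time

# Hypothetical in-memory decision cache, keyed on tool + command.
_cache = {}

def cache_key(tool: str, command: str) -> str:
    return hashlib.sha256(f"{tool}:{command}".encode()).hexdigest()

def cached_decision(tool, command, ask_llm, ttl_hours=168):
    key = cache_key(tool, command)
    hit = _cache.get(key)
    if hit and time.time() - hit[1] < ttl_hours * 3600:
        return hit[0]                  # same command later: instant
    decision = ask_llm(tool, command)  # model consulted only on a miss
    _cache[key] = (decision, time.time())
    return decision

calls = []
def fake_llm(tool, command):
    calls.append(command)
    return "allow"

cached_decision("Bash", "docker system prune -af", fake_llm)
cached_decision("Bash", "docker system prune -af", fake_llm)
print(len(calls))  # -> 1: the second ruling came from the cache
```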
Settings live at ~/.claude-code-fast-permission-hook/config.json:
```json
{
  "llm": {
    "provider": "openai",
    "model": "openai/gpt-4o-mini",
    "apiKey": "sk-or-v1-your-key",
    "baseUrl": "https://openrouter.ai/api/v1"
  },
  "cache": {
    "enabled": true,
    "ttlHours": 168
  }
}
```

OpenRouter wins on latency, so it's the default. Grab a key at openrouter.ai. Rough cost: $1 per 5,000+ LLM decisions. Most calls land in Tier 1 or 2 anyway, so a single dollar tends to last for months.
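The per-dollar figure checks out on a napkin. Assuming GPT-4o-mini rates of roughly $0.15 per million input tokens and $0.60 per million output tokens, and a ~500-token prompt with a ~50-token ruling — all four numbers are assumptions, not quotes from OpenRouter:

```python
# Assumed rates and token counts -- adjust to current pricing.
input_tokens, output_tokens = 500, 50
cost_per_decision = (input_tokens * 0.15 + output_tokens * 0.60) / 1_000_000
decisions_per_dollar = 1 / cost_per_decision
print(round(decisions_per_dollar))  # -> 9524, comfortably past 5,000
```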
Device Level (recommended): Drop the config into ~/.claude/settings.json once and every project picks it up.
Project Level: Use .claude/settings.local.json when you want rules scoped to one repo.
The installer writes this into your settings:
```json
{
  "hooks": {
    "PermissionRequest": [
      {
        "matcher": "*",
        "hooks": [
          {
            "type": "command",
            "command": "cf-approve permission"
          }
        ]
      }
    ]
  }
}
```

Error: "Permission denied" on all operations
Your API key is missing or wrong:
```shell
cf-approve config
```

Re-enter the OpenRouter key and you're back.
Error: "Hook not triggering"
Confirm the install is healthy:
```shell
cf-approve doctor
cf-approve status
```

Behavior seems inconsistent
Wipe the decision cache:
```shell
cf-approve clear-cache
```

Two hooks anchor how ClaudeFast thinks about Claude Code. This one handles permissions. The Skill Activation Hook handles the rest, pulling the right skills into context at the moment they matter.
Run them together and the friction drops out of your day. You talk normally. Claude works without pauses. The orchestration runs underneath, out of sight.
Permission fatigue gone. Skip flags gone. Claude is left to do the part it's good at, which is writing your software while your attention stays on the larger plan.