I built a 15,000-line game with Claude Code while sleeping. Here are the 4 hooks that made it possible.
The Problem
Claude Code is powerful, but it stops every few minutes to ask "should I continue?" or "which approach?" In unattended sessions, that means your AI sits idle until you come back.
The Solution
Claude Code Ops Starter — 4 production-tested bash hooks that enforce autonomous decision-making.
What's Inside
1. Context Monitor (context-monitor.sh)
Counts tool calls as a proxy for context window usage. Warns at 3 thresholds:
- Soft (80): "Consider deferring large tasks"
- Hard (120): "Wrap up and hand off"
- Critical (150): Auto-generates checkpoint file for session handoff
```shell
# Configurable thresholds
SOFT_WARNING=80
HARD_WARNING=120
CRITICAL=150
```
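The counting idea is simple enough to sketch. The following is a hypothetical simplification, not the kit's actual script: each PostToolUse invocation bumps a per-session counter file and reports which threshold, if any, has been crossed.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the counting idea, not the kit's exact script.
# Each PostToolUse run bumps a per-session counter file and reports the level.
SOFT_WARNING=80
HARD_WARNING=120
CRITICAL=150

bump_and_check() {
  local file="$1"
  local count=$(( $(cat "$file" 2>/dev/null || echo 0) + 1 ))
  echo "$count" > "$file"
  if   [ "$count" -ge "$CRITICAL" ];     then echo "critical"
  elif [ "$count" -ge "$HARD_WARNING" ]; then echo "hard"
  elif [ "$count" -ge "$SOFT_WARNING" ]; then echo "soft"
  else echo "ok"
  fi
}

# Demo: seed the counter just below the soft threshold, then bump once.
demo_file=$(mktemp)
echo 79 > "$demo_file"
bump_and_check "$demo_file"   # prints "soft" (call #80)
```

Tool calls are a rough proxy for context usage, but the counter costs nothing and never needs the model's cooperation.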
2. No-Ask-Human (no-ask-human.sh)
Blocks all AskUserQuestion tool calls. Instead of stopping to ask, the AI:
- Decides on its own
- Logs uncertainty to `~/pending_for_human.md`
- Moves to the next task

Override with `CC_ALLOW_QUESTIONS=1` when you want questions back.
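The gating logic can be sketched like this (a hypothetical simplification; the real hook reads the tool call from Claude Code's hook input rather than taking it as an argument):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the gating idea, not the kit's exact script.
# The real hook reads the tool name from Claude Code's hook input; here it
# is passed as an argument to keep the sketch self-contained.
should_block() {
  local tool_name="$1"
  if [ "$tool_name" = "AskUserQuestion" ] && [ "${CC_ALLOW_QUESTIONS:-0}" != "1" ]; then
    echo "blocked: decide autonomously, log the question to ~/pending_for_human.md" >&2
    return 1   # nonzero exit = blocked, per the kit's convention
  fi
  return 0
}

should_block "Edit" && echo "Edit allowed"                             # prints "Edit allowed"
should_block "AskUserQuestion" 2>/dev/null || echo "question blocked"  # prints "question blocked"
```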
3. Syntax Check (syntax-check.sh)
Auto-runs verification after every file edit:
- Python: `python -m py_compile`
- Shell: `bash -n`
- JSON: `jq empty`

Catches errors immediately instead of 50 tool calls later. Gracefully skips if `jq` or `python` aren't installed.
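A minimal sketch of the dispatch-by-extension idea (hypothetical; the function name and exact skip behavior are assumptions based on the description above):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the dispatch-by-extension idea, not the kit's script.
# Unknown file types and missing checkers are skipped without failing.
check_file() {
  local f="$1"
  case "$f" in
    *.py)   command -v python3 >/dev/null 2>&1 || return 0
            python3 -m py_compile "$f" ;;
    *.sh)   bash -n "$f" ;;
    *.json) command -v jq >/dev/null 2>&1 || return 0
            jq empty "$f" ;;
    *)      return 0 ;;   # nothing to check for this type
  esac
}

# Demo: a shell file with a missing "fi" is caught right away.
tmp=$(mktemp)
bad="$tmp.sh"
echo 'if true; then echo hi' > "$bad"
check_file "$bad" 2>/dev/null && echo "ok" || echo "syntax error detected"
```

Because the hook only parses (`bash -n`, `py_compile`, `jq empty`), it never executes the edited file, so it is safe to run after every write.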
4. Decision Warn (decision-warn.sh)
Flags `rm -rf`, `git reset --hard`, and `DROP TABLE` before execution. Warns but doesn't block; uncomment one line in the script for hard blocking.
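The pattern scan can be sketched roughly like this (hypothetical; the real script checks the incoming Bash command from the hook input):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the pattern scan, not the kit's exact script.
# Warns on stderr but still returns 0, so the command proceeds.
warn_if_dangerous() {
  local cmd="$1" pattern
  for pattern in 'rm -rf' 'git reset --hard' 'DROP TABLE'; do
    case "$cmd" in
      *"$pattern"*)
        echo "WARNING: command contains '$pattern'" >&2
        # return 1   # uncomment to hard-block instead of warn
        ;;
    esac
  done
  return 0
}

warn_if_dangerous "git reset --hard HEAD~3"   # warns on stderr, exits 0
warn_if_dangerous "git status"                # silent
```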
Install
```shell
git clone https://github.com/yurukusa/claude-code-ops-starter.git
cd claude-code-ops-starter
bash install.sh
```
No dependencies besides bash. jq and python are optional (hooks skip gracefully without them).
How It Works
Claude Code has a hooks system that runs shell scripts at specific lifecycle points:
- PreToolUse: Runs before a tool call (can block it with exit 1)
- PostToolUse: Runs after a tool call (for monitoring and validation)
The starter kit uses both:
| Hook | Trigger | Effect |
|---|---|---|
| no-ask-human.sh | PreToolUse (AskUserQuestion) | Blocks with exit 1 |
| decision-warn.sh | PreToolUse (Bash) | Warns (exit 0) |
| syntax-check.sh | PostToolUse (Edit/Write) | Validates (exit 0) |
| context-monitor.sh | PostToolUse (all) | Monitors (exit 0) |
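Wiring this up looks something like the following in `.claude/settings.json`. The paths and matchers here are illustrative assumptions; `install.sh` handles the actual registration.

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "AskUserQuestion",
        "hooks": [{ "type": "command", "command": "~/.claude/hooks/no-ask-human.sh" }]
      },
      {
        "matcher": "Bash",
        "hooks": [{ "type": "command", "command": "~/.claude/hooks/decision-warn.sh" }]
      }
    ],
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [{ "type": "command", "command": "~/.claude/hooks/syntax-check.sh" }]
      },
      {
        "matcher": "",
        "hooks": [{ "type": "command", "command": "~/.claude/hooks/context-monitor.sh" }]
      }
    ]
  }
}
```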
Background
These hooks come from 200+ hours of real autonomous Claude Code operation. Every hook solved a real problem:
- The context monitor was born from sessions that ran out of context without warning
- No-ask-human from overnight sessions where the AI sat idle for hours waiting for a human answer
- Syntax check from discovering errors 50 tool calls after they were introduced
- Decision warn from a `git reset --hard` that wiped an afternoon of work
These 4 hooks are the foundation; the full system is the house. The CC-Codex Ops Kit adds multi-agent orchestration, stall detection, a watchdog, a task queue, and 20+ hooks: 15 minutes to set up what took us 200 hours to build.
Before your next overnight run: scan your Claude Code setup for free (30 seconds, nothing installed) and see whether these 4 hooks cover your critical gaps.
Free Tools for Claude Code Operators
| Tool | What it does |
|---|---|
| cc-health-check | 20-check setup diagnostic (CLI + web) |
| cc-session-stats | Usage analytics from session data |
| cc-audit-log | Human-readable audit trail |
| cc-cost-check | Cost per commit calculator |
Interactive: Are You Ready for an AI Agent? — 10-question readiness quiz | 50 Days of AI — the raw data