I spent three months running Claude Code without hooks. Every commit, I'd manually check for secrets. Every deploy, I'd eyeball the config. Every expensive model call, I'd notice it after the bill arrived.
Then I found the hooks system in .claude/settings.json and automated all of it. Five hooks. Took about ten minutes to set up. Changed how I work completely.
What Hooks Are (30-Second Version)
Hooks are shell commands that run automatically before or after Claude Code actions. You configure them in .claude/settings.json. They're like git hooks but for Claude Code tool calls.
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "tool_name_pattern",
        "hooks": [
          { "type": "command", "command": "your-script.sh" }
        ]
      }
    ]
  }
}
That's it. The matcher is a regex matched against the tool name (Write, Edit, Bash, and so on). The command runs in your shell and receives the full tool-call details as JSON on stdin. If a PreToolUse hook exits with code 2, the tool call is blocked and the hook's stderr is fed back to Claude.
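Under the hood, the hook command receives the tool call as JSON on stdin, which is what makes content-aware checks possible. A minimal sketch of a blocking hook as a standalone function — the guard name and the blocked pattern are illustrative, and the field names follow the hook input format:

```shell
#!/bin/sh
# Sketch of a pre-tool hook body. Claude Code pipes the tool call
# to stdin as JSON, e.g.
#   {"tool_name":"Bash","tool_input":{"command":"rm -rf build"}}
# Exit 0 allows the call; exit 2 blocks it and sends stderr to Claude.
guard() {
  input=$(cat)                       # the raw tool-call JSON
  case "$input" in
    *'rm -rf'*)
      echo 'BLOCKED: destructive command' >&2
      return 2
      ;;
  esac
  return 0
}
```

Save it as a script and point a hook entry's command at it (for example "command": "sh .claude/hooks/guard.sh") and every matching tool call flows through it.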
Hook 1: Secret Scanner
This one paid for itself on day one.
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Write|Edit",
        "hooks": [
          {
            "type": "command",
            "command": "grep -Eq 'ANTHROPIC_API_KEY|AWS_SECRET|PRIVATE_KEY|password[[:space:]]*=' && { echo 'BLOCKED: Potential secret in output' >&2; exit 2; } || exit 0"
          }
        ]
      }
    ]
  }
}
Every time Claude writes or edits a file, this scans the output for common secret patterns. It's crude. It catches maybe 80% of cases. But that 80% used to slip through because I wasn't checking consistently.
For anything more sophisticated, point the command at a proper scanner like gitleaks or trufflehog:
{
  "type": "command",
  "command": "gitleaks detect --no-git --source /dev/stdin"
}
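Pulled out of the JSON config, the crude version becomes easier to test and extend. A sketch — scan_secrets is a hypothetical helper, and the pattern list mirrors the hook above and is far from exhaustive:

```shell
# scan_secrets: reads content on stdin, fails if a common secret
# pattern appears. Return code 2 matches the hook blocking convention.
scan_secrets() {
  if grep -Eq 'ANTHROPIC_API_KEY|AWS_SECRET|PRIVATE_KEY|password[[:space:]]*=' ; then
    echo 'BLOCKED: Potential secret in output' >&2
    return 2
  fi
  return 0
}
```

Keeping it as a function means you can grow the pattern list, or swap the body for a gitleaks call, without touching the JSON config again.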
Hook 2: Cost Warning on Expensive Models
This was the one that surprised me. I didn't realise how often Claude Code was using Opus for trivial tasks until I started logging it.
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": ".*",
        "hooks": [
          {
            "type": "command",
            "command": "python3 -c \"import os, sys; model=os.environ.get('CLAUDE_MODEL',''); print(f'Using {model}') if 'opus' in model.lower() else None; sys.exit(0)\""
          }
        ]
      }
    ]
  }
}
This just prints a notice when Opus is active. Not a blocker — sometimes you want Opus. But seeing "Using claude-opus-4-6" before every tool call makes you think about whether you actually need it for this particular task.
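The check itself is simple enough to keep as a plain shell function if you'd rather not inline Python. A sketch — model_notice is illustrative, and the model name is passed in as an argument rather than read from the environment:

```shell
# model_notice: print a notice when the model name looks expensive.
# The "opus" substring check is an assumption — tune to taste.
model_notice() {
  case "$1" in
    *[Oo]pus*) echo "Using $1" ;;
  esac
}
```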
We wrote up the full cost control strategy in our Claude Code cost optimisation guide. Hooks are one part of it. The CLAUDE.md model selection rules are the other.
Hook 3: Pre-Commit Lint Check
Simple but effective. Runs your linter before Claude Code commits anything:
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          {
            "type": "command",
            "command": "grep -q 'git commit' || exit 0; npm run lint --silent 2>&1 || { echo 'Lint failed — fix before committing' >&2; exit 2; }"
          }
        ]
      }
    ]
  }
}
Catches formatting issues, unused imports, type errors. The --silent flag keeps the output clean. If the linter fails, the commit is blocked and Claude sees the error message.
You can swap npm run lint for whatever your project uses. cargo clippy for Rust. ruff check for Python. golangci-lint run for Go.
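Because the hook sees the whole tool call on stdin rather than just the command string, the gating logic boils down to a substring check plus the linter. A sketch — lint_gate is a hypothetical helper, with the linter passed in as arguments so you can swap it per project:

```shell
# lint_gate: reads the tool-call JSON on stdin; only runs the linter
# (passed as arguments) when the command is a git commit.
lint_gate() {
  if grep -q 'git commit'; then
    "$@" || { echo 'Lint failed — fix before committing' >&2; return 2; }
  fi
  return 0
}
```

In the tests below, `true` and `false` stand in for a passing and a failing linter.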
Hook 4: Deploy Safeguard
We had an incident. Claude Code ran a deploy command on a Friday afternoon. Nobody asked it to. It was trying to be helpful after fixing a bug. The fix was fine. The unsolicited deploy was not.
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          {
            "type": "command",
            "command": "grep -Eq 'deploy|publish|push.*(main|production)' && { echo '⚠️ Deploy/push to production detected. Run it manually instead.' >&2; exit 2; } || exit 0"
          }
        ]
      }
    ]
  }
}
This blocks any Bash command whose text matches deploy-like patterns. It's a blunt instrument. But after that Friday incident, blunt is exactly what we wanted.
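The regex is worth testing on its own, since a pattern that's too broad blocks innocent pushes. A sketch — deploy_like is an illustrative helper around the same pattern:

```shell
# deploy_like: succeeds when a command string looks like a deploy
# or a push to a protected branch.
deploy_like() {
  printf '%s' "$1" | grep -Eq 'deploy|publish|push.*(main|production)'
}
```

Note that feature-branch pushes fall through: "git push origin feature/x" doesn't match, so day-to-day work isn't blocked.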
For teams managing Claude Code across multiple developers, this kind of safeguard works even better as an enterprise managed setting that can't be overridden at the project level.
Hook 5: Session Logger
Not a safeguard — a learning tool. Logs every tool call to a local file so you can review what Claude did during a session. The hook receives the tool-call JSON on stdin, so this one leans on jq:
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": ".*",
        "hooks": [
          {
            "type": "command",
            "command": "echo \"$(date +%H:%M:%S) | $(jq -r '.tool_name + \" | \" + (.tool_input|tostring)')\" >> .claude/session.log"
          }
        ]
      }
    ]
  }
}
I review .claude/session.log at the end of the day. It's useful for spotting patterns: which tools get called most, where Claude gets stuck in loops, which tasks take more tool calls than expected.
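A quick way to spot those patterns is to count tool names straight out of the log. A sketch, assuming the time | tool | input line format above (top_tools is a hypothetical helper):

```shell
# top_tools: most frequently called tools, given a session log where
# each line looks like "14:02:31 | Bash | {...}".
top_tools() {
  cut -d'|' -f2 "$1" | sort | uniq -c | sort -rn
}
```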
If you're curious about structuring your overall Claude Code workflow, our daily workflows guide covers the broader patterns beyond just hooks.
Composing Hooks
The real power is stacking them. My actual .claude/settings.json runs all five:
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          {
            "type": "command",
            "command": "grep -Eq 'deploy|publish|push.*main' && { echo 'Deploy blocked — use manual deploy' >&2; exit 2; } || exit 0"
          },
          {
            "type": "command",
            "command": "grep -q 'git commit' || exit 0; npm run lint --silent 2>&1 || exit 2"
          }
        ]
      }
    ],
    "PostToolUse": [
      {
        "matcher": "Write|Edit",
        "hooks": [
          {
            "type": "command",
            "command": "gitleaks detect --no-git --source /dev/stdin 2>/dev/null || exit 0"
          }
        ]
      },
      {
        "matcher": ".*",
        "hooks": [
          {
            "type": "command",
            "command": "echo \"$(date +%H:%M:%S) | $(jq -r .tool_name)\" >> .claude/session.log"
          }
        ]
      }
    ]
  }
}
PreToolUse hooks whose matcher applies run in order; if any exits with code 2, the tool call is blocked and Claude sees the stderr. PostToolUse hooks run after the tool completes, so they can flag problems but not prevent them.
Packaging Hooks for Your Team
Once you've got a set of hooks that work, you can package them in a marketplace plugin so your entire team gets them automatically. The hooks config goes in the plugin's .claude/settings.json, and anyone who installs the plugin inherits the hooks.
We've done this for all our production plugins. Our guide on publishing a marketplace plugin covers the full process including hooks, skills, and CLAUDE.md bundling.
What I'd Add Next
I want a hook that tracks token usage per session and warns when a session crosses a cost threshold. The environment variables aren't quite there yet for this, but it's coming. I also want better matcher patterns — regex works but something more structured for matching specific tool+argument combinations would be cleaner.
Hooks are one of those features that seem minor until you use them. Then you wonder how you worked without them.