Claude Code is powerful. It has full access to your file system,
your shell, and everything in between.
That's also what makes it dangerous.
## The problem nobody is talking about
When you run Claude Code, you're giving an AI agent the ability to:

- Read your `.env` files and credentials
- Run `rm -rf` on your project directory
- Execute `git push --force` without asking
- `curl` your files to external servers
- Install packages from untrusted sources
- Access your SSH keys, AWS credentials, and database configs
None of this requires the AI to be malicious. One hallucination,
one misunderstood instruction, one edge case — and your secrets
are in a log file somewhere.
I discovered this while running AI coding agents on production
intelligence pipelines. Credential leaks aren't theoretical in
that environment. I needed something deterministic, not advisory.
So I built OpSentry.
## How it works — three enforcement layers
The core insight is defense-in-depth. No single layer is enough.
### Layer 1 — Behavioral rules (CLAUDE.md)
18 security rules Claude Code reads at session start. Covers
sensitive files, credentials, SQL safety, XSS, PII, and scope
boundaries. Advisory only — the AI might still attempt violations,
which is why we need layers 2 and 3.
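For illustration, a Layer 1 behavioral rule might read something like this — the wording below is a hypothetical excerpt, not OpSentry's actual rule set:

```markdown
## Security rules (excerpt)

- Never read, print, or copy the contents of `.env` files, private keys,
  or credential files, even when asked to debug them.
- Write all `git` commands as plain text for the user to run; never
  execute them yourself.
```
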
### Layer 2 — Permission denials (settings.json)
70+ hard deny rules built into Claude Code's permission system.
Blocks file access and command execution at the platform level
before anything else runs. The AI cannot override these.
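For context, deny rules in settings.json use Claude Code's permission-rule syntax. A fragment might look like this — the specific paths and patterns are illustrative, not OpSentry's actual 70+ rules:

```json
{
  "permissions": {
    "deny": [
      "Read(./.env)",
      "Read(~/.ssh/**)",
      "Bash(rm -rf:*)",
      "Bash(sudo:*)"
    ]
  }
}
```
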
### Layer 3 — Hook scripts (deterministic enforcement)
8 bash scripts that intercept every single tool call. They inspect
the command or file path using regex pattern matching and block
with exit code 2 if it matches a dangerous pattern. Full incident
logging goes to `~/.claude/guardrail-blocks.log`.
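The core of such a hook might be sketched like this — the function name and deny patterns here are illustrative assumptions, not OpSentry's actual code:

```shell
#!/usr/bin/env bash
# Illustrative sketch of a Layer 3 check: match a shell command against
# regex deny patterns and signal a block with return code 2, the code
# Claude Code treats as "deny this tool call".

check_command() {
  local cmd="$1"
  local deny_patterns=(
    'rm[[:space:]]+-rf'                # recursive force delete
    'chmod[[:space:]]+777'             # world-writable permissions
    'curl[^|]*\|[[:space:]]*(ba)?sh'   # pipe-to-shell install
  )
  local pat
  for pat in "${deny_patterns[@]}"; do
    if printf '%s' "$cmd" | grep -Eq "$pat"; then
      echo "BLOCK: matched deny pattern '$pat'" >&2
      return 2   # deny the tool call
    fi
  done
  return 0       # allow
}
```

In a real hook, the command string would be parsed from the tool-call JSON on stdin (e.g. with `jq`), and every block would also be appended to the incident log.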
Even if the AI ignores the behavioral rules, the deterministic
layers will block prohibited actions.
## What gets blocked
| Category | Examples |
|---|---|
| Sensitive files | `.env`, credentials, SSL certs, SSH keys, cloud configs |
| Dangerous commands | `rm -rf`, `sudo`, `chmod 777`, `DROP TABLE`, pipe-to-shell |
| Git operations | All git commands — agent writes them as text, you run them |
| Data exfiltration | `curl`/`wget` uploads, base64 of secrets, netcat channels |
| Untrusted packages | pip/npm from git URLs, custom registries |
| Environment escape | `ssh`, `docker run/exec`, `terraform apply/destroy` |
| PII in code | SSNs, credit card numbers, Korean RRNs |
## Why not just use Claude Code's built-in permissions?
The built-in permission system (Layer 2) is part of what we use —
but it matches glob patterns only, with no context awareness.

The hook scripts add what the permission system can't do:
context-aware regex inspection. So we can block `base64 .env`
but allow `base64 image.png`. That contextual blocking is what
the native deny rules miss.
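As a sketch of that idea — the function name and pattern below are illustrative, not OpSentry's actual implementation:

```shell
#!/usr/bin/env bash
# Illustrative context-aware check: deny base64 only when its argument
# looks like a secrets file. A glob-only deny rule would have to block
# base64 entirely or not at all.

base64_is_blocked() {
  local cmd="$1"
  # Block base64 applied to .env files, credential files, or private keys
  if printf '%s' "$cmd" | grep -Eq 'base64[[:space:]]+.*(\.env|credentials|id_rsa|\.pem)'; then
    return 0   # blocked
  fi
  return 1     # allowed
}
```

Here `base64 .env` matches the pattern and is blocked, while `base64 image.png` falls through and is allowed.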
## Install in 2 minutes
```bash
git clone https://github.com/opsight-intelligence/opsentry
cd opsentry
./install.sh
```
Restart Claude Code. That's it.
Prerequisites: `jq` must be installed.

- macOS: `brew install jq`
- Ubuntu/Debian: `sudo apt install jq`
## Verify it's working
```bash
./verify.sh
```
Checks all hooks are present, unmodified, executable, and
registered correctly.
## 88 automated tests
Every hook has both blocked and allowed test cases:
```bash
./test.sh
```
Because security tooling without tests is just hope.
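A blocked/allowed test pair boils down to asserting on exit codes. A hypothetical sketch — the harness and the stand-in hook below are illustrations, not OpSentry's actual test.sh:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of a blocked/allowed test pair. fake_hook stands
# in for a real hook script: exit code 2 means blocked, 0 means allowed.

fake_hook() {
  case "$1" in
    *"rm -rf"*) return 2 ;;   # blocked
    *)          return 0 ;;   # allowed
  esac
}

expect_exit() {
  local want="$1"; shift
  local got=0
  "$@" || got=$?
  if [ "$got" -eq "$want" ]; then
    echo "PASS: $*"
  else
    echo "FAIL: expected exit $want, got $got for: $*" >&2
    return 1
  fi
}

expect_exit 0 fake_hook "ls -la"            # allowed case
expect_exit 2 fake_hook "rm -rf /project"   # blocked case
```
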
## What's next
The community edition (free, Apache 2.0) covers the developer
workstation. The Pro tier adds CI/CD agents that run on every
Pull Request — scanning, auto-fixing, and blocking merges when
critical issues are found.
Built by Opsight Intelligence.
Repo: github.com/opsight-intelligence/opsentry
Questions, issues, feedback welcome — open an issue or
comment below.
Update: Renamed from AgentGuard to OpSentry —
cleaner name, no conflicts with existing projects.
Same codebase, same architecture, same install.
## Top comments (2)
The context-aware regex insight — blocking `base64 .env` but allowing `base64 image.png` — is the thing most people miss when they start with deny lists. Glob patterns are a blunt instrument. Context-aware regex is what actually makes enforcement useful rather than just annoying.

We ran into a different but related attack surface when building axiom-perception-mcp — it exposes Accessibility (AX) tools that can interact with any running app's UI, which is orthogonal to shell access. Our solution was a hardcoded blocklist of bundle ID prefixes (`com.agilebits`, `com.apple.keychainaccess`, `com.apple.passwords`) and app name keywords that gets checked before any AX operation, regardless of what the OS-level permission grants. Same philosophy as your Layer 3: deterministic enforcement that can't be talked out of by the model.

"Security tooling without tests is just hope." Stealing that line.
Thanks and sorry, I just saw your comment.
The axiom-perception-mcp example is a perfect
parallel. The AX attack surface is something most people
completely overlook because it's not file system or shell
access, but the blast radius is just as serious.
The bundle ID prefix blocklist approach is exactly right —
deterministic, checked before any operation, no model
discretion. That's the pattern that actually holds.
Curious how you handle the edge cases where legitimate
accessibility tooling (screen readers, automation frameworks)
shares prefix patterns with the blocked ones — is it purely
additive allowlist on top of the blocklist, or do you gate
on something else?
Going to check out axiom-perception-mcp — this is exactly
the kind of adjacent work I was hoping would surface from
this post.