80% of documented breaches in 2024 involved exposed credentials. Not zero-days. Not sophisticated exploits. Credentials. Plaintext, in repos, in committed environment files, in Slack. When I read that I had to read it twice — because I had just reviewed three PRs with exactly that problem.
And the most uncomfortable part wasn't finding them. It was realizing all three had already gone through review before they landed on my desk.
Security as Proof of Work: The Compliance Theater Trap
Something shifted in the last few years and I couldn't quite name it until recently. Security became a signaling activity. Not in the sense that it's fake — in the sense that visible work started crowding out effective work.
We've got badges. We've got dashboards. We've got Dependabot firing automated PRs, Snyk scanning every push, SBOM generated in the pipeline, SPF and DKIM configured in DNS, DMARC in reject mode, secrets in Vault, automatic token rotation. All of that exists. All of that is real and it cost weeks of work.
And someone still committed API_KEY=sk-prod-abc123... in a .env file that someone else decided not to add to .gitignore because "it's just the internal repo."
That's what I call security as proof of work. You do the work. You demonstrate the work. The work doesn't protect you from what nobody measured.
The Real Problem: The Visibility Asymmetry
When you configure DMARC, there's a DNS change. It's auditable. It shows up in logs. You can point at a screen and say "I did this." When you build a culture where nobody commits secrets because everyone genuinely understands why — that doesn't show up in any dashboard.
The invisible work of security is:
- The conversation you had with the junior dev explaining what a secret is and why it matters
- The onboarding process you designed so that day one includes setting up the team password manager
- The PR you rejected and the comment you wrote explaining the reasoning, not just the error
- The decision to not give write access to the production repo even though it would've been "more convenient"
- The moment you said "let's move this to environment variables" before anyone asked
None of that shows up in the compliance report. None of it scores points in the audit. But it's exactly what's missing when a secret lands in main.
Why Tooling Alone Isn't Enough
I ran an exercise this week. I went back through the project's git history to check how many times gitleaks or detect-secrets had actually blocked something before it reached review.
The answer was zero. Not because they weren't configured — they were. The secret was in a file the hook wasn't scanning, because someone had added that path to the exceptions list six months ago for a reason nobody remembers.
```toml
# .gitleaks.toml I inherited
[allowlist]
description = "Files excluded from scanning"
paths = [
    # ⚠️ This was added in July and nobody knows why
    '''(?i)(\.env\.example|\.env\.local|config/secrets)''',
    # ⚠️ This path includes the file where the real secrets actually were
    '''(?i)(tests/fixtures)''',
]
```
That's what happens when tooling gets configured once and nobody looks at it again. It becomes theater. The scanner runs, the badge says green, the secret is in the repo.
The same thing bit me when I was optimizing Docker images — vulnerability scanners on image layers will tell you exactly which CVEs are present, but if your build process copies files that shouldn't be there, the problem isn't the image, it's the Dockerfile. The tool sees what it can see.
The Cyber Café Moment — And What I Actually Learned
In 2005 I was 14 years old and working at a cyber café. When the connection dropped at 11pm with a full house, I had to fix it. No documentation. No runbook. Nobody to call.
I learned networking by brute force on those nights. But more than networking, I learned something about security that took me years to articulate: the adversary doesn't warn you when they're coming, and they don't respect your compliance schedule.
The guys exploiting the café machines to proxy traffic and leech bandwidth — they weren't waiting for me to have the antivirus updated. They moved when the system was vulnerable, which was usually 2am when I wasn't there.
That's real asymmetry. And twenty years later, it's still the central problem.
What Teams Should Be Measuring (But Aren't)
If I had to design security metrics that actually matter, I wouldn't start with Dependabot. I'd start here:
Time to detection of a secret in the repo — not time to resolution, time until someone notices. If it took you three days to realize there was a key in main, the scanner isn't functioning as a real alert tool.
PR rejection rate for security reasons — if this number is zero, it's not because the team is perfect. It's because nobody is looking.
Real scanner coverage — not "the scanner runs," but what percentage of code that reached main actually passed through the scanner without exceptions. This is different, and the difference is enormous.
Ratio of proactively vs. reactively rotated secrets — if all your secrets get rotated after an incident, your security process is reactive dressed up as proactive.
```python
# What I want to know vs. what dashboards tell me

# Typical dashboard:
visible_metrics = {
    "dependabot_prs_merged": 47,
    "snyk_vulnerabilities_fixed": 12,
    "dmarc_compliance": "100%",
    "secrets_vault_rotations": 8,
}

# What actually matters and nobody measures:
real_metrics = {
    # How long did it take to detect the last exposed secret?
    "time_to_detect_last_secret": "3 days",
    # What % of code was actually scanned (without exceptions)?
    "real_scanner_coverage": "67%",  # not 100%
    # How many rotations happened BEFORE an incident?
    "proactive_vs_reactive_rotations": "2 out of 8",
    # How many PRs were rejected for security reasons this month?
    "prs_rejected_for_security": 0,  # ← this is a red flag
}
```
When I'm thinking through automated workflows with Claude Code or how AI agents are already solving things I was reimplementing by hand, I keep landing at the same point: automation amplifies what you already have. If you have a healthy process, automation scales it. If you have security theater, automation scales the theater.
The Gotchas Nobody Documents
Gitleaks with inherited exceptions — go look at the allowlist in your gitleaks or detect-secrets config right now. If it has paths or patterns you can't explain, they're probably covering something they shouldn't be.
Pre-commit hooks that get skipped — git commit --no-verify exists and your team knows about it. The pre-commit hook isn't a barrier, it's a reminder. If the culture isn't there, the hook isn't enough.
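Since `--no-verify` can't be blocked client-side, the only real barrier is running the same scan server-side in CI. A minimal sketch of that step — `enforce_scan` is a hypothetical wrapper, and the default command assumes gitleaks v8's `detect` subcommand, which exits non-zero on findings:

```python
import subprocess

def enforce_scan(cmd=("gitleaks", "detect", "--source", ".")):
    """Run the secret scanner server-side, where --no-verify can't skip it.

    Returns True only when the scan finds nothing; wire the False case
    to a hard pipeline failure.
    """
    result = subprocess.run(list(cmd))
    return result.returncode == 0
```

The point isn't the wrapper — it's that the exit code of the scan gates the merge, not a hook the committer controls.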
Secrets in commit messages — code scanners don't always scan commit history. A secret that landed and got removed the same day can still be accessible in history if the repo is public or if someone cloned before the cleanup.
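If you want to check whether your history is clean, grep the full patch log rather than the working tree. A rough sketch — `scan_history` is a name I'm making up, and the two patterns are illustrative only; real scanners ship far larger rule sets:

```python
import re
import subprocess

# Illustrative patterns only — real scanners cover far more formats
SECRET_RE = re.compile(r"(sk-[A-Za-z0-9]{8,}|AKIA[0-9A-Z]{16})")

def scan_history(repo="."):
    """Search the full patch history — including deleted lines — for secret-shaped strings."""
    log = subprocess.run(
        ["git", "-C", repo, "log", "-p", "--all"],
        capture_output=True, text=True, errors="replace",
    ).stdout
    return sorted(set(SECRET_RE.findall(log)))
```

A hit here means the secret is compromised even if the current tree is clean — rotation, not deletion, is the fix.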
Environment variables in logs — this one burns me the most. You set everything up perfectly, secrets are in Vault, variables get injected at runtime. Then someone adds a console.log(process.env) to debug something and commits it without thinking. Production logs now have everything.
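That example is Node, but the failure mode is language-agnostic. One mitigation is a logging filter that scrubs secret-looking values before they reach any sink. A Python sketch — the name heuristics and the 8-character floor are my assumptions, adjust them to your team's conventions:

```python
import logging
import os

SECRET_HINTS = ("KEY", "TOKEN", "SECRET", "PASSWORD")  # naming heuristic, adapt per team

class EnvRedactor(logging.Filter):
    """Scrub values of secret-looking environment variables out of log records."""

    def __init__(self):
        super().__init__()
        # Snapshot values whose variable names look secret-bearing
        self.values = [v for k, v in os.environ.items()
                       if any(h in k.upper() for h in SECRET_HINTS) and len(v) >= 8]

    def filter(self, record):
        msg = record.getMessage()
        for value in self.values:
            msg = msg.replace(value, "[REDACTED]")
        record.msg, record.args = msg, None
        return True
```

It won't save you from someone dumping the whole environment on purpose, but it catches the accidental "log everything while debugging" commit.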
The .env.example problem — that file exists to document which variables you need. Invariably, at some point, someone edits it with real values "temporarily" and commits it. The .env.example should be reviewed in every PR that touches it.
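That review can be partially automated with a lint step that flags anything in `.env.example` that doesn't look like a placeholder. A sketch — `check_env_example` and the placeholder patterns are my assumptions, not a standard:

```python
import re
from pathlib import Path

# Values we treat as safe placeholders: empty, <angle-bracket docs>, or obvious dummies
PLACEHOLDER_RE = re.compile(
    r"^[A-Za-z0-9_]+=(|<[^>]*>|your-.*|changeme|xxx+)$", re.IGNORECASE
)

def check_env_example(path=".env.example"):
    """Return (line_number, line) pairs that look like real values, not placeholders."""
    suspicious = []
    for n, raw in enumerate(Path(path).read_text().splitlines(), 1):
        line = raw.strip()
        if not line or line.startswith("#"):
            continue
        if not PLACEHOLDER_RE.match(line):
            suspicious.append((n, line))
    return suspicious
```

Run it in CI on any PR that touches the file, and the "temporarily committed a real value" case fails loudly instead of landing quietly.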
FAQ: Security as Proof of Work
What's the first thing I should check in an inherited project?
The git history searching for secret patterns, the scanner allowlists, and repo access permissions. In that order. The scanner allowlist is the most ignored and the most dangerous because it creates a false sense of coverage.
So Dependabot and Snyk are useless?
No, they're necessary but not sufficient. They solve the problem of dependencies with known vulnerabilities, which is real. They don't solve hardcoded secrets, bad access practices, or insecure configurations. They're a layer, not a complete solution.
How do you convince a team to take secrets seriously when delivery pressure is high?
Not with security talks. With the first real incident that hits close to home. Before that, the most effective thing I've found is automating the rejection — make CI/CD fail if detect-secrets finds anything, with no exceptions unless a tech lead explicitly approves them.
What do I do if there are already secrets in git history?
First, assume they're compromised and rotate everything. Second, use git filter-repo to rewrite history (not git filter-branch, which is deprecated). Third, force everyone who cloned the repo to do a fresh clone, because their local copies still have the old history.
Is this a technical problem or a cultural one?
Both, but in different proportions depending on the team. The tooling is the easy part — a day of work and you've got gitleaks, detect-secrets, and pre-commit hooks configured. The hard part is getting everyone on the team to understand why it matters, not just that there's a rule. The difference between those two things is exactly what separates the team that gets the hardcoded secret approved in main from the team that doesn't.
Is it worth getting security certifications (CISSP, CEH, etc.)?
Depends on what for. If you want to do offensive security or work exclusively in security, yes. If you're a developer or architect who wants to be more rigorous about security day-to-day, the time you'd spend deeply understanding the OWASP Top 10 and practicing threat modeling will give you a better return than a certification.
The Work Nobody Sees
Something stayed with me from those nights at the cyber café at 14 that connects to all of this. When you fixed the connection at 11pm and the place came back to life, nobody knew exactly what you'd done. They just knew it worked. The visible work was the outcome, not the process.
Real security works the same way. When it works, nothing happens. Nobody applauds the breach that didn't occur. Nobody celebrates the secret that never made it to main. The proof of work for effective security is, paradoxically, the absence of events.
The problem is that in a world where attention flows to the visible, we end up optimizing for the visible. Badges, dashboards, compliance reports. All of that has value — I'm not throwing it out. But if those tools become the goal instead of the means, we're doing theater.
The hardcoded secret I found this week didn't surprise me because the team is bad. It surprised me because the team has everything visible configured — and it still happened. That's exactly the signal I was looking for.
If you're wondering how much of your security stack is real vs. signaling, the exercise is simple: take the last incident or near-miss you had, and trace exactly at what point in the process it should have been detected and why it wasn't. The answer is almost always an exception nobody remembers adding, or a process that assumed someone else was watching.
The invisible work of security is what matters most. And it's exactly what we spend the least time on.