Aamer Mihaysi

The Security Tool You Build After Using AI to Find Bugs for Months

Simon Willison just released scan-for-secrets, and the story behind it is more interesting than the tool itself.

He built it because he needed it. Not for a research project or a blog post, but because he'd been using coding agents to find security vulnerabilities and wanted a better workflow.

This is the pattern that matters: practitioners building tools for their own AI-assisted work.


What scan-for-secrets Does

At its core, it's straightforward:

  • Scan directories for potential secrets
  • Flag API keys, tokens, passwords, certificates
  • Output findings in structured formats
  • Integrate with git hooks and CI pipelines
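
The core of a tool like this can be sketched in a few lines. This is a hypothetical illustration of the scanning loop, not Willison's actual implementation; the rule names and the `scan_directory` function are assumptions for the sketch.

```python
import re
from pathlib import Path

# Hypothetical rule set for illustration -- real scanners ship hundreds of
# patterns, plus entropy checks for random-looking strings.
PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic_credential": re.compile(
        r"(?i)(?:api[_-]?key|token|password)\s*[:=]\s*['\"][^'\"]{8,}['\"]"
    ),
}

def scan_directory(root):
    """Walk a directory tree and return findings as structured records."""
    findings = []
    for path in sorted(Path(root).rglob("*")):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file: skip it rather than crash the scan
        for lineno, line in enumerate(text.splitlines(), start=1):
            for rule, pattern in PATTERNS.items():
                if pattern.search(line):
                    findings.append({"file": str(path), "line": lineno, "rule": rule})
    return findings
```

A CLI wrapper around this would serialize the findings with `json.dumps` and exit non-zero when any exist, which is what makes the tool usable from git hooks and CI.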

You've seen similar tools. TruffleHog, GitLeaks, detect-secrets. The difference isn't in what it finds—it's in why it exists.


The Backstory That Matters

Simon Willison has been writing about AI-assisted security for months.

He covered the Axios supply chain attack, which relied on individually targeted social engineering. He quoted security researchers on how AI changes vulnerability discovery. He wrote about the cognitive debt that coding agents create.

Then he built scan-for-secrets.

Not because existing tools were bad, but because his workflow changed.

When you're using AI to write code, review code, and find bugs, the surface area for accidental secret exposure increases:

  • Agents paste examples into context
  • Agents reference credentials in generated code
  • Agents leave debugging artifacts
  • The pace of code generation outpaces manual review

The old tools assumed human-written code. The new reality is AI-generated code at scale.


Why This Pattern Will Repeat

Every practitioner using AI for real work eventually hits friction points.

The first generation of AI tools was built by AI companies for general users. Those tools solved broad problems: generate text, answer questions, write code.

The second generation is being built by practitioners for practitioners.

They solve specific problems:

  • How do I secure AI-generated code?
  • How do I evaluate agent outputs at scale?
  • How do I track what my agents are doing?
  • How do I prevent agents from leaking sensitive data?

These tools don't come from AI labs. They come from people doing the work.


The Security Tooling Shift

Traditional security tools assumed:

  • Humans write code slowly
  • Code review catches most issues
  • Secrets are rare and precious
  • Attack surfaces are relatively static

AI-assisted development changes every assumption:

  • Code is generated at machine speed
  • Review can't keep up with generation
  • Secrets appear in generated examples, debugging output, and context
  • Attack surfaces expand with every agent-added dependency

The tools built for human-paced development don't work for AI-paced development.

This is why you're seeing:

  • AI-specific secret scanners
  • Agent output evaluation frameworks
  • Prompt injection detection tools
  • AI-assisted code review systems

What scan-for-secrets Gets Right

The tool itself is well-designed for the AI-assisted workflow:

  • Fast enough for CI — runs on every commit without slowing pipelines
  • Structured output — machine-readable for downstream processing
  • Git-aware — understands what changed between commits
  • Extensible — add new patterns without rewriting core logic
  • Composable — pipes into other tools and workflows
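
Git-awareness plus structured output is what makes a scanner like this composable. A minimal sketch of that combination might look like the following; the `SECRET_RE` rule and the function names here are my own assumptions, not the tool's actual API.

```python
import json
import re
import subprocess

SECRET_RE = re.compile(r"(?i)(?:secret|token|password)\s*=\s*\S+")  # toy rule

def changed_files(base="HEAD~1"):
    """Git-aware: ask git which files changed since the base commit,
    so the scan covers only the new diff instead of the whole repo."""
    out = subprocess.run(
        ["git", "diff", "--name-only", base, "--"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [path for path in out.splitlines() if path]

def scan_lines(path, lines):
    """Return one structured finding per matching line."""
    findings = []
    for lineno, line in enumerate(lines, start=1):
        if match := SECRET_RE.search(line):
            findings.append({"file": path, "line": lineno, "match": match.group(0)})
    return findings

def report(findings):
    """Machine-readable output: one JSON object per line (JSONL),
    so downstream tools can filter with jq or feed a dashboard."""
    return "\n".join(json.dumps(f) for f in findings)
```

A pre-commit hook built on this would call `changed_files()`, run `scan_lines` over each file, and fail the commit whenever `report` produces output.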

But the design decisions came from using it. The features came from friction.


The Meta-Pattern

This is what the AI tooling landscape looks like when practitioners lead:

  1. Use AI for real work — not demos, not experiments, production work

  2. Identify friction — where does AI create new problems or amplify old ones?

  3. Build targeted solutions — small tools that solve specific pain points

  4. Open source them — the problems aren't unique, others have them too

  5. Iterate in public — the feedback loop improves the tool for everyone

This is different from the AI lab approach:

  • Build a platform
  • Add every feature
  • Release as a finished product
  • Hope users discover use cases

The practitioner approach:

  • Solve your own problem
  • Share the solution
  • Let others extend it

What Comes Next

Expect more of this.

Every developer using coding agents for months has accumulated a set of scripts, workflows, and tools that make their life easier.

Most of these stay private. Some get open-sourced.

The ones that get shared become the foundation for the next layer of tooling.

We're not waiting for AI companies to build the AI-assisted development stack. We're watching practitioners build it themselves, one friction point at a time.


The Takeaway

scan-for-secrets isn't revolutionary. It's a good tool for a specific job.

But it's also evidence of a shift:

The people using AI every day are the ones building the tools that make AI usable.

Not AI researchers. Not product managers. Practitioners.

They're not trying to replace developers. They're trying to make their own work sustainable.

The tools they build in the process become infrastructure for everyone else.


If you're using coding agents and haven't hit a friction point that needed a new tool, you're not using them enough. The interesting work happens in the friction.
