Souvik Ghosh
Prompt Poaching: Why I Built Secret Sanitizer

Last year, I pasted a chunk of terminal output into ChatGPT to debug a failing deploy. Helpful answer. Great experience. Then I noticed my AWS keys sitting right there in the prompt — logged on someone else's servers, probably forever.

I rotated them immediately, and nothing came of it. But the close call stuck with me.

Then in late 2025, security researchers discovered something worse: Chrome extensions with millions of users were silently harvesting every AI conversation and selling the data to brokers. Extensions with Google's "Featured" badge. Extensions marketed as privacy tools.

They called it Prompt Poaching — and nearly 9 million users were affected.

That's when I realized the problem is two layers deep. It's not just about what you send to the AI provider. It's also about what your browser extensions can see before it even gets there.

I needed something that sat between my clipboard and the chat input. So I built it.


Meet Secret Sanitizer

An open-source Chrome extension that masks secrets before they reach any AI chat.

Secret Sanitizer demo

The idea is simple:

You copy:     DATABASE_URL=postgres://admin:s3cret@prod.internal:5432/app
You paste:    DATABASE_URL=[MASKED]

When you paste into ChatGPT, Claude, Gemini, Grok, Perplexity, DeepSeek — or any custom site you add — the extension intercepts the paste, runs regex patterns locally in your browser, replaces detected secrets with [MASKED], and shows a quick toast confirming what was blocked.

The AI still gets your question. It just doesn't get your credentials.
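The intercept-and-replace flow can be sketched roughly like this (the names and the single rule are mine, not the extension's actual source). A capture-phase paste listener rewrites the clipboard text before the page's own handlers ever see it:

```javascript
const MASK = "[MASKED]";

// One illustrative rule; the real extension ships many more.
const DB_URL = /postgres(?:ql)?:\/\/[^\s"']+/g;

// Pure core: replace anything that matches a secret pattern.
function sanitize(text) {
  return text.replace(DB_URL, MASK);
}

// In a content script, the hook looks roughly like:
// document.addEventListener("paste", (event) => {
//   const raw = event.clipboardData.getData("text/plain");
//   const clean = sanitize(raw);
//   if (clean !== raw) {
//     event.preventDefault();                            // drop the unsafe paste
//     document.execCommand("insertText", false, clean);  // insert masked text instead
//   }
// }, true); // capture phase: runs before the site's own handlers
```

Registering the listener in the capture phase is what lets the rewrite happen before the chat page (or any other extension listening on the same input) can read the raw clipboard contents.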

Originals are stored in a local encrypted vault you can unmask anytime.


What it catches

API keys (AWS, GCP, Azure, Stripe, GitHub, OpenAI, and many more), passwords, bearer tokens, JWTs, database connection strings, private key blocks, .env key-value pairs, and even Indian PII like Aadhaar and PAN numbers.

Every pattern can be toggled on or off individually — no false-positive headaches.
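A toggleable pattern table might look like the sketch below. These regexes are hedged approximations I wrote for illustration (real token formats vary and the extension's actual patterns may differ):

```javascript
const PATTERNS = [
  { name: "AWS access key ID", enabled: true,  regex: /AKIA[0-9A-Z]{16}/g },
  { name: "GitHub token",      enabled: true,  regex: /ghp_[A-Za-z0-9]{36}/g },
  { name: "Private key block", enabled: true,
    regex: /-----BEGIN [A-Z ]*PRIVATE KEY-----[\s\S]+?-----END [A-Z ]*PRIVATE KEY-----/g },
  // Example of a toggled-off pattern to avoid false positives:
  { name: "PAN (Indian)",      enabled: false, regex: /\b[A-Z]{5}[0-9]{4}[A-Z]\b/g },
];

function maskSecrets(text) {
  const blocked = [];
  let out = text;
  for (const p of PATTERNS) {
    if (!p.enabled) continue; // individual per-pattern toggle
    out = out.replace(p.regex, () => {
      blocked.push(p.name);
      return "[MASKED]";
    });
  }
  return { out, blocked }; // `blocked` is what feeds the confirmation toast
}
```

Skipping disabled entries in the loop is all the toggle mechanism needs to be: a noisy pattern gets switched off without touching any of the others.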


Why you should trust it

After writing about extensions that betray trust, I'd be a hypocrite to ask for blind trust in mine. So every design decision optimizes for verifiability:

  • 100% local — no fetch(), no XMLHttpRequest, no network calls. Verify yourself: grep -r "fetch\|XMLHttpRequest" content_script.js
  • Works offline — disable Wi-Fi and try it
  • 38 KB total — there's nowhere to hide malicious code in 38 KB
  • Open source — MIT licensed. Read every line

Other features

  • Test Mode — preview what gets masked without modifying your paste
  • Stats dashboard — track secrets blocked, see which patterns fire most
  • Custom sites — protect any domain with one click
  • Backup and restore — export/import your config
  • Dark mode and keyboard shortcuts

Try it

Chrome Web Store: one-click install

GitHub: source code, issues, contributions welcome


What's next

Firefox support, smart restore (auto-restore secrets when copying AI responses), and community pattern packs are all on the roadmap.

If you try it, I'd love to hear — what patterns am I missing? Any false positives? Would you use a Firefox version?

Drop a comment or open an issue. And if it saves you from a leak, a ⭐ on GitHub helps other devs find it.

Paste safely out there 💚.
