Emirhan

I let an AI agent loose on my codebase. It tried to read my .env file in 30 seconds.

Not a horror story. Well. Kind of.
A few months ago, Çınar and I were building a side project. Nothing fancy. Just two guys, a codebase, and way too much coffee.
We started using Claude Code to speed things up. And honestly? It was great. It was writing code faster than we could review it, jumping between files, running commands, doing things we hadn't even asked for yet.
That last part should have been a red flag.
One evening I left it running while I went to grab food. Came back. Looked at the terminal. It had read the .env file.
Not because it was malicious. Not because someone hacked it. Just because it could. Nobody told it not to. There was no rule. No policy. No wall.
It saw a file. It read the file. That's it.
And I sat there thinking: this thing has access to everything. Every file. Every command. Every API call. And I have absolutely no idea what it's been doing for the last 20 minutes.
No logs. No audit trail. No "hey, are you sure about this?"
Just vibes.

That was the moment SolonGate started as an actual idea and not just a shower thought.
We wanted something stupidly simple. Something that sat between the AI and everything it could touch, and asked one question before every single action:
"Is this allowed?"
If yes, go ahead. If no, stop. Log everything either way.
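
The core loop really is that small. Here's a rough TypeScript sketch of the idea; the rule format, names, and patterns below are mine for illustration, not SolonGate's actual internals:

```typescript
// Illustrative only: a tiny allow/deny gate with an audit log.
// The rule shape and function names here are hypothetical.

type Action = { tool: string; target: string };
type Decision = "ALLOW" | "DENY";

// Deny anything matching these patterns; allow the rest.
const denyPatterns: RegExp[] = [/\.env$/, /\.ssh\//, /credentials/i];

const auditLog: { time: string; action: Action; decision: Decision }[] = [];

function gate(action: Action): Decision {
  const decision: Decision = denyPatterns.some((p) => p.test(action.target))
    ? "DENY"
    : "ALLOW";
  // Log everything either way, with a timestamp.
  auditLog.push({ time: new Date().toISOString(), action, decision });
  return decision;
}

// The agent's action only runs if the gate says yes.
console.log(gate({ tool: "read_file", target: "test.txt" })); // ALLOW
console.log(gate({ tool: "read_file", target: ".env" }));     // DENY
```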

No configuration PhD required. No 47-step setup guide. One command and you're protected.

```bash
npx @solongate/proxy -- your-server
```

That's it.

Here's what it looks like in practice.
We tested it with Gemini CLI last week. Asked it to read test.txt. Fine, allowed, no problem.
Then asked it to read .env.
Gemini's response: "I'm sorry, I cannot read the .env file. It seems to be blocked by a policy."
And in the dashboard: read_file: .env — DENY — Policy Rule
Logged. Timestamped. Done.
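
If you're curious what one of those entries carries, think of something roughly like this. The field names are my own illustration, not the real dashboard schema:

```typescript
// Hypothetical audit record shape; field names are illustrative.
interface AuditEntry {
  timestamp: string;   // ISO 8601, e.g. "2026-01-15T19:42:03Z"
  tool: string;        // "read_file"
  target: string;      // ".env"
  decision: "ALLOW" | "DENY";
  matchedRule: string; // which policy rule fired
}
```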
The agent didn't argue. Didn't try again. Didn't find a creative workaround. Just stopped, reported back, and moved on.
That's the whole point. Not to make AI tools useless. To make them safe enough to actually trust.

We've blocked 100+ real attacks since we launched. Prompt injection attempts, path traversal, SSRF, credential file grabs. Some of them were tests. Some of them were not.
Every single one is in the audit log. Every decision. Every layer that caught it. Every timestamp.
When someone asks "what did your AI agent do last month?" — you have an answer.
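
To make those categories concrete, here's a simplified sketch of what each check could look like. Real detection takes more care than this (normalized paths, resolving DNS before SSRF checks, and so on); these functions are illustrative, not the actual rules we ship:

```typescript
// Simplified sketches of the attack classes mentioned above.

// Path traversal: reject targets that try to escape the project root.
function looksLikeTraversal(path: string): boolean {
  return path.includes("..");
}

// SSRF: reject requests aimed at localhost or cloud metadata endpoints.
function looksLikeSsrf(url: string): boolean {
  const host = new URL(url).hostname;
  return ["localhost", "127.0.0.1", "169.254.169.254"].includes(host);
}

// Credential grab: reject reads of well-known secret files.
function looksLikeCredentialGrab(path: string): boolean {
  return /\.env$|id_rsa|\.aws\/credentials/.test(path);
}

console.log(looksLikeTraversal("../../etc/passwd"));                   // true
console.log(looksLikeSsrf("http://169.254.169.254/latest/meta-data")); // true
console.log(looksLikeCredentialGrab(".env"));                          // true
```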

If you're running Claude Code, Gemini CLI, or any AI tool with file system or network access, and you don't have something like this in place — you're one unlucky prompt away from a bad day.
We built SolonGate so that day doesn't happen :)

solongate.com
