Saad Nasir

I Built a "Safety Belt" for AI Code Generation. Here's Why

AI coding tools are incredible. They're also terrifying.

Last month, I asked Cursor to add a simple caching layer to my API. It generated 200 lines of code, imported three new libraries, and refactored two functions I didn't ask it to touch.

It worked. But I had no idea why it chose Redis over Memcached. Or why it rewrote my error handler.

I stared at the diff and realized: I didn't fully understand my own codebase anymore.

That's when I built Verif.ai.


The Problem Nobody's Talking About

We're entering the era of "vibe coding"—telling an AI what we want and accepting whatever it spits out as long as tests pass.

Here's what's happening under the surface:

| Symptom | What It Really Means |
| --- | --- |
| "Why did it use that library?" | You're accumulating dependency debt you don't understand |
| "Who approved this change?" | No one. The AI just did it |
| "Can we prove this is compliant?" | No audit trail exists |
| "I don't remember writing this" | You didn't. And neither did anyone else |

This is comprehension debt. Code that works but nobody fully understands.


What Verif.ai Does

Verif.ai does three simple things:

1. It Pauses the AI
Before AI-generated code touches your files, Verif.ai intercepts and says: "Hold on. Explain yourself first."
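The core idea of that interception can be sketched in a few lines. This is a hypothetical illustration, not Verif.ai's actual API — the class and method names here are made up: instead of letting generated code hit the filesystem, the gate parks it as a pending change until someone signs off.

```python
import pathlib


class PendingChange:
    """An AI-generated edit held back until a human approves it."""

    def __init__(self, target: str, new_content: str):
        self.target = target
        self.new_content = new_content
        self.approved = False


class Gate:
    """Intercepts writes: nothing touches disk until approve() is called."""

    def __init__(self):
        self.pending: list[PendingChange] = []

    def intercept(self, target: str, new_content: str) -> PendingChange:
        # Park the change instead of writing it.
        change = PendingChange(target, new_content)
        self.pending.append(change)
        return change

    def approve(self, change: PendingChange) -> None:
        # Only an explicit human approval lands the code.
        change.approved = True
        pathlib.Path(change.target).write_text(change.new_content)
```

The point is that the write and the decision to write are separate steps, with a human in between.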

2. It Demands a Case File
The AI must document:

  • What it's about to do
  • Why it chose that approach
  • What alternatives it considered
  • Where it got its information
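Those four fields map naturally onto a small structured record. Here's one way the case file could look — a hypothetical schema for illustration, not the format Verif.ai actually uses:

```python
import json
from dataclasses import asdict, dataclass, field


@dataclass
class CaseFile:
    """The explanation an AI change must carry before review (hypothetical schema)."""

    what: str                                          # what it's about to do
    why: str                                           # why it chose that approach
    alternatives: list[str] = field(default_factory=list)  # what else it considered
    sources: list[str] = field(default_factory=list)       # where it got its information

    def to_json(self) -> str:
        # Serialized so it can be stored alongside the diff it justifies.
        return json.dumps(asdict(self), indent=2)
```

Because the record is structured rather than free text, a reviewer (or a later audit) can reject a change that arrives with an empty `why` or no `sources`.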

3. It Waits for Human Approval
You review. You approve. Only then does code land.

*Every approval is cryptographically signed. You get a tamper-proof audit trail.*
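To show the flavor of a tamper-evident trail, here's a minimal sketch using Python's stdlib `hmac`. It's symmetric signing for illustration only — the key, field names, and chaining scheme are my assumptions, and a real deployment would likely use per-reviewer public-key signatures instead:

```python
import hashlib
import hmac
import json

SECRET = b"reviewer-key"  # stand-in; real systems would manage keys properly


def sign_approval(record: dict, prev_sig: str = "") -> dict:
    """Sign an approval and chain it to the previous entry's signature,
    so editing any past record invalidates everything after it."""
    payload = json.dumps({**record, "prev": prev_sig}, sort_keys=True).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return {**record, "prev": prev_sig, "sig": sig}


def verify(entry: dict) -> bool:
    """Recompute the signature over everything except 'sig' and compare."""
    body = {k: v for k, v in entry.items() if k != "sig"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, entry["sig"])
```

Chaining each entry to the previous signature is what makes the trail tamper-evident: silently rewriting one old approval breaks verification for that entry and every one after it.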

Check out the repo, try the quickstart, and tear it apart in the issues.

GitHub: https://github.com/saadnasirajk5-tech/Verif

Even a star helps more than you know. It tells me I'm not crazy for caring about this stuff.
