We reduced bug rates 91% by using AI to enforce schema normalization

We reduced our bug rate from ~23 per 100 lines of code to just 2 per 100 lines on a full platform rebuild.

We achieved this by using AI to generate strictly normalized database schemas and typed interfaces from day one, essentially enforcing the discipline we usually skip in early-stage development.

Here’s the trap we fell into (and I suspect many of you have too):

[Image: AI-Generated Architecture vs Legacy Technical Debt Concept]

In Month 1, storing data as a JSON blob in Postgres feels like a cheat code. You move fast, you iterate, you don't worry about migrations.

But by Year 2, that "temporary" schema shortcut had become a hard dependency for 47 different functions. Our velocity tanked because every minor feature request required untangling a web of implicit dependencies. We call this the "Architecture Decision Cascade"—where a lazy decision in week 4 paralyses you in week 100.
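
For anyone who hasn't lived this, here's a minimal, hypothetical sketch of the blob shortcut versus the typed version. None of this is our real code; the names are made up for illustration:

```typescript
// Hypothetical sketch of the "Month 1 cheat code": one untyped blob column.
// Every caller reads whatever keys it wants, so the schema only exists implicitly.
type OrderRow = {
  id: string;
  data: Record<string, unknown>; // the "temporary" JSON blob
};

export function getShippingCost(row: OrderRow): number {
  // Implicit dependency: this only works if some other code path
  // happened to write `shipping.cost` into the blob.
  return (row.data as any)?.shipping?.cost ?? 0;
}

// The disciplined version: the shape is explicit, so changing it is a visible,
// compiler-checked migration instead of a silent break across 47 call sites.
export interface Shipping {
  carrier: string;
  cost: number;
}

export interface Order {
  id: string;
  shipping: Shipping;
}

export function getShippingCostTyped(order: Order): number {
  return order.shipping.cost;
}
```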

[Image: The Architecture Decision Cascade Dependency Graph]

We tried to fix this the traditional way. We paused feature development to "modernise the infrastructure." It was a disaster. We burned about $4.8M in opportunity cost and ended up with a codebase that was just as fragile, only newer.

So for the rebuild, we tried something counterintuitive. Instead of asking our devs to be more disciplined (which never works when deadlines loom), we used AI to generate the entire architectural skeleton upfront.

[Image: The Technical Debt Cost Iceberg Visualization]

We fed the requirements into an LLM workflow that produced fully normalized SQL schemas, strict TypeScript interfaces, and the service-layer boilerplate. No JSON blobs. No "we'll clean this up later." The AI forced us to use Year 3 patterns in Month 1.
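
To make that concrete, here's a minimal sketch of the kind of scaffolding I mean. The entities and the `InvoiceService` below are hypothetical examples, not our actual generated output:

```typescript
// Illustrative sketch of the scaffolding style: strictly typed entities that
// mirror normalized tables, plus thin, typed service-layer boilerplate with
// the business logic left as an explicit hole.

// Entities map 1:1 to normalized tables: no nested blobs, foreign keys as ids.
export interface Customer {
  id: string;
  email: string;
  createdAt: Date;
}

export interface Invoice {
  id: string;
  customerId: string; // FK -> Customer.id
  amountCents: number;
  issuedAt: Date;
}

// Repository interface emitted by the generator; the concrete SQL
// implementation is also boilerplate, but the business rules are not.
export interface InvoiceRepository {
  findByCustomer(customerId: string): Promise<Invoice[]>;
  insert(invoice: Invoice): Promise<void>;
}

export class InvoiceService {
  constructor(private readonly invoices: InvoiceRepository) {}

  async totalOutstandingCents(customerId: string): Promise<number> {
    const rows = await this.invoices.findByCustomer(customerId);
    // TODO(business logic): filter by payment status once that rule is defined.
    return rows.reduce((sum, inv) => sum + inv.amountCents, 0);
  }
}
```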

The AI didn't write the business logic—that’s still us. It just generated the rigid, verbose scaffolding that prevents us from taking shortcuts.
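
Continuing the hypothetical example above, this is roughly the layer we still write by hand:

```typescript
// Hand-written business logic layered on top of the generated scaffolding.
// The "./generated/invoices" path and InvoiceService are the hypothetical
// examples from the sketch above, not real modules.
import { InvoiceService } from "./generated/invoices";

// A domain rule we would not want auto-generated: flag customers whose
// outstanding balance crosses a credit limit.
export async function isOverCreditLimit(
  invoices: InvoiceService,
  customerId: string,
  creditLimitCents: number
): Promise<boolean> {
  const outstandingCents = await invoices.totalOutstandingCents(customerId);
  return outstandingCents > creditLimitCents;
}
```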

The results after 12 months compared to our previous baseline:

Velocity increased 340% (measured by story points shipped per sprint, though I know points are a fuzzy metric).

Bug rate dropped 91% (23 bugs/100 lines → 2 bugs/100 lines).

Churn reduced by 67%.

Basically, the AI acted as a relentless architect that doesn't get tired of writing boilerplate. It made doing the "right thing" cheaper than doing the "fast thing."

[Image: Development Velocity Comparison, Manual vs AI]

But I want to be honest about the trade-offs, because this isn't a silver bullet:

It feels incredibly slow at first. Writing a prompt to generate a normalized schema for a simple feature feels like overkill compared to throwing everything into a `data: json` column. You lose that initial "hacker" speed.

The generated code is verbose. We have a lot of boilerplate files now.

Edge cases are a pain. The AI is great at standard CRUD patterns, but the moment we have weird domain-specific constraints, we often have to fight the generator or rewrite the scaffolding manually.

My big worry is the long term. Right now, we're seeing massive gains because the foundation is solid. But I'm concerned about the "black box" effect.

Discussion questions for the group:

Has anyone successfully maintained a large AI-scaffolded codebase for 3+ years? I worry that as the team rotates, we'll lose the understanding of the underlying patterns because nobody actually wrote them.

Does this approach kill creativity? I feel like we've traded the fun of "solving the puzzle" for the efficiency of "filling in the blanks." It works for the business, but I'm not sure if it burns out engineers faster.

Curious to hear if others are using AI for architectural enforcement rather than just code completion.
