## A Developer's Guide to High Quality Code
I build side projects in my spare time. The AI writes the code. I focus on design and architecture.
This sounds reckless. But here's the thing: my code works. Tests pass. Types check. No security vulnerabilities. CI is green.
Not because I'm careful. Because I literally cannot merge broken code. Not by choice. By design. My past self didn't trust my future self. He was right.
### The Elegant Metaprogramming Disaster
Last week, I asked AI to fix a bug. It didn't work. I gave it the full traceback. Same error. Third attempt: it rewrote half the module. Fourth: it discovered metaprogramming. On a config file. The worst part? It was elegant. Beautiful useless code that solves no problem. Still the same error.
I was fixing a typo.
That's when I realized: AI doesn't lack power. It lacks a target. It's a workhorse without reins—give it a destination, it'll get there. Give it "go somewhere nice," and it'll run in circles until exhausted. Somewhere, but nowhere useful.
### The Simple Fix
So now I write tests first. Because a test is a finish line. Green means done. Red means keep going. AI understands that.
Let me paint the picture. I tell Claude to add a feature. But this time, I write the test first. AI implements. Tests fail. AI tries again. Tests pass. Done.
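The finish line can be as small as one failing test. Here is a minimal sketch of the pattern, with a hypothetical `slugify` helper standing in for a real feature (the test comes from me; the function body is what the AI iterates on until green):

```python
# test_slugify.py -- the human writes the test first; the function
# name and behavior are hypothetical examples, shown together with
# a passing implementation for brevity.

def slugify(title: str) -> str:
    # The AI rewrites this body until the test below passes.
    return "-".join(title.lower().split())

def test_slugify_makes_url_safe():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Spaces   Everywhere ") == "spaces-everywhere"

test_slugify_makes_url_safe()
```

The test is the spec. If the AI's third attempt rewrites half the module, I don't care, as long as the assertion holds.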
Did I review those 200 lines one by one? No. I focus on what matters: the design, method signatures, architecture decisions. The tests. The stuff that shapes the codebase long-term. The linter catches the missing await. The type checker catches the wrong return type. That's their job, not mine.
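Concretely, this is the class of bug I leave to the tooling. A sketch with illustrative names, not code from my project:

```python
import asyncio

async def fetch_user(user_id: int) -> dict:
    # Stand-in for a real database call; hypothetical helper.
    await asyncio.sleep(0)
    return {"id": user_id}

async def handler(user_id: int) -> dict:
    # Drop the `await` below and `handler` returns a coroutine object,
    # not a dict. MyPy reports the return-type mismatch (Coroutine vs
    # dict), and Python itself warns about the never-awaited coroutine.
    # No human reviewer needed to spot it.
    return await fetch_user(user_id)

print(asyncio.run(handler(1)))  # prints {'id': 1}
```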
This is programming by coercion. I don't fully trust AI-generated code. So I built a system where the only possible outcome is working code.
### The Coercion Principle
I don't trust:
- AI's understanding of my codebase
- Anyone's manual review of 500 lines
- "I'll add tests later."
Manual review doesn't scale. Automated checks do.
So I set up the stack and the coercion pipeline:
### The Unremarkable Stack
Python. FastAPI. TypeScript. React. PostgreSQL.
Not because they're exciting. Because AI knows them cold. Millions of training examples. Fewer hallucinations. Better suggestions.
→ Why This Stack — Boring is a feature.
### The Coercion Pipeline Saga
Here's what keeps the whole thing from falling apart:
The Pipeline — Each CI stage, what it catches, why it matters
- Quality: Backend — Ruff, MyPy, async footguns
- Quality: Frontend — ESLint, TypeScript strict, Storybook
- Security — CI-embedded security, check before you wreck
- Tests: Backend — The Behavior Gate
- Tests: Frontend — Vitest, MSW, testing without flakiness
- The Python–TypeScript Contract — OpenAPI + Orval, the most important stage
- E2E — Playwright, because unit tests lie
- Performance (coming soon) — k6, Lighthouse, catching slowdowns early
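Wired together, the stages look roughly like this in CI. A sketch of a GitHub Actions job, not my exact config; the tool invocations and paths are illustrative:

```yaml
# .github/workflows/ci.yml -- illustrative, not verbatim
jobs:
  gate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ruff check . && mypy app/           # Quality: Backend
      - run: npx eslint . && npx tsc --noEmit    # Quality: Frontend
      - run: pip-audit                           # Security
      - run: pytest                              # Tests: Backend
      - run: npx vitest run                      # Tests: Frontend
      - run: npx orval && git diff --exit-code   # Contract: regenerated client must match
      - run: npx playwright test                 # E2E
```

Any failing step blocks the merge; branch protection on `main` makes the gate mandatory rather than advisory.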
The Workflow — How human and AI actually collaborate
- TDD as AI Control Loop (coming soon) — Let the machine iterate
- The Art of Prompting (coming soon) — How to ask so AI delivers
- Context for AI (coming soon) — CLAUDE.md and why documentation matters again
- The Human Filter (coming soon) — Code review pitfalls and what actually matters
The result: a pipeline where bad code physically cannot reach main. Not "shouldn't." Cannot.
### The Uncomfortable Truth
AI is here. It's not going away. It writes code faster than you. That's a fact.
It also hallucinates, forgets context, and confidently breaks things. That's also a fact.
You can fight it, ignore it, or learn to work with it. I chose option three.
I define what success looks like. AI figures out how to get there. I bring the intent, AI brings the execution. AI has the horsepower. The pipeline keeps it on the road, channeling all that power in the right direction.
You can review every line the AI writes. Or you can build a system where it doesn't matter if you miss something.
I built the system. Now I spend my time where it matters.
Next up: Boring Is a Feature — Choosing the weapons that AI actually knows how to use.