nicolas.vbgh

Programming by Coercion

A Lazy Developer's Guide to High Quality Code

I build side projects while watching Netflix. The AI writes the code. I glance at it between scenes.

This sounds irresponsible. It probably is. But here's the thing: my code works. Tests pass. Types check. No security vulnerabilities. CI is green.

Not because I'm careful. Because I literally cannot merge broken code. Not by choice. By design. My past self didn't trust my future self. He was right.


The Elegant Metaprogramming Disaster

Last week, I asked AI to fix a bug. It didn't work. I gave it the full traceback. Same error. Third attempt: it rewrote half the module. Fourth: it discovered metaprogramming. On a config file. The worst part? It was elegant. Beautiful useless code that solves no problem. Still the same error.

I was fixing a typo.

That's when I realized: AI doesn't lack power. It lacks a target. It's a workhorse without reins—give it a destination, it'll get there. Give it "go somewhere nice," and it'll run in circles until exhausted. Somewhere, but nowhere useful.


The Simple Fix

So now I write tests first. Not because I'm disciplined—I'm not. But because a test is a finish line. Green means done. Red means keep going. AI understands that.

Let me paint the picture. It's 9 PM. I'm half-watching some series. I tell Claude to add a feature. But this time, I write the test first. AI implements. Tests fail. AI tries again. Tests pass. Done.

Did I review those 200 lines one by one? No. I focus on what matters: the design, method signatures, architecture decisions. The tests. The stuff that shapes the codebase long-term. The linter catches the missing await. The type checker catches the wrong return type. That's their job, not mine.
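The missing-await slip is the canonical example of what the tooling net catches. A minimal sketch (function names are illustrative, not from a real codebase):

```python
import asyncio

async def fetch_user(user_id: int) -> dict:
    # Stand-in for real async I/O (a DB call, an HTTP request).
    await asyncio.sleep(0)
    return {"id": user_id}

async def handler() -> dict:
    # The footgun is writing `return fetch_user(1)` with no await:
    # that returns a coroutine object, not a dict. MyPy in strict
    # mode rejects the return-type mismatch, and Ruff's async rules
    # flag the un-awaited call. The correct version:
    return await fetch_user(1)

result = asyncio.run(handler())
print(result)  # {'id': 1}
```

Neither tool needs me to be awake at 9 PM to catch this; the pipeline fails either way.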

This is programming by coercion. I don't trust myself at 9 PM. I don't trust the AI either. So I built a system where the only possible outcome is working code.


The Coercion Principle

I don't trust:

  • My attention span after 6 PM
  • AI's understanding of my codebase
  • Anyone's manual review of 500 lines
  • "I'll add tests later"

Basically, I don't trust anything that involves humans. Especially after lunch.

So I set up coercion:

Type checking — Can't merge if types don't match. I use strict mode because I know I'll forget.

Linting with teeth — Ruff catches async mistakes I make every single time. Not "warnings." Errors. Pipeline fails.

Contract testing — Backend changes API? Frontend types auto-update. Mismatch? CI blocks.
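Stripped to its core, a contract check is just a comparison between what the backend declares and what the frontend was generated against. The real pipeline does this through OpenAPI + Orval; here's a toy sketch of the idea (field names are made up):

```python
# What the backend's response model currently declares.
backend_schema = {"id": "integer", "email": "string", "name": "string"}

# What the generated frontend types were built from.
frontend_expects = {"id": "integer", "email": "string", "name": "string"}

# Fields the frontend needs but the backend no longer provides.
missing = set(frontend_expects) - set(backend_schema)

# Fields present on both sides but with different types.
mismatched = {k for k in frontend_expects
              if k in backend_schema and backend_schema[k] != frontend_expects[k]}

contract_ok = not missing and not mismatched
print(contract_ok)  # True
```

Rename `email` to `mail` on the backend and `missing` becomes non-empty: that's the mismatch CI blocks on.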

Tests as specs — I write the test first. AI implements until it passes. No ambiguity, no "it works on my machine."
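A test-as-spec can be tiny. The test is written first and defines "done"; the implementation under it is what the AI iterates on until green (`slugify` is an invented example, not from my codebase):

```python
def test_slugify_spec():
    # The spec, written before any implementation exists.
    # Green means done. Red means the AI keeps going.
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Extra   Spaces ") == "extra-spaces"

def slugify(title: str) -> str:
    # What the AI produces, iterating until the spec passes.
    return "-".join(title.lower().split())

test_slugify_spec()
print("green")
```

The spec leaves no room for "close enough": the assertions either hold or they don't.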

The result: a pipeline where bad code physically cannot reach main. Not "shouldn't." Cannot.


The Part Where I Watch TV

So what does a typical evening look like? I write a test. I tell AI what I want. Then I go back to my show.

```mermaid
sequenceDiagram
    participant Me
    participant AI
    participant CI

    Me->>AI: Describe feature and tests (5 min)
    loop Until green
        AI->>AI: Implement
        AI->>AI: Run tests
        AI->>AI: Fix failures
    end
    AI->>Me: Ready for review
    Me->>CI: Review MR (5 min)
    CI->>CI: Validate everything
    CI->>Me: Merged!
```

I define what should happen (the test). AI figures out how. CI verifies everything.

The AI loops until tests pass. Sometimes it takes 3 iterations. Sometimes 10. I don't care. I'm watching my show.
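The loop is dumb on purpose. Here's a sketch of the control loop with the test run abstracted into a callable (everything here is illustrative; the real version shells out to pytest and feeds the failure output back to the model between attempts):

```python
from typing import Callable

def iterate_until_green(run_tests: Callable[[], bool],
                        max_iters: int = 10) -> int:
    """Return the attempt number that went green, or -1 if none did.

    In the real setup `run_tests` runs the suite and, on failure,
    the traceback is handed back to the AI before the next attempt.
    """
    for attempt in range(1, max_iters + 1):
        if run_tests():
            return attempt  # green: done
    return -1  # budget exhausted: a human has to look

# Simulated AI session: fails twice, then produces passing code.
outcomes = iter([False, False, True])
attempts = iterate_until_green(lambda: next(outcomes))
print(attempts)  # 3
```

Whether it takes 3 iterations or 10, the loop is the same; the cap just decides when I get pulled away from my show.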


The Unremarkable Stack

Python. FastAPI. TypeScript. React. PostgreSQL.

Not because they're exciting. Because AI knows them cold. Millions of training examples. Fewer hallucinations. Better suggestions.

Boring is a feature.

Why This Stack


The Pipeline Saga

Here's what keeps the whole thing from falling apart:

The Pipeline — Each CI stage, what it catches, why it matters

  • Quality: Backend — Ruff, MyPy, async footguns
  • Quality: Frontend (coming soon) — ESLint, TypeScript strict, Storybook
  • Security (coming soon) — Trivy and Semgrep, scanning dependencies and your code
  • Tests: Backend (coming soon) — pytest, fixtures, why mocking matters
  • Tests: Frontend (coming soon) — Vitest, MSW, testing without flakiness
  • Contract Testing (coming soon) — OpenAPI + Orval, the most important stage
  • E2E (coming soon) — Playwright, because unit tests lie
  • Performance (coming soon) — k6, Lighthouse, catching slowdowns early

The Workflow — How human and AI actually collaborate

  • TDD as AI Control Loop (coming soon) — Let the machine iterate
  • The Art of Prompting (coming soon) — How to ask so AI delivers
  • Context for AI (coming soon) — CLAUDE.md and why documentation matters again
  • The Human Filter (coming soon) — Code review pitfalls and what actually matters

The Uncomfortable Truth

AI is here. It's not going away. It writes code faster than you. That's a fact.

It also hallucinates, forgets context, and confidently breaks things. That's also a fact.

You can fight it, ignore it, or learn to work with it. I chose option three.

I define what success looks like. AI figures out how to get there. I bring the intent; AI brings the horsepower. The pipeline keeps it on the road, channeling that power in the right direction.

You can review every line the AI writes. Or you can build a system where it doesn't matter if you miss something.

I built the system. Now I get my evenings back. Netflix isn't going to watch itself.


Next up: Boring Is a Feature — Choosing the weapons that AI actually knows how to use.
