DEV Community

Raio
They Use 5 Layers. I Use 2. Here's Why I Write Zero Code.

A response to "How to be a 100x Engineer with AI"


I Read It, Nodded, Then Stopped to Think

Last week, @rohit4verse posted a thread on what separates 100x engineers from "vibe coders" in 2026. His core argument is clear: what matters is ownership. Plan before you execute. Verify everything. Build persistent context. Don't blindly trust AI output.

I agree completely.

But I arrived at the same conclusions from an entirely different world. Not web apps, not mobile apps, not PC software — automotive engine control system design. I've spent 15 years developing motorcycle ECUs. Traction control, quickshift systems, throttle control. A world where "move fast and break things" can literally injure a rider. Agile? I've heard of it, but that word has never appeared in our development process.

That different origin led me to a radically simpler stack.


5 Layers vs. 2 Layers

Rohit describes the "2026 top workflow" as a 5-layer stack:

  1. AI-first IDE (Cursor, Windsurf)
  2. Terminal coding agent (Claude Code, Gemini CLI)
  3. Background agents (Codex, Jules, Devin)
  4. Chat models (Claude, ChatGPT, Gemini)
  5. AI code review tools (Codium, Copilot Workspace)

For engineers who write code, all five layers make sense. It's a powerful setup.

I don't write code. I'm a controls engineer with 15 years of ECU development. My stack looks like this:

| Layer             | Role                                            | Tool                         |
|-------------------|-------------------------------------------------|------------------------------|
| Design AI         | Specification, architecture, handover documents | Claude.ai / ChatGPT / Gemini |
| Implementation AI | Code writing, builds, debugging                 | Claude Code (terminal)       |

That's it. Two layers. The human sits between them as the verification gate — reviewing every handover, confirming every result.
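To make the verification gate concrete, here is a minimal sketch of what checking a Design-AI handover might look like before it is passed to the Implementation AI. The field names and the example content are my illustration, not a fixed part of the protocol:

```python
# Hypothetical handover gate: the human refuses to pass an incomplete
# document from the Design AI to the Implementation AI.
HANDOVER_FIELDS = ["goal", "constraints", "acceptance_criteria", "out_of_scope"]

def missing_fields(doc: dict) -> list[str]:
    """Return the handover fields that are absent or left blank."""
    return [f for f in HANDOVER_FIELDS if not doc.get(f)]

# Example draft from the Design AI (contents are illustrative).
draft = {
    "goal": "Add a quickshift kill-time lookup table",
    "constraints": "No dynamic allocation; fixed-point math only",
    "acceptance_criteria": "",  # left blank -- the gate catches this
    "out_of_scope": "Traction control changes",
}

gaps = missing_fields(draft)
print(gaps)  # → ['acceptance_criteria']
```

The point is not the code but the discipline: an incomplete specification never reaches the Implementation AI, so ambiguity is resolved at design time instead of debug time.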

Why don't I need the other three?

  • AI-first IDE: I don't edit code inline. The Design AI writes structured instruction documents; the Implementation AI executes them. No IDE needed.
  • Background agents: Useful for parallel PR processing across large codebases. But my workflow is sequential and deliberate — each step is verified before the next begins.
  • AI code review tools: My protocol has verification built in at every handover point. The human is the review layer.

Rohit's article says 100x engineering is about "doing less." I took that literally: I reduced "doing" to zero lines of code.


Three Failure Modes They Haven't Named

Rohit says "verify everything" and "build persistent context." Absolutely right.

But why do things go wrong without those habits? From building an Android app in 4 days with zero prior Android experience, and from earlier project failures, I've identified three failure modes that deserve names. They're listed here from the most frequently encountered and easiest to detect to the hardest:

Context Evaporation — As conversations grow long or sessions reset, accumulated design decisions and context silently disappear. The AI starts making suggestions that contradict earlier architectural choices — not from rebellion, but from amnesia. This is the one everyone notices first: "Why are you asking about something we already decided?"

Shallow Fix Swamp — AI patches symptoms instead of understanding root causes. Each fix creates the precondition for the next failure. Every step forward sinks you deeper into the swamp. The endless loop of "I fixed it, but now something else is broken."

Completion Fraud — AI confidently reports "Done!" without genuine verification. It's not lying on purpose; it simply has no mechanism to verify its own work against reality. This one is the most dangerous because it's the hardest to detect. If you don't independently confirm, the truth surfaces much later, buried under layers of subsequent changes.

Naming these isn't academic — it's operational. Once you have names, you can build specific countermeasures into your workflow. (I wrote about how these emerged from a real project failure in my previous article).
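As one illustration of turning the names into countermeasures, here is a small sketch in Python. The structure is my own example, not a prescribed tool: a decision log that survives session resets (against Context Evaporation), a gate that rejects patches with no stated root cause (against Shallow Fix Swamp), and a rule that "Done!" only counts after an independent check (against Completion Fraud):

```python
import json

class SessionLog:
    """Persist design decisions outside the chat (vs. Context Evaporation)."""
    def __init__(self):
        self.decisions = []

    def record(self, decision: str, rationale: str) -> None:
        self.decisions.append({"decision": decision, "rationale": rationale})

    def dump(self) -> str:
        # Re-injected into every new AI session so earlier choices survive resets.
        return json.dumps(self.decisions, indent=2)

def accept_fix(root_cause: str, patch: str) -> bool:
    """Reject symptom-only patches (vs. Shallow Fix Swamp)."""
    return bool(root_cause.strip())

def confirm_done(ai_claim: str, independent_check) -> bool:
    """Never trust 'Done!' alone (vs. Completion Fraud): run your own check."""
    return independent_check()

log = SessionLog()
log.record("Two-layer stack", "Design AI writes specs; Implementation AI executes")
print(accept_fix("", "retry the network call"))   # no root cause -> rejected
print(confirm_done("Done!", lambda: 1 + 1 == 2))  # passes only via real check
```

Each function is trivial on its own; the value is that the workflow forces the check to happen at every handover rather than relying on memory or trust.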


The Blueprint Was Drawn Long Ago

[Image: Raiko studying a glowing cyan blueprint with a massive Matrix-code building towering behind her]

Here's what struck me most reading Rohit's thread: everything he recommends — meticulous specification, verification at every checkpoint, persistent documentation, ownership of outcomes — is standard practice in safety-critical industries. Automotive software engineers have been doing this for decades.

Think of it this way.

In architecture, there are people who design buildings where families live and workers spend their days. These architects may have never hammered a nail — the only time they pick up a hammer might be for weekend DIY. But they produce precise blueprints that structural engineers and construction crews follow, because if the design is wrong, the building collapses and people get hurt. There's a clear chain: specification → verification → accountability.

On the other hand, there are people who build stage sets for theater productions. The "building" only needs to look convincing from the audience's perspective. Structural requirements are minimal. If something doesn't work, you rebuild it before the next show. What matters is speed and visual impact, not decades of durability.

I believe much of the web development world has evolved closer to the "stage set" model — and this is not a criticism. It's a rational optimization. When a one-second delay is a UX inconvenience rather than a safety hazard, when you can rebuild quickly, the build-measure-learn cycle makes perfect sense. That approach has produced incredible innovation.

But now AI coding agents have changed the equation. When an AI can generate thousands of lines in minutes, the cost of writing code approaches zero — but the cost of wrong code stays the same, or gets worse because it's harder to spot in the volume. Suddenly, the skills that matter most aren't writing speed but specification precision and verification discipline.

These are exactly the skills that safety-critical industries have refined over decades. And AI is becoming the powerful wings that let this "slow, old-fashioned" approach produce at startup speed.

[Image: Raiko's wing awakening — devil wings erupting from her back in a garage, tools flying from the shockwave]

I'll go deeper into this in an upcoming article — how automotive V-model development maps directly onto AI-assisted workflows. Stay tuned.


Do Even Less

Rohit's thread concludes that 100x engineers have always been about "doing less" — and AI just makes "less" even smaller, if you build the right system around it.

I'd push that further: the ultimate "less" is writing zero code yourself.

Not "no-code" in the platform sense. I mean: you own the specification, you own the verification, you own the architecture — and you delegate all implementation to AI through structured handover documents, the same way an architect delegates construction through blueprints.

That's how I built ExitWatcher — an Android app in 4 days with zero Android experience. Not by learning Kotlin. By writing precise specifications and verifying every output.

The protocol that makes this possible — the Two-Layer AI Protocol — is what this article series is about. If you're interested in how safety-critical engineering principles can make AI coding dramatically more reliable, the deep dive is coming soon.


This is a bonus article in my series on the Two-Layer AI Protocol. Read the full series:
