A recent article about Claude Code 2.1 made waves in the engineering community, though perhaps for the wrong reasons. Jaana Dogan, a Principal Engineer at Google, reportedly replicated in one hour what her team had previously spent a year building. She did this by feeding the model a "three-paragraph description" containing the "best ideas that survived" from that year of work.
Most commentary focused on the raw velocity—the idea that AI is a "100x multiplier." This interpretation misses the fundamental shift occurring in our discipline.
Dogan didn't perform magic, nor did the AI simply "code faster." The year her team spent wasn't wasted; it was the compression phase. They spent twelve months reducing a vast problem space into a high-entropy, low-noise signal. The AI simply acted as the decompressor.
This suggests a new mental model for software engineering: we are moving from an era of creation to an era of information management.
The Lossy Stochastic Compressor
For decades, we viewed code as the primary asset. In this new paradigm, code is merely a derived artifact—a "build target" generated from a higher-level source.
Think of an AI coding agent not as a junior developer, but as a lossy stochastic compressor, similar to a JPEG encoder or an Autoencoder in deep learning.
- The Input (Prompt/Context): This is the compressed file. It contains the intent, the constraints, and the architectural boundaries.
- The Model: The decompression algorithm.
- The Output (Code): The reconstructed image.
The quality of the output depends entirely on the information density of the input. A vague prompt ("Make me a snake game") is low-entropy; the model fills the gaps with statistical noise (hallucinations or generic boilerplate). This is a "blurry" image.
Conversely, a rigorous technical specification—the result of deep engineering thought—approaches lossless compression. The model has zero freedom to "be creative" with the implementation details because the constraints are so tight. The resulting code is sharp, functional, and correct.
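To make the mapping concrete, here is a minimal sketch in Python. The `Spec` container and the `decompress` stub are purely illustrative, not any real agent API; the point is what the "compressed file" has to carry.

```python
# Illustrative only: Spec and decompress() are hypothetical names, not a real agent API.
from dataclasses import dataclass, field


@dataclass
class Spec:
    """The input to the model: the 'compressed file'."""
    intent: str                                            # what the system must do
    constraints: list[str] = field(default_factory=list)   # invariants the code may not violate
    boundaries: list[str] = field(default_factory=list)    # module / API edges it must respect


# Low information density: the model fills the gaps with statistical noise.
blurry = Spec(intent="Make me a snake game")

# High information density: little freedom left for the model to improvise.
sharp = Spec(
    intent="Snake game on a 20x20 grid, 60 FPS, terminal renderer",
    constraints=["no external dependencies", "game state is a pure function of inputs"],
    boundaries=["render() never mutates GameState", "all input handling lives in io.py"],
)


def decompress(spec: Spec) -> str:
    """Stands in for the agent: spec in, source code out (hypothetical)."""
    raise NotImplementedError("the model is the decompression algorithm")
```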
The New Stack: Stochastic Compilation
We have spent fifty years raising the abstraction level of programming languages to match human thought processes: Assembly → C → Python.
We have now arrived at the final abstraction layer: Natural Language (and Intent).
The Large Language Model is effectively a Stochastic Compiler. It takes natural language as input and compiles it into deterministic source code (Python, C++, Rust), which a traditional compiler or interpreter then turns into machine-executable form.
The emerging stack looks like this:
1. Human Intent (High-Level Constraints / Diagrams)
2. Stochastic Compiler (LLM / Agent)
3. Deterministic Source (The "Ephemeral" Artifact)
4. Binary / Executable
In this stack, the "Source Code" (Step 3) becomes analogous to Assembly or Intermediate Representation (IR) in LLVM. It is readable if you need to debug it, but you shouldn't be writing it by hand unless you are optimizing the last mile of performance.
The dialogue with the agent—the iterative refinement of constraints—is the true source code. The summary document generated at the end of a session is the "commit."
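Here is a rough sketch of that pipeline, assuming a hypothetical `stochastic_compile()` call standing in for the agent; only the final `cc` invocation is a real tool.

```python
# Sketch of the stack above. stochastic_compile() is hypothetical; cc is a real compiler.
import subprocess
from pathlib import Path


def stochastic_compile(intent: str) -> str:
    """Steps 1-2: natural language in, deterministic C source out (hypothetical agent call)."""
    raise NotImplementedError("the agent lives here")


def build(intent: str, workdir: Path) -> Path:
    source = stochastic_compile(intent)      # step 3: the "ephemeral" artifact
    src_path = workdir / "module.c"
    src_path.write_text(source)
    binary = workdir / "module"
    # Step 4: the traditional, deterministic compiler produces the executable.
    subprocess.run(["cc", str(src_path), "-O2", "-o", str(binary)], check=True)
    return binary
```

The dialogue that produced `intent` is the real source; `module.c` is the intermediate representation you only open when something needs debugging.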
The Collapse of Traditional Rituals
This shift breaks the coordination models we have used since the late 90s.
The Bureaucracy of the Pull Request
The Pull Request (PR) was designed for a world where writing code was slow and reading it was fast (relatively speaking). It assumes that value comes from a human mind verifying the syntax and logic of another human mind.
Mike Krieger, Anthropic's CPO, noted that "bottlenecks have shifted from engineering (writing code) to decision-making (what to build) and merge queues." Boris Cherny, Claude Code's creator, reportedly runs five Claude instances in parallel, treating coding "more like Starcraft than traditional development."
When an orchestrator runs five agents in parallel, generating thousands of lines of code per hour, the PR becomes a bottleneck. You cannot meaningfully review that volume of code line-by-line without slowing the process to a crawl.
Review shifts from syntax verification to behavioral verification:
- Does the module pass the integration tests?
- Does the API contract hold?
- Does the system behave as intended in the simulator?
We are moving toward Black Box Reviewing. We care less about how the sort function was implemented (assuming the complexity is correct) and more about whether it sorts correctly within the system boundaries.
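In practice that looks like a behavioral test at the module boundary. A minimal sketch, where `sort_records` stands in for an agent-generated function under review (the name is hypothetical):

```python
# Black box review: assert on behavior at the boundary, never on the implementation.
import random


def check_sort_contract(sort_records) -> None:
    data = [random.randint(-1000, 1000) for _ in range(5000)]
    snapshot = list(data)
    out = sort_records(data)
    assert out == sorted(snapshot)   # behavior: the output is correctly ordered
    assert data == snapshot          # contract: the input list is not mutated


# Any implementation that passes is acceptable; the builtin trivially satisfies it.
check_sort_contract(sorted)
```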
From Agile to Supervised Fast Waterfall
Agile methodologies (Sprints, Scrums) exist to manage uncertainty in implementation. We iterate because we don't know how long it will take to build a feature.
When implementation becomes near-instant, the uncertainty moves upstream to Design.
We are seeing the emergence of a Supervised Fast Waterfall:
Day 0 (Morning) — Design:
- Team around a digital whiteboard
- Architecture drawn (UML 2.0 or similar)
- Subsystem boundaries and API contracts defined
- While the team is still talking, agents already begin generating skeletons
Day 0 (Afternoon) — Generation:
- First code skeletons exist
- Agents flesh out implementation in parallel
Days 3-5 — Integration:
- Functional demos of subsystems
- Integration meeting: APIs connected
- Ensemble debugging and behavioral verification
Days 7-10 — Iteration:
- First integrable version
- Loop back to Design for refinements
You cannot iterate your way out of a bad architecture when the code is being generated at Mach speed. You must design first.
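What does the Day 0 artifact look like? One plausible shape is a typed contract that the humans fix before any agent writes an implementation. A sketch with illustrative names:

```python
# Day 0 output, sketched: the subsystem boundary as a typed contract (names are illustrative).
from dataclasses import dataclass
from typing import Protocol


@dataclass(frozen=True)
class Order:
    order_id: str
    amount_cents: int


class PaymentGateway(Protocol):
    """API contract agreed at the whiteboard; agents generate implementations against it."""
    def charge(self, order: Order) -> bool: ...
    def refund(self, order_id: str) -> bool: ...
```

Integration on days 3-5 then reduces to a concrete question: does every generated module satisfy the contract and pass the behavioral tests written against it?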
The Architect-Integrator and the Return of UML
There is a supreme irony in this revolution: it brings us back to the roots of "Engineering."
The role of the human shifts from "Bricklayer" to "Architect-Integrator":
- The Architect: Defines what to build.
- The Integrator: Ensures the pieces fit together.
Large projects decompose differently now. Subsystems communicate via APIs. Agents and their orchestrators build the modules. Humans assemble, because they have persistent memory across projects and a capacity for big-picture vision that exceeds any context window.
This necessitates a revival of formal modeling tools—potentially a lean version of UML.
UML "died" because keeping diagrams synchronized with code was a manual nightmare. The code was the source of truth, and the diagram was always outdated.
In the new paradigm, the relationship flips. The Diagram is the Source of Truth. If the code drifts, you don't fix the code; you update the diagram and regenerate the code.
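A minimal sketch of what "regenerate from the diagram" could look like, with the topology held as declarative data and a hypothetical `generate_service()` standing in for the agent:

```python
# The "diagram" as data: edit TOPOLOGY, then regenerate; never patch the output by hand.
TOPOLOGY = {
    "services": {
        "orders":    {"depends_on": ["payments", "inventory"], "api": "orders.yaml"},
        "payments":  {"depends_on": [], "api": "payments.yaml"},
        "inventory": {"depends_on": [], "api": "inventory.yaml"},
    }
}


def generate_service(name: str, spec: dict) -> str:
    """Regenerate one service skeleton from its slice of the diagram (hypothetical agent call)."""
    raise NotImplementedError


def regenerate_all(topology: dict) -> None:
    # If the code drifts, this loop is the fix: the diagram drives, the code follows.
    for name, spec in topology["services"].items():
        source = generate_service(name, spec)
        print(f"regenerated {name}: {len(source)} characters")
```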
There's also a cognitive argument. Humans process visual topology in parallel (visual cortex) but read code serially. When managing a system of 50 agent-generated microservices, the raw text is unmanageable. The topology map is the only way a human can maintain a mental model of the system.
So perhaps:
- UML/Diagrams → the architect's language, for thinking and coordination
- Natural Language → the interface toward agents
- Code → generated artifact, nearly ephemeral
The diagram becomes the true high-level source code.
The Grit: The "Leaky Abstraction" Problem
It is crucial not to romanticize this transition. We are trading one set of problems for another.
The danger of the "Stochastic Compiler" is that it introduces probabilistic failure modes into deterministic systems.
- In a traditional compiler, an error is loud: a syntax or type error halts the build.
- In a stochastic compiler, an error is a hallucination: it compiles, it runs, and it does something subtly wrong.
In web development, a hallucination might mean a button is the wrong color. In robotics or embedded systems—where the "Real World" is the ultimate integration test—a hallucination can mean physical damage.
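One practical consequence: the deterministic, human-written code that survives tends to be the guardrail around the generated code. A sketch, assuming a generated `compute_velocity` controller and an illustrative hardware limit:

```python
# Deterministic envelope around a probabilistic controller (names and limits are illustrative).
import math

MAX_SAFE_VELOCITY = 0.5  # m/s, taken from the hardware datasheet, not from the model


def guarded_velocity(compute_velocity, sensor_state: dict) -> float:
    """Clamp and validate whatever the generated controller returns before it reaches actuators."""
    v = compute_velocity(sensor_state)  # agent-generated, probabilistic provenance
    if not isinstance(v, (int, float)) or math.isnan(v):
        return 0.0                      # fail safe: command a stop
    return max(-MAX_SAFE_VELOCITY, min(float(v), MAX_SAFE_VELOCITY))
```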
This creates a new friction: Debugging the Intent.
When the system fails, do you patch the generated Python code (breaking the link to the generator), or do you spend hours "prompt engineering" to get the model to understand the constraint?
The best engineers of the next decade will be those who know when to trust the compressor, and when to break the glass, open the hood, and write the critical path in hand-crafted C++ because the abstraction is leaking.
Conclusion
The "90% of coding is gone" headlines are misleading. The cognitive load hasn't disappeared; it has been displaced.
We are entering a period where the barrier to entry for building software is lower than ever, but the barrier to mastery is higher. You can no longer rely on rote memorization of syntax to provide value. You must understand systems, boundaries, and architecture.
The terminal isn't dead. But the way we type into it has changed forever. The prompt is the new syntax, and the diagram is the new code.
This article emerged from a brainstorming conversation exploring the implications of AI coding tools, then was cross-reviewed by another model, with a human integrating the results. The process itself—orchestration, generation, cross-review, integration—mirrors exactly the paradigm it describes. Perhaps that's the point.