Keith MacKay

Originally published at tlcmentor.substack.com

The Irony of AI Development: How Context Engineering Is Taking Us Back to Waterfall


And Why That's Not Necessarily a Bad Thing

For three decades, the software industry has been on a journey away from waterfall development toward agile methodologies. Now, in an unexpected twist, the rise of AI-powered development tools and "context engineering" is quietly pushing us back toward sequential, specification-heavy workflows.

But this time, we're walking into a trap we've seen before: the waterbed problem. Organizations must tackle it strategically and head-on in order to realize AI efficiencies; otherwise AI acceleration will create more chaos than speed.


A Brief History: From Waterfall to Agile

The Waterfall Era (1970s-1990s)

Waterfall development emerged from manufacturing and engineering disciplines. The model was simple: define requirements completely, design the system, build it, test it, deploy it. Each phase flowed into the next like water over a cascade.

The approach made sense for its time. Computing was expensive. Mistakes were costly. The assumption was that thorough upfront planning would prevent downstream problems.

It didn't work out that way. Projects routinely ran over budget and behind schedule. By the time software shipped, requirements had changed. The market had moved. A running joke about large enterprise systems was that they were a perfect fit for the company...as of 18 months ago!

The Agile Revolution (2001-2020s)

The Agile Manifesto was a direct response to waterfall's failures. Its core insight: in complex, uncertain environments, you can't plan your way to success. You must iterate, learn, and adapt.

Agile shortened feedback loops. Instead of 18-month cycles, teams delivered working software in weeks. Requirements became conversations rather than contracts. Testing happened continuously, not just at the end.

The results spoke for themselves. Agile teams shipped faster, responded to change better, and produced software that more closely matched what users actually needed.

Note that there were exceptions where waterfall still made sense, like embedded software that needed to be tested against evolving hardware, or highly regulated industries.

For the most part, however, for two decades the industry consensus has been clear: agile beats waterfall. Iterate fast. Embrace uncertainty. Deliver incrementally.


Enter Context Engineering: The Return of the Specification

Now something interesting is happening.

The most effective AI-assisted development doesn't look like agile at all. It looks remarkably like waterfall.

When developers work with large language models like Claude or GPT-4, they quickly discover a pattern: the quality of the output is directly proportional to the quality of the input. Vague prompts produce vague code. Detailed specifications produce useful implementations.

This has given rise to "context engineering"—the practice of carefully crafting the information, constraints, and examples you provide to AI systems. Context engineering is essentially specification writing for machines.

The parallels to waterfall are striking:

  • Upfront investment in specification: Before touching code, developers spend significant time writing detailed requirements, examples, and constraints
  • Sequential phases: Define the context, generate the code, review the output, refine the specification
  • Heavy documentation: The context window has become the new requirements document
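To make this concrete, here is a minimal and entirely hypothetical sketch of what such a spec might look like when handed to an AI coding assistant. The endpoint, file path, and numbers are invented for illustration; the point is the shape, which reads like a miniature requirements document:

```markdown
# Feature: Password reset endpoint (hypothetical example)

## Requirements
- POST /auth/reset-request accepts an email address and always
  returns 202, regardless of whether the account exists
- Reset tokens expire after 30 minutes and are single-use

## Constraints
- Follow the existing error-handling pattern in src/middleware/errors.ts
- Introduce no new dependencies

## Example
Request:  { "email": "user@example.com" }
Response: 202 Accepted, empty body
```

Requirements, constraints, and worked examples up front, before any code exists: that is specification writing, whatever we choose to call it.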

The irony is profound. After decades of moving away from heavy upfront specification, we're returning to it—not because humans need it, but because AI does.


The Waterbed Problem Returns

Here's where things get dangerous.

In engineering, the "waterbed problem" describes a phenomenon where compressing one part of a system creates pressure elsewhere. Push down on a waterbed here, and it bulges up over there. You can't eliminate the complexity; you can only move it around.

AI development tools are creating exactly this dynamic.

The Math Is Merciless

Consider the numbers that are now being thrown around:

  • AI can generate code 10x to 100x faster than manual development
  • A single developer can now produce the output of a small team
  • Features that took weeks now take hours

This sounds like pure upside. It isn't.

If development speed increases 100x, what happens to testing? Does your QA capacity magically scale by 100x? What about code review? Security audits? Documentation? Integration testing?

The answer, of course, is that you've simply moved the bottleneck.

Where the Pressure Goes

When you compress development time through AI, the pressure shows up in predictable places:

  1. Testing: AI-generated code requires testing—often more testing than human-written code, because AI systems can produce subtle bugs that humans wouldn't make
  2. Review: Someone still needs to verify that the code does what it should, follows security best practices, integrates properly with existing systems, and delivers a clear, useful experience for its users
  3. Architecture: Faster code generation means architectural decisions come faster, with less time for deliberation
  4. Requirements: If you can implement anything quickly, choosing what to implement becomes the constraint
  5. Operations: More code shipping faster means more deployments, more incidents, more maintenance
  6. User Absorption: Users must keep pace with new features, changed workflows, and updated interfaces as the software evolves ever faster around them

Organizations that accelerate development without accelerating everything else are merely building technical debt at an unprecedented rate. They're pushing on the waterbed.


The Whole-Lifecycle Imperative

The lesson is clear: AI tools cannot be applied effectively in isolation. They must be applied across the entire development lifecycle.

This Is Not Optional

If you're using AI to accelerate coding but relying on manual testing, you're setting yourself up for quality disasters. If you're generating code faster but reviewing it at the same pace, defects will slip through. If you're shipping features rapidly but operating infrastructure manually, you'll drown in incidents.

The math doesn't work any other way.

What Whole-Lifecycle AI Looks Like

Organizations that successfully navigate this transition are applying AI comprehensively:

  • AI-assisted specification: Using AI to help write, validate, and refine requirements
  • AI-accelerated development: Code generation, completion, and transformation
  • AI-powered testing: Automated test generation, coverage analysis, and regression detection
  • AI-enhanced review: Automated code review, security scanning, and compliance checking
  • AI-driven operations: Incident detection, root cause analysis, and automated remediation
  • AI-supported architecture: Design review, pattern matching, and technical debt detection

The key insight: the acceleration ratio must be roughly consistent across all phases. If development gets 100x faster, testing needs to get close to 100x faster. Otherwise, testing becomes the bottleneck.

Your overall throughput is gated by your slowest phase.
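The gating effect is easy to see with a toy model. The sketch below uses made-up weekly capacities for each phase (the numbers are illustrative, not benchmarks) to show that accelerating only development just relocates the bottleneck:

```python
# Sketch: end-to-end throughput is gated by the slowest phase.
# Capacities are illustrative (work items each phase can process per week).

phases = {
    "specification": 40,
    "development": 20,
    "testing": 25,
    "review": 30,
    "operations": 35,
}

def throughput(capacities):
    """A pipeline can't ship faster than its slowest phase."""
    return min(capacities.values())

baseline = throughput(phases)  # gated by development: 20/week

# Accelerate only development 100x: the bottleneck simply moves to testing.
dev_only = dict(phases, development=phases["development"] * 100)
after_dev_boost = throughput(dev_only)  # gated by testing: 25/week

print(baseline, after_dev_boost)  # 20 25
```

A 100x investment in one phase bought a 1.25x improvement overall. That is the waterbed problem in arithmetic form.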


Strategic Implications for Leaders

1. Don't Chase Point Solutions

The temptation is to start with the most visible opportunity—usually code generation—and optimize later. This is a mistake. Point solutions create imbalances. Imbalances create failures.

We have seen organizations begin learning how to use AI by implementing it in specific parts of the organization:

  • documentation (AI greatly reduces the key-person problem)
  • test creation (going from 0 automated tests to comprehensive automated testing, including integration and end-to-end tests, is low-risk, fast, and hugely valuable)
  • code review guidance (helping senior engineers quickly zero in on the biggest issues and learning opportunities in junior engineers' code, making the best use of their limited time)
  • tech debt evaluation (reviewing the code base, looking for future challenges)

Each of these strategies increases quality and delivers lasting value, but none radically changes the speed of the software lifecycle, and once the initial gains are captured, the returns diminish. They are great mechanisms for leaping to a higher level of maturity, but different solutions are required to maintain that posture going forward.

A different long-term approach is to start with a comprehensive view of your development lifecycle. Identify every phase where work happens. Map the current throughput of each. Then invest in AI capabilities for each phase.

2. Measure Throughput, Not Activity

It's easy to celebrate when developers report 10x productivity improvements. But developer productivity is not organizational throughput. If testing becomes the bottleneck, you haven't improved throughput—you've just moved work in progress from one queue to another.

Measure end-to-end cycle time. Measure defect rates. Measure incidents. These metrics tell you whether you're actually moving faster or just generating more chaos.
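A minimal sketch of the end-to-end measurement, assuming you can record when each work item started and when it reached users (the item names and dates below are invented for illustration):

```python
# Sketch: measure end-to-end cycle time, not per-phase activity.
from datetime import datetime
from statistics import median

# (item, work started, delivered to users) -- illustrative data
work_items = [
    ("feat-101", datetime(2024, 5, 1), datetime(2024, 5, 9)),
    ("feat-102", datetime(2024, 5, 2), datetime(2024, 5, 4)),
    ("bug-311",  datetime(2024, 5, 3), datetime(2024, 5, 17)),
]

def cycle_times_days(items):
    """Days from 'work started' to 'in users' hands' per item."""
    return [(done - start).days for _, start, done in items]

times = cycle_times_days(work_items)
print(median(times))  # the headline number leaders should watch
```

Whatever tooling you use, the principle is the same: the clock starts when work begins and stops when users have the result, not when a developer marks a ticket done.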

3. Rethink Team Structure

Traditional team structures assumed human-speed development. Ratios of developers to QA engineers, code reviewers to developers, ops engineers to services—all of these were calibrated to pre-AI velocities.

Those ratios no longer hold. Organizations need to fundamentally reconsider how work is distributed across roles when development velocity changes by an order of magnitude.

4. Embrace the New Waterfall—Thoughtfully

Context engineering and specification-heavy development aren't bad. They represent the right way to work with current AI capabilities. The key is to bring the benefits of agile thinking—fast feedback, iteration, continuous integration—to this new paradigm.

Write specifications, but test them quickly. Generate code, but review it immediately. Ship features, but instrument them comprehensively. The phases may be more sequential than agile purists would like, but the cycles can still be fast.

And one of the fundamental pillars of agile development, frequent communication, still adds tremendous value in context engineering. This communication is both agent-to-agent and human-to-agent, carried in spec files, status files, and prompts. Frequent human-in-the-loop review is still required at every phase to confirm that systems are behaving as expected, but AI can be used to make those reviews as streamlined and efficient as possible. "Trust but verify" is good policy.


The Path Forward

We're at an inflection point. AI tools offer genuine productivity improvements, but they also create genuine risks. The organizations that succeed will be those that:

  1. Recognize that AI acceleration must be applied holistically
  2. Invest proportionally across the entire development lifecycle
  3. Measure system throughput rather than local optimization
  4. Adapt their organizational structures to new velocity assumptions
  5. Embrace specification-heavy approaches without abandoning fast feedback

The waterbed problem isn't new. Neither is the tendency to optimize locally while ignoring systemic effects. But the stakes are higher now. AI acceleration is too powerful to apply carelessly.

The choice isn't whether to adopt AI development tools. That's already inevitable. The choice is whether to adopt them strategically—across the whole lifecycle, in proper proportion, with clear-eyed understanding of the tradeoffs.

Push on the waterbed intelligently, or watch it bulge in unexpected and costly places.
