DEV Community

Felix Asher

When Tests Keep Passing, but Design Stops Moving

There’s a moment I keep running into when practicing TDD.

Tests pass.

Coverage improves.

Refactoring feels safe.

And yet, at some point, the design just… stops moving.

Not because the system is finished.

Not because the problem is solved.

But because the tests no longer seem to challenge anything.

They keep confirming decisions that already feel locked in.


The Stall Is Quiet

This stall is subtle.

Nothing is obviously wrong.
CI is green.
The codebase looks “healthy.”

But learning has slowed to a crawl.

Every new test feels inevitable.
Every refactor feels local.
No assumption feels truly at risk anymore.

The system still changes — but only within boundaries that were set long ago.


When Tests Stop Reducing Doubt

I don’t think this happens because we write tests incorrectly.

It happens when tests quietly shift roles.

At first, tests are exploratory.
They surface uncertainty.
They force us to articulate expectations we’re not sure about yet.

But over time, many tests become something else:
a correctness check for decisions that are no longer questioned.

When that happens, a green test doesn’t reduce doubt anymore.
It just confirms structure.

The feedback loop is still running —
but it’s no longer teaching us anything new.


Assumptions That Settle Too Early

Looking back, what surprised me wasn’t that the design stalled,
but how early some assumptions had solidified.

Often, they weren’t explicitly designed.
They just became embedded:

  • in class boundaries
  • in data shapes
  • in integration points
  • in what tests bothered to assert — and what they didn’t

Once those assumptions were encoded deeply enough,
tests started protecting them instead of questioning them.

At that point, design work hadn’t stopped —
it had just moved out of sight.
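A tiny, hypothetical sketch of what that looks like in practice. The names (`load_user`, the `{"id", "email"}` shape) are invented for illustration; the point is that the test guards a data shape that settled early, rather than questioning it.

```python
# Hypothetical example: a test that protects an embedded assumption
# (a data shape) instead of challenging it.

def load_user(raw: dict) -> dict:
    # The shape {"id": ..., "email": ...} solidified early and is now
    # encoded in every caller -- and in every test.
    return {"id": raw["id"], "email": raw["email"].lower()}


def test_load_user_shape():
    user = load_user({"id": 1, "email": "Ada@Example.com"})
    # This assertion confirms the structure. It will stay green for as
    # long as nobody questions the schema itself.
    assert user == {"id": 1, "email": "ada@example.com"}
```

The test is correct, useful even. But every green run quietly re-ratifies the schema, and nothing in the suite ever asks whether that shape is still the right one.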


An Uncomfortable Experiment

That realization pushed me toward some uncomfortable experiments.

I started writing tests that cut end-to-end much earlier than felt reasonable —
sometimes before I felt I “understood the design.”

I tried to think less in terms of features,
and more in terms of invariants:
what must never break, no matter how the system evolves.
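To make that concrete, here is a minimal sketch of an invariant-oriented test, using an invented `Account` type. Instead of asserting one feature's expected output, it hammers the object with random operations and checks one thing that must never break: the balance never goes negative.

```python
# Hypothetical sketch: testing an invariant rather than a feature.
# Account, deposit, and withdraw are illustrative names.

import random


class Account:
    """Minimal account: the balance must never go negative."""

    def __init__(self):
        self.balance = 0

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self.balance += amount

    def withdraw(self, amount):
        if amount <= 0 or amount > self.balance:
            raise ValueError("invalid withdrawal")
        self.balance -= amount


def test_balance_never_negative():
    acct = Account()
    for _ in range(1000):
        amount = random.randint(-50, 50)
        try:
            if random.random() < 0.5:
                acct.deposit(amount)
            else:
                acct.withdraw(amount)
        except ValueError:
            pass  # rejected operations are fine; the invariant is the point
        # The invariant must hold after every single step:
        assert acct.balance >= 0
```

A feature test would have pinned one scenario. This one keeps applying pressure no matter how the implementation evolves, which is exactly the kind of test that stays uncomfortable longer.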

And I began paying attention to what actually changed for me
when a test turned green.

Often, it wasn’t confidence in correctness.

It was the moment when I no longer felt the need
to question a particular assumption.


Integration Tests as Pressure, Not Coverage

This isn’t an argument for more tests,
or even for integration tests as a category.

What mattered wasn’t where the test lived,
but what kind of pressure it applied.

Some tests only verify that parts fit together.
Others force assumptions to collide with reality:
concurrency, timing, data consistency, failure modes.

Those collisions are uncomfortable.
They’re slow.
They’re hard to mock away.

But they’re also where design tends to move again.
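A rough sketch of the difference, assuming a hypothetical `Counter` class. A fit-together test would just call `increment()` once and check the value. A pressure test makes the atomicity assumption collide with real concurrency:

```python
# Hypothetical sketch: applying concurrency pressure to an assumption.
# Counter is an invented example.

import threading


class Counter:
    """A counter whose increments must stay correct under concurrency."""

    def __init__(self):
        self._value = 0
        self._lock = threading.Lock()

    def increment(self):
        # Remove this lock and the pressure test below can start failing,
        # while a single-threaded "fit together" test stays green.
        with self._lock:
            self._value += 1

    @property
    def value(self):
        return self._value


def test_counter_survives_concurrent_increments():
    counter = Counter()
    threads = [
        threading.Thread(
            target=lambda: [counter.increment() for _ in range(1000)]
        )
        for _ in range(8)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # The assumption under pressure: increments are atomic.
    assert counter.value == 8 * 1000
```

The single-threaded version of this test could never tell you the lock matters. Only the collision does.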


This Isn’t About Replacing Design

I want to be clear about what this is not.

This is not an argument that tests should replace design work.
And it’s not a claim that thinking upfront is useless.

It’s an observation that some assumptions look stable on paper —
and only reveal their fragility when exercised across real boundaries.

In those cases, certain tests don’t substitute for design.
They expose where design was incomplete, optimistic, or simply wrong.


Why This Matters More with AI

Recently, this question has felt more urgent.

With AI-assisted coding, it’s trivial to generate:

  • plausible implementations
  • passing tests
  • convincing refactors

Green becomes cheap.

Which makes it even easier for assumptions to settle unnoticed.

If tests only confirm structure,
AI can help us reach that plateau faster —
without helping us notice that learning has stopped.

In that context, the question isn’t
“Can AI write correct code?”

It’s:
Who is still responsible for noticing when design stops moving?


Design as Ongoing Discovery

What I’m circling around isn’t a new methodology.

It’s a reframing.

Design isn’t a phase that precedes tests.
And tests aren’t just a safety net for finished ideas.

In their most useful form, tests are instruments of doubt.
They mark which assumptions we still need to question —
and which ones we can afford to stop worrying about.

When tests stop doing that,
the stall isn’t a failure of discipline.

It’s a signal.

And the work is to find a way
to make assumptions dangerous again.


Closing Thought

A green test doesn’t mean “this part is correct.”

Sometimes, it just means:
this assumption has quietly stopped being questioned.

When design stops moving,
the most important thing may not be writing better tests —
but finding ways to let uncertainty back in.

That’s the part I’m still trying to understand.
