DEV Community

The TDD Trap: How Test-First Becomes Bad Design Forever in Most Teams

Leon Pennings on November 26, 2025

1. Introduction: The Myth of Emergent Design

Test-Driven Development (TDD) promises a simple, seductive idea: if you write tests first...
 
david duymelinck

I think the core of the post is:

Preserve the wrong design to keep the tests green
or
Break the tests (often hundreds of them) to fix the model.

If you are not willing to break the tests, the application is not advancing.

Instead of writing tests first, I take a write-tests-as-early-as-possible attitude.

If I'm still exploring the domain, breaking tests or throwing away loads of tests is part of the progress.

Software is not like a house: you can replace load-bearing walls if you have a good strategy.

 
Leon Pennings

The core of the article is that tests are for testing, not for designing. It's like using a fork to eat soup: the wrong tool for the job.
As long as tests are used to protect non-negotiable behaviour, they’re an excellent tool.
If they’re used to avoid or replace proper domain modelling, then they’re being misapplied.

 
david duymelinck • Edited

You are right. But to be honest, I never had the idea of using TDD as a modelling tool.
First you model, then you write the tests that support the model. Then write the code.
When the model changes, write the tests for the updated model and write the code.
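That workflow can be sketched with a hypothetical example (the Order entity, its invariant, and the test are all illustrative assumptions, not anything from the article): the domain object is modelled first, and the test is written afterwards to support the model.

```python
# Hypothetical model-first example: the Order entity and its
# pay-at-most-once invariant come out of modelling the domain;
# the test below is written afterwards to support that model.

class Order:
    """Minimal domain entity: an order that can be paid at most once."""

    def __init__(self, total):
        self.total = total
        self.paid = False

    def pay(self):
        # Invariant discovered while modelling: paying twice is invalid.
        if self.paid:
            raise ValueError("order already paid")
        self.paid = True


def test_order_can_only_be_paid_once():
    order = Order(total=100)
    order.pay()
    try:
        order.pay()
    except ValueError:
        return  # the invariant held
    raise AssertionError("paying twice should be rejected")


test_order_can_only_be_paid_once()
```

When the model changes (say, partial payments are introduced), this test is rewritten along with the model rather than preserved as-is.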

Treating tests as if they are written in stone is the worst thing you can do to an application. That is why I highlighted that part as the core.
I don't think TDD is to blame for people not willing to put in the work.

 
Leon Pennings

Yes, true — the fundamental goal of TDD is to fulfill the behavior defined by the tests. That’s why TDD isn’t inherently a design driver: it simply tells you to make red → green.

The problem arises when “green” is treated as a definition of done. The continuous design of a rich domain model becomes invisible in the process and is all too easily skipped. That’s why the risk of treating TDD as a false idol is very real.

In my experience, rich domain models don’t suit TDD very well. Implementing the model is part of discovering and learning about it, and there’s often no single “behavior” to capture at the start — writing lots of tests for evolving domains just isn’t practical.

 
david duymelinck

The problem arises when “green” is treated as a definition of done.

Isn't that another way of saying don't change the tests?

writing lots of tests for evolving domains just isn’t practical.

How can you write lots of tests for an evolving domain? You can only write tests for the parts you know.
What is the chance the base functionality of an evolving domain changes?

 
Leon Pennings

No, it’s not another way of saying “don’t change the tests.”
What I mean is that the initial implementation is often treated as complete the moment the tests turn green. The task becomes “fulfill the user story,” not “first understand the domain.” The ongoing design of a rich domain model becomes invisible, and that’s where the risk lies.

When I say a domain is evolving, I mean our understanding of the domain is evolving. Early on you’re not just coding entities; you’re discovering invariants, boundaries, policies, and relationships. Entities are domain objects, yes — but the domain model is much more than its entities, and those other parts tend to shift significantly as insights emerge.

Because of that, early tests rarely survive long in rich models. They’re based on the first, shallowest interpretation of the domain, so they end up fossilizing assumptions that later turn out to be wrong.

Trivial or low-level tests don’t help much here. They almost never catch real bugs, but they add friction and rework whenever the model evolves. The only tests that remain valuable are the ones that protect non-negotiable, high-level business behaviour.

If by “tests” you mean the business-facing, domain-level ones, then yes — those stay stable. But classic TDD’s fine-grained, implementation-first testing simply doesn’t match how rich domain models evolve. It just multiplies the amount of work every time the domain deepens or changes.
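The distinction can be illustrated with a toy example (the shipping-cost rule, threshold, and fee are all invented for illustration): a business-facing test pins the rule itself and survives refactoring, while an implementation-first test pins an internal detail and breaks as soon as the model evolves.

```python
# Hypothetical contrast between a business-facing test and an
# implementation-first one, using an invented shipping-cost rule:
# "orders over 50 ship free" is the non-negotiable behaviour.

def shipping_cost(order_total):
    # Implementation detail: today, a flat fee below the threshold.
    # This internal shape may change as the domain model deepens.
    FREE_SHIPPING_THRESHOLD = 50
    FLAT_FEE = 5
    return 0 if order_total > FREE_SHIPPING_THRESHOLD else FLAT_FEE


# Business-facing test: protects the rule itself, so it stays
# stable while the implementation is reworked underneath it.
def test_orders_over_threshold_ship_free():
    assert shipping_cost(60) == 0
    assert shipping_cost(40) > 0


# Implementation-first test (the brittle kind): pins the exact fee,
# so it breaks the moment the pricing model evolves, without
# protecting any real business behaviour.
def test_flat_fee_is_exactly_five():
    assert shipping_cost(40) == 5


test_orders_over_threshold_ship_free()
test_flat_fee_is_exactly_five()
```

If the flat fee later becomes weight-based, the first test still passes unchanged; only the second one demands rework.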

 
David Sporn

Fundamentally so true.

At most, TDD can verify that code modifications do not break covered existing behaviours. If the tests are written before the operational code, they can show that the modification complies with the given specifications, and thus they serve as an element of proof that the job is "done". That's already a great thing: one doesn't need to "believe me", the tests are there for that.

Design emerges when software engineers:

  • are fed enough specifications to have a glimpse of the big picture
  • ask questions to challenge the specs and refine them; the more one has worked on diverse projects and domains, the easier it is to identify and ask the right questions
  • are generally given time to reflect on their work as a whole and to anticipate the probable direction of the project, so that they can provision some architectural groundwork that nudges that elusive design to emerge.

Or, in short, the right design emerges when software engineers have a clear idea of the big picture instead of a little snippet.

 
Leon Pennings

You’ve hit the nail on the head. I completely agree — developers are most effective when they truly understand the business domain. Treat them like assembly-line workers, implementing snippets without seeing the bigger picture, and you end up with bloated applications, technical debt, and fragile designs. Rich domain modelling requires discovery and understanding, not just executing “red → green.” In my experience, TDD and large test suites often only add to the bloat. There’s no real shortcut for ensuring developers deeply understand the business domain.