You might think I'm crazy for making such a statement, but if I had to summarize my experience, that is the only thing I can think of.
Let me walk you through a bit of my story, shall we?
The past
I started my career on a pretty big product. At first I couldn't understand much, but after a few months the software development flow became apparent:
- The customer/product came up with a Change Request (a Word document describing one or more product features).
- Product Architects (PAs) from different areas would analyze the CR document and determine whether their areas were impacted. Interestingly, they did this in a meeting where each could validate their assumptions.
- A PA would then write a High-Level Architecture Document (HLD), which linked back to the CR and described the effect of the requirement on a particular part of the system.
- The HLD would move to tech leads and senior developers, who would review it, clarify any doubts, and get a feel for the code involved. The outcome was a Detailed Design document, which could include pseudo-code sketches or even the exact change to be made.
- Finally, a developer would pick it up, implement it, design unit tests, test, and mark the task as delivered.
OK, so, a classic waterfall process. But apart from the developer testing their own changes, how was that different from what most businesses do today?
They had a separate QA team. And not just any QA team: they knew the system from top to bottom. I didn't mention, on purpose, that they also took part in the PA meeting and played a critical role in creating tests that cut across areas.
Developers were always expected to validate their changes at the level QAs do today. You should indeed check every permutation of the modified code and take your time testing them. In doubt? Two options: simplify your solution, or test the hell out of it.
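A sketch of what "checking every permutation" can look like in a modern C# codebase. `PasswordValidator` and its rules are hypothetical, invented here purely to show the shape; the point is the permutation table that xUnit's `[Theory]` gives you:

```csharp
using System;
using System.Linq;
using Xunit;

// Hypothetical validator, used only to illustrate permutation testing.
public static class PasswordValidator
{
    public static bool IsValid(string? password) =>
        !string.IsNullOrEmpty(password)
        && password.Length >= 8
        && password.Any(char.IsDigit)
        && password.Any(char.IsLetter);
}

public class PasswordValidatorTests
{
    // One [InlineData] row per permutation of the rules we care about.
    [Theory]
    [InlineData(null, false)]       // missing entirely
    [InlineData("", false)]         // empty
    [InlineData("short1", false)]   // too short
    [InlineData("12345678", false)] // digits only
    [InlineData("abcdefgh", false)] // letters only
    [InlineData("abcdef12", true)]  // satisfies every rule
    public void Covers_every_permutation_of_the_rules(string? password, bool expected) =>
        Assert.Equal(expected, PasswordValidator.IsValid(password));
}
```

Every row pins one assumption down; when a bug slips through, the fix starts by adding the missing row.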
The QA team had their own test process. They didn't work closely with developers, because they weren't worried about the ifs and elses; they always had a holistic view of the system. They knew that if devs were touching the customer registration flow, they would test it with everything they knew!
Back to now
What is different now? Why does it feel like systems nowadays are flaky, and that with every new version something is broken?
I don't have the all-seeing eye, but from my past experiences it is clear that we have less and less testing as part of the Software Development Life Cycle. And another critical point: software companies are leaner than ever!
We now have tools to automate everything. We can create tests at every level in a matter of seconds. Thanks, Claude Code. But why do we still lack the confidence to deploy to production when we make a change?
Problem(s)
Tests were never about exercising the code. It's not about clicking around, making sure your breakpoint gets hit, and checking that nothing breaks.
It's not about hitting that 80% coverage number by testing every single branch of logic.
It's also not about writing unit tests where you control every single step of the test. We have test abusers out there, leveraging Moq and all sorts of tools for exactly that.
I've seen more and more test code obsessed with applying the DRY principle, forgetting that tests should be simple, visible, and about validating our understanding of how things should work.
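To make the DRY point concrete, here is a sketch; `Order`, `Checkout`, and the helper names are all hypothetical. The commented-out version hides the intent behind helpers; the inline version repeats a little setup, and that's fine:

```csharp
using System;
using Xunit;

// Hypothetical domain types, just enough to make the example concrete.
public record Order(string CustomerTier, decimal Subtotal);

public static class Checkout
{
    public static decimal Total(Order order) =>
        order.CustomerTier == "Gold" ? order.Subtotal * 0.9m : order.Subtotal;
}

public class CheckoutTests
{
    // Over-DRY (don't do this): the reader must chase three helpers
    // to learn what is actually being tested.
    //
    //   [Fact]
    //   public void Discount_applies() =>
    //       AssertDiscount(BuildDefaultOrder(), ExpectedTotalFor(Tier.Gold));

    // Simple and visible: everything the test asserts is on the page.
    [Fact]
    public void Gold_customers_get_ten_percent_off()
    {
        var order = new Order(CustomerTier: "Gold", Subtotal: 100m);

        var total = Checkout.Total(order);

        Assert.Equal(90m, total); // 10% off 100
    }
}
```

A bit of duplication across tests is a feature here: each test stays readable on its own, without a tour of shared setup code.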
(possible) Solution
Tests exist to validate our assumptions about something. Because of that, make sure your tests:
- Are easy to understand
- Have meaning and a clear intent
- Assert everything (and more!), and explain why
- Use the right type of test for the scenario
Examples
Let me show you what I mean with some concrete examples.
Bad: Testing code branches, not assumptions
[Fact]
public void Register_WhenEmailIsNull_ThrowsException()
{
var mockRepo = new Mock<IUserRepository>();
var mockEmail = new Mock<IEmailService>();
var mockLogger = new Mock<ILogger<RegistrationService>>();
var sut = new RegistrationService(mockRepo.Object, mockEmail.Object, mockLogger.Object);
Assert.Throws<ArgumentNullException>(() => sut.Register(null, "password123"));
}
This test mocks everything, exercises one branch, and tells you nothing about whether registration actually works.
It's testing that an if statement exists.
Better: Validating your understanding of the behavior
[Fact]
public void New_user_can_register_and_receives_welcome_email()
{
// Arrange - use real components; only fake what crosses system boundaries
var db = new TestDatabase();
var emailInbox = new FakeEmailInbox();
var service = new RegistrationService(
new UserRepository(db.Connection),
new FakeEmailService(emailInbox),
NullLogger<RegistrationService>.Instance);
// Act
var result = service.Register("jane@example.com", "SecureP@ss1");
// Assert — each assertion documents a business expectation
Assert.True(result.Succeeded, "Registration should succeed for a valid new user");
var savedUser = db.Query<User>("SELECT * FROM Users WHERE Email = @Email",
new { Email = "jane@example.com" });
Assert.NotNull(savedUser); // The user must actually be persisted
Assert.True(savedUser.IsActive, "New users should be active immediately");
var email = emailInbox.Single();
Assert.Equal("jane@example.com", email.To);
Assert.Contains("Welcome", email.Subject); // We promised a welcome email
}
Notice the difference. This test reads like a specification: a new user registers, gets persisted, is active, and receives a welcome email. Every assertion has intent. If any of these break, you know exactly which assumption was wrong.
Testing across boundaries, like that QA team did:
[Fact]
public void Updating_product_price_does_not_change_price_in_existing_carts()
{
var db = new TestDatabase();
var productId = db.Insert(new Product { Name = "Widget", Price = 9.99m });
var cartId = db.Insert(new Cart { CustomerId = 42 });
db.Insert(new CartItem { CartId = cartId, ProductId = productId, PriceAtTimeOfAdd = 9.99m });
var catalog = new CatalogService(new ProductRepository(db.Connection));
catalog.UpdatePrice(productId, newPrice: 14.99m);
// Product price updated
var product = db.Query<Product>(productId);
Assert.Equal(14.99m, product.Price);
// But the cart must be untouched — the customer saw 9.99 when they added it
var cartItem = db.Query<CartItem>("WHERE CartId = @CartId", new { CartId = cartId });
Assert.True(cartItem.PriceAtTimeOfAdd == 9.99m,
    "Carts must snapshot the price at add-time; a catalog update should never silently change what a customer expects to pay");
}
That QA team from my past would have caught this. They knew that if you touch the catalog, you check the cart, checkout, invoicing — everything downstream. Two different teams own these areas, and neither writes tests for the other's assumptions. That's exactly where production bugs hide.
Conclusion
So, what should you take away?
Stop relying on coverage.
Stop mocking everything.
Start writing tests that describe what your system promises to do!
Verify those promises end to end.
If your test reads like a spec that a new team member could understand on day one, you're on the right track.
Tests are not about exercising code. They're about proving you understand what your software is supposed to do and that it actually does it.