DEV Community

Sahil Sahu


I Stopped Writing Tests and My Code Got Better

Yeah, I said it. Come at me.

Before you start typing that angry comment, hear me out. I'm not saying "don't test." I'm saying I stopped writing tests the way everyone tells you to.

The Problem Nobody Talks About

For years, I followed the gospel: Write tests first. Test everything. 100% coverage. TDD or bust.

My codebase had 3,247 tests. Coverage was 94%. CI took 23 minutes to run. I felt like a responsible adult developer.

Then I shipped a bug that wiped out $12k worth of data.

The tests? All green. ✅

What Actually Happened

The bug was simple: an edge case in our payment processing where users could submit the same transaction twice within 50ms. Race condition. Classic.

Why didn't the tests catch it? Because I tested what I thought about, not what actually breaks.

Our test suite was massive, but it was testing:

  • Happy paths (90% of tests)
  • Edge cases I imagined (9% of tests)
  • Whatever got me to 94% coverage (1% of tests)

Zero tests for the actual user behavior that broke things.
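For context, the standard fix for this class of bug is an idempotency key reserved before any async work begins. The article never shows the actual fix, so this is a hypothetical sketch: `processedKeys` and this `processPayment` are illustrative names, and a real system would use a database unique constraint or Redis `SETNX` instead of an in-memory `Set`.

```javascript
// Illustrative only: in-memory dedup. Production needs a shared store
// (DB unique constraint, Redis SETNX) so the guard survives multiple servers.
const processedKeys = new Set();

async function processPayment(idempotencyKey, payment) {
  // Reserve the key synchronously, BEFORE any await, so two requests
  // arriving 50ms apart can't both pass the check.
  if (processedKeys.has(idempotencyKey)) {
    return { success: false, reason: 'duplicate' };
  }
  processedKeys.add(idempotencyKey);

  // ...charge the card here...
  return { success: true };
}
```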

The Uncomfortable Truth

Most tests are just checking if functions return what you told them to return. That's not testing, that's just... writing the same logic twice.

// My old tests looked like this
describe('calculateTotal', () => {
  it('should add tax to subtotal', () => {
    expect(calculateTotal(100, 0.1)).toBe(110);
  });
});

// Cool story. But did I test:
// - What if subtotal is negative?
// - What if tax is a string "10%"?
// - What if this runs 1000 times per second?
// - What if the user's locale formats numbers differently?
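To make those questions concrete, here's a hedged sketch of what a `calculateTotal` that actually answers them might look like. The validation policy (throw vs. clamp) and the cents-rounding trick are my assumptions, not from the post.

```javascript
// Hypothetical hardened version: validates inputs instead of silently
// returning garbage. Throwing vs. clamping is a policy choice, assumed here.
function calculateTotal(subtotal, taxRate) {
  if (typeof subtotal !== 'number' || !Number.isFinite(subtotal)) {
    throw new TypeError('subtotal must be a finite number');
  }
  if (subtotal < 0) {
    throw new RangeError('subtotal cannot be negative');
  }
  if (typeof taxRate !== 'number' || !Number.isFinite(taxRate) || taxRate < 0) {
    // Catches the string "10%" case: reject it loudly, don't coerce it.
    throw new TypeError('taxRate must be a non-negative number like 0.1');
  }
  // Round to cents: 100 * 1.1 is 110.00000000000001 in floating point.
  return Math.round(subtotal * (1 + taxRate) * 100) / 100;
}
```

Notice that the naive version wouldn't even pass the original test reliably: `100 * 1.1` is not exactly `110` in IEEE 754 floats.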

What I Do Now Instead

1. I Write Fewer, Better Tests

Instead of 3,247 tests, I have about 400. But these 400 tests are mean.

// New style: Test like users actually break things
describe('Payment processing under stress', () => {
  it('handles rapid duplicate submissions', async () => {
    const userId = 'test-user';
    const paymentData = { amount: 100, card: '4242...' };

    // Fire 10 identical requests simultaneously
    const promises = Array(10).fill(null).map(() => 
      processPayment(userId, paymentData)
    );

    const results = await Promise.all(promises);
    const successful = results.filter(r => r.success);

    // Only ONE should succeed
    expect(successful.length).toBe(1);
  });
});

2. I Test Integration, Not Units

Unit tests are overrated. There, I said it again.

Your calculateTax() function works fine in isolation. But does it work when:

  • The database returns null
  • The API times out
  • The user's session expires mid-request
  • Redis goes down

That's what breaks in production. Not your pure functions.
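One of those failure modes can be sketched directly. This is a minimal, hypothetical example (the function names `fetchWithTimeout` and `getUserOrGuest` are mine, not from the article) showing how to test the "API times out" path by racing the call against a timer:

```javascript
// Race the backend call against a timeout. clearTimeout in .finally()
// prevents a stray unhandled rejection when the backend wins the race.
function fetchWithTimeout(backend, timeoutMs = 100) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error('timeout')), timeoutMs);
  });
  return Promise.race([backend(), timeout]).finally(() => clearTimeout(timer));
}

// The behavior under test: degrade to a guest user instead of crashing.
async function getUserOrGuest(backend) {
  try {
    return await fetchWithTimeout(backend);
  } catch {
    return { id: null, name: 'guest' };
  }
}
```

The test then stubs a backend that responds too slowly and asserts the fallback kicks in, which is exactly the kind of check a pure unit test of `calculateTax()` will never give you.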

3. I Use Property-Based Testing

This changed everything.

import fc from 'fast-check';

// Instead of testing specific cases, test properties
test('user input never causes crashes', () => {
  fc.assert(
    fc.property(
      fc.string(), // any string
      fc.integer(), // any integer
      fc.object(), // any object
      (name, age, metadata) => {
        // This should NEVER throw, no matter what garbage we pass
        expect(() => {
          createUser(name, age, metadata);
        }).not.toThrow();
      }
    )
  );
});

This generates thousands of random test cases. It found 7 bugs in my code that I would NEVER have thought to test.

4. I Test in Production

Controversial? Maybe. Effective? Absolutely.

// Feature flags + monitoring = production tests
let result;
if (featureFlags.newPaymentFlow) {
  try {
    result = await newPaymentProcessor.process(payment);
    metrics.increment('new_payment_flow.success');
  } catch (error) {
    metrics.increment('new_payment_flow.error');
    logger.error('New payment flow failed', { error, payment });

    // Fall back to the old flow
    result = await oldPaymentProcessor.process(payment);
  }
} else {
  result = await oldPaymentProcessor.process(payment);
}

I know in real-time if something's broken. My test environment never caught the issues that production monitoring does.

The Results

6 months after this switch:

  • Tests run in 4 minutes instead of 23
  • Found 3x more bugs before users did
  • Deployments went from scary to boring (in a good way)
  • Onboarding new devs is faster - 400 good tests are way easier to grasp than 3k mediocre ones

What I'm NOT Saying

I'm not saying "don't test." I'm saying:

❌ Stop writing tests just to hit coverage numbers

❌ Stop testing only happy paths

❌ Stop writing tests that just repeat your implementation

✅ Start testing like users actually use (and break) your app

✅ Start testing the integration points where things actually fail

✅ Start monitoring production like it's part of your test suite

The Backlash I'm Ready For

"But TDD!"

TDD is great for well-defined problems. But most of our work isn't well-defined. Requirements change. You'll rewrite those tests 5 times.

"But code coverage!"

Coverage tells you what you executed, not what you tested. 100% coverage with bad tests is worse than 60% coverage with good tests.
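To make that concrete, here's a hypothetical illustration (the `average` function is mine, not from the post): a test can execute every line of a function, reporting 100% coverage, while the real bug sails through unasserted.

```javascript
// A one-line function with a real bug: average([]) is 0/0, which is NaN.
function average(xs) {
  return xs.reduce((a, b) => a + b, 0) / xs.length;
}

// This "test" touches every line, so coverage says 100%...
console.assert(typeof average([1, 2, 3]) === 'number'); // passes, proves little

// ...but the case that actually breaks in production was never asserted:
console.assert(Number.isNaN(average([]))); // NaN slips straight into the UI
```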

"But best practices!"

Best practices from 2010 don't apply to 2025 codebases. We have better tools now. Use them.

Try This Instead

For your next feature:

  1. Write ONE integration test that exercises the whole flow
  2. Add property-based tests for any user input
  3. Test the failure modes (timeouts, null responses, etc.)
  4. Add monitoring to catch what you missed
  5. Ship it

You'll find more bugs, write less code, and ship faster.

Your Turn

Am I completely wrong? Probably partially. Tell me why in the comments.

Already doing something like this? Share your approach. Let's learn from each other.

Still writing unit tests for getters and setters? I'm sorry for your loss.


Hit that ❤️ if this made you question your test suite. Drop a 💀 if you think I'm about to get fired.

#testing #controversial #webdev #javascript #devops

Top comments (13)

david duymelinck

While the title is clickbait, the post has some solid points.

Only testing in production is the one thing I would not recommend. A better solution is using a CI/CD pipeline or an acceptance environment, or both.

Ingo Steinke, web developer

I'm still waiting for a project or company advocating test coverage in real life. Except for some formal quality gatekeeping pre-commit hooks and a bunch of professional QA coworkers, all that I kept hearing for decades was that developers tend too much towards perfectionism and we should rather be more pragmatic and accept shipping lower quality.

Of course it makes little sense to strive for 100% code coverage just to satisfy quantitative metrics, as that still does not guarantee 100% edge case coverage, and it wastes time testing what was already tested within a library or what would have immediate effects on something already tested in another scenario. On the other hand, most development teams still don't test enough, or don't test at all, and I'm afraid that many of those people will delight in reading a headline like this one. "I Stopped Writing Tests and My Code Got Better" might be true, but it can too easily be misunderstood to justify neglecting quality assurance.

david duymelinck

I think the clickbait titles go with the current attention span. If it is not spectacular you don't get views.
It is not a good evolution, but if you can use it to get good information out, it's gaming the system, isn't it?

Sahil Sahu

I agree with you; I'm just sharing my experience.

Muhammad Haris

Please try my AI problem solver.

Elanat Framework

A great comment: "Most tests are just checking if functions return what you told them to return. That's not testing, that's just... writing the same logic twice."

Thanks Sahil Sahu.

We have become too addicted to these unit tests.

Sahil Sahu

I've been through that for a very long time.

Andrew Baisden

Writing good tests instead of writing tests for the sake of writing tests is a good solution I find.

Sarah Varghese

True. Many people don't test well; they just run some pre-written tests. In the overall system architecture, every component needs to be tested in the full environment.

Sergey Inozemtsev

I actually aim for 100% coverage in my Python projects. It helps track breakpoints after any change. AI handles most of the unit tests.

Nihal Lakra

Hey bro, wanna connect? Just looking for friends who are also in web dev.

Sahil Sahu

sure
