Introduction
No skill is more important for programmers than the ability to write code that works. But all too often, developers don't t...
While I concur with most of what you've shared, and am an advocate of TDD myself, I'm also aware of some of the reasons it's not used as often as it could be - reasons I suspect you could have discussed a little.
The biggest of them is TTM (if we're going to indulge in TLAs, let's indulge - in this case, Time To Market). There's an old adage that comes in many paraphrased forms, so I'll offer you mine off-the-cuff: if you go to market when you're ready, you're going to market far too late. There are more famous versions.
Couple that with the desire/need/push to reach market ASAP and there is pressure to streamline everything, of course, and TDD is most definitely on that radar, as are many other niceties.
That said, the emergence of better and better test frameworks is making it easier and easier to start with test designs to code towards.
A second impediment, related to the first but touching on a competing fashion, is the desire for rapid prototyping - or agile methodologies, if you prefer (for which no funky TLA leaps out at us). The goal there is to drive towards an MVP (yay, we did squeeze one in - Minimum Viable Product), prioritising UI and UX deliveries (2LAs can of course masquerade as TLAs, given Two and Three both start with T; I hope these two are basic vocab, but in case not: User Interface and User Experience) to get rapid client or end-user feedback on the direction of the UX. To wit, we could argue that UX design and agile methodologies compete with TDD for a team's attention.
None of which means you can't have TDD - as I said, I'm an advocate. It certainly starts with having key advocates among the stakeholders who prioritise investments (as what a team does is an investment), and if the budgets are generous enough there is room for UX design and TDD and agile and a focus on TTM ...
The benefits of TDD are of course clear, and you touch on them: reducing the cost of bugs (finding them, fixing them, and the cost to reputation and goodwill if they reach market) produces a net saving ...
When TDD impacts business timelines, it most definitely should not be done.
Some say that TDD is no slower because of the safety net it provides, and you don't have to fix bugs later.
To that, I always ask "how does TDD save me from bad requirements?"
I would also add, regardless of language, senior developers are usually writing testable code by default - otherwise they're probably not all that senior. So in their case, writing tests after the fact doesn't tend to suffer from the refactoring cycle.
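To illustrate what I mean by "testable by default" - a minimal, hypothetical sketch (the names are purely illustrative, not from any real codebase): the clock is passed in rather than hidden behind a call to date.today(), so a test written after the fact needs no refactoring first.

```python
from datetime import date


class Invoice:
    def __init__(self, due: date):
        self.due = due

    # The current date is a parameter, not a hidden global lookup,
    # so a test can supply any date it likes - before or after the fact.
    def is_overdue(self, today: date) -> bool:
        return today > self.due


def test_overdue_invoice():
    invoice = Invoice(due=date(2023, 1, 1))
    assert invoice.is_overdue(today=date(2023, 6, 1))
    assert not invoice.is_overdue(today=date(2022, 12, 1))
```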
'Writing tests after the fact' is not TDD. I've just come from a project with 97% test coverage, mostly written after the fact. They were nearly all full-stack tests (using a browser and Selenium) and the code underneath was a pile of garbage with many defects. It took the team 2 years to wrangle the (pretty small) code-base to a point of sanity - there was massive copy-paste, violation of 3NF in the database schema, hand-rolled forms not using the platform libraries, no database foreign keys. TDD is a design technique (infoq.com/articles/test-driven-des...), so writing tests second is something else entirely, with output that looks superficially similar.
Strawman detected!
"'writing tests after the fact' is not TDD."
I never argued to the contrary.
Re your project with 97% test coverage: that sounds like a legacy project, with all the trappings that come with legacy projects, and someone who devised a test strategy that wasn't effective (we can get into the "how to effectively measure test coverage" debate if you really like...).
Also, thanks for the link, but I don't need an education in what the concepts are... my comments are aimed more at figuring out when and why people use different strategies, so that I might be able to tweak our approach to solving problems (be those in the code, or in the rest of the business). I manage a team of developers working on in excess of 50 software projects spread across 5 different geographical locales.
TDD, BDD, Scrum, Agile, SRP, 3NF, etc. are all perfectly valid principles & ideas. The key is in knowing when to use each, and why, and when to avoid (or modify) other people's ideas.
So when you don't use each of these things, what do you do instead? I see lots of detractors of TDD (including 'TDD is only for juniors') without anyone suggesting an alternative strategy.
Did I say "TDD is only for juniors"? Nope... didn't think so.
It's a little difficult to state which techniques (plural) we use whenever TDD isn't appropriate, because the technique we use depends on a lot of context. Sometimes, hell, even often, TDD is appropriate, so we use that.
"Appropriate tool for the job" isn't limited to languages & frameworks - it applies to working methodologies too. Much the same as some of our projects are waterfall, some Scrum, some Lean, etc.
If you're aware of TDD, and maybe even like it, chances are you're aware of the alternatives.
I'm actually a strong advocate for TCR, but I'm also a realist and accept that it isn't always applicable - and the "test" part of TCR doesn't have to be unit tests. Also, we use gitflow, so I don't care that every commit is tested, but every merge is - one way or another.
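For anyone unfamiliar: TCR is Kent Beck's "test && commit || revert" loop. A rough sketch of the idea as a Python script (assuming pytest as the test runner and a git working tree; the commit message is mine):

```python
import subprocess


def tcr() -> None:
    """test && commit || revert: only changes that keep the tests green survive."""
    # Run whatever your test suite is; pytest is just an assumption here.
    tests_green = subprocess.run(["pytest", "-q"]).returncode == 0
    if tests_green:
        subprocess.run(["git", "commit", "-am", "TCR: tests green"])
    else:
        # Throw the failing change away: back to the last committed state.
        subprocess.run(["git", "checkout", "--", "."])


if __name__ == "__main__":
    tcr()
```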
My big issue with talking in absolutes is that none of this debate proves that any code, written by any method, is 100% bug-free or is 100% what the end users wanted/needed. Of course, I also acknowledge that 100% perfection is a myth.
I guess, ultimately, we follow ADD more than anything else, but even then, we follow our version of it.
Writing a test first converts the requirements into concrete language. If you can't write the test, it shows that you need to go back to your Product Owner and get clarification - it's actually superb at saving you from bad requirements. Otherwise, how do you write the code? Waiting until QA (or even later) to find a requirements defect always costs more (it's called the hockey-stick cost-to-change curve): deepsource.io/blog/exponential-cos...
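To make that concrete, here's the sort of thing I mean - a hypothetical requirement ("orders over £100 ship free") turned into tests, where writing the boundary case first is exactly what flushes out the question for the Product Owner:

```python
# Hypothetical requirement: "orders over £100 get free shipping".
# Writing the tests first forces the ambiguity into the open:
# is shipping free at exactly £100, or only above it? That is the
# question you take back to the Product Owner before coding.
def shipping_cost(order_total: float) -> float:
    return 0.0 if order_total > 100.0 else 4.99


def test_order_over_100_ships_free():
    assert shipping_cost(100.01) == 0.0


def test_order_at_exactly_100_still_pays_shipping():
    # Only writable once the PO has answered the boundary question.
    assert shipping_cost(100.00) == 4.99
```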
My problem here is that you're talking in absolutes.
"If you can't write the test, it shows that you need to go back to your Product Owner..."
Does it though? Maybe you're just a junior and need guidance from others on the team? There's a myriad of other reasons I can think of that tests might not be possible/easy/time efficient (esp. when writing a POC, for that last one).
"...it's actually superb at saving you from bad requirements."
Superb? Perhaps. But will TDD always catch a bad/incomplete requirement? No. You could be following TDD and SRP religiously, write a perfectly valid test, but break some other area of the system (and if that other area isn't covered by tests, regardless of when those tests were written, we have a regression...).
Do you think you, or someone else at your work, has a complete, true, and always up-to-date picture in their head of how the system functions and how users interact with it? At least the talk I saw from Dave Hahn was very clear in pointing out that NO-ONE at Netflix holds the full system in their head.
I don't think I ever argued for "QA (or even later) to find a requirements defect..." Did I?
I totally agree with this; I've experienced both sides. When I've had complete freedom and time, of course TDD was used. I also had, and have, many projects where the main priority is working fast, and tests are left aside.
Though in real production products, it's a freaking life saver to have tests
I'm personally much faster writing code with tests, and that's why I always do it. You have to test (right?), so how do you do it? Manually, with manual QA? That sounds like a way to not go fast. And unless your system is trivial, over time you will introduce defects, you can't refactor/rewrite safely without introducing more, and a 'big ball of mud' ensues.
I worked for a startup which did TDD right from the start. We launched on time, and had one (yes, one) defect on launch day. It's always faster to get to market with tidy code which can be refactored, bug-fixed, etc. with confidence - no one wants to get to market first with a product that doesn't work (that's a good way to kill a startup), unless of course you're in that weird world (hello, social media) where your users don't care if the software doesn't work very well or fails occasionally. I've yet to work in that space after 30 years.
Coding is not about piling up bugs, and TDD is not about fixing bugs that haven't been coded yet either. TDD is about defining requirements before implementing them.
In many cases requirements are blurry, and in such situations TDD is a complete waste of time. Tests are also a way to ensure that something which already works will continue to work the way it should. It's far faster to make a test from existing code and behaviour, locking it in as a requirement, than to imagine and lock in requirements for something when you yourself are unsure what behaviour should be expected.
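For example, this kind of "lock it in" test - a hypothetical sketch of pinning down what existing code does today:

```python
# Hypothetical legacy function whose exact behaviour nobody remembers.
def legacy_normalize_code(raw: str) -> str:
    return raw.strip().upper().replace(" ", "-")[:10]


def test_lock_in_normalize_code():
    # We assert what the code *does* today, not what it "should" do,
    # so any future change in behaviour is at least deliberate.
    assert legacy_normalize_code("  ab 123  ") == "AB-123"
    assert legacy_normalize_code("very long product code") == "VERY-LONG-"
```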
TDD is a tool and, like any tool, it should be used when needed. Using it like an ideology might cost a lot of time and effort when it's counter-productive.
What you are describing above is almost straight out of Michael Feathers' 'Working Effectively with Legacy Code', where you take a piece of code with uncertain behaviour and write tests for it to capture that behaviour before you start modifying it. That's a bit like how we used to work in the (bad) waterfall days - write code, throw it over the fence to QA, get them to test it, find defects, report back - a complete waste of everyone's time.
So how can we identify when it's counter-productive? What I mostly encounter is 'a mess with no tests' or 'a mess with tests', neither of which emerges from TDD. Mostly I see people not using it out of (I assume) ignorance, or from reading comments like this.
One important step I think is missing here: make sure the test fails before writing any code. I often find myself writing tests that pass before implementing the feature; these tests have no real value and may even be counterproductive, as you may end up with an incorrect implementation while thinking it's fine.
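For example (a hypothetical feature, just to show the order of operations):

```python
# Step 1 (red): write the test before the feature exists and run it.
# It must fail - e.g. with a NameError, or a wrong result. A test you
# have never seen fail proves nothing about the code it covers.
def test_member_discount():
    assert apply_discount(price=100.0, is_member=True) == 90.0


# Step 2 (green): only once you've seen the failure, implement the feature
# and run the test again to watch it pass.
def apply_discount(price: float, is_member: bool) -> float:
    return price * 0.9 if is_member else price
```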
Nice point - many people forget about this. If a test passes first time, I'm always worried and try to break the code to see if it's really working.
Despite claims, TDD is not a silver bullet. And just like any technique it works for some people but not for all. It heavily depends on way of thinking, habits and experience.
By the way: functional programming approaches are far more effective than TDD at reducing the number of bugs and the number of necessary tests. Where TDD reduces bugs by half, FP reduces them by at least an order of magnitude.
Could you post some links to evidence that FP reduces programming errors by so much please?
I'm speaking from my own experience. A couple of years ago I switched to an FP-inspired style which leverages monads to represent "special states" (missing value, value or error, not-yet-available value). This allowed me to eliminate whole classes of errors from my code and heavily reduced the number of necessary tests. In the vast majority of cases, if the code compiles, it works, and tests are necessary only to check business-level requirements (for example, input value constraints).
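Roughly the style I mean, sketched here with a hand-rolled value-or-error type (in practice I use a proper FP library; this is just to show the shape):

```python
from dataclasses import dataclass
from typing import Callable, Generic, TypeVar, Union

T = TypeVar("T")
U = TypeVar("U")


@dataclass(frozen=True)
class Ok(Generic[T]):
    value: T

    def map(self, fn: Callable[[T], U]) -> "Result[U]":
        return Ok(fn(self.value))


@dataclass(frozen=True)
class Err:
    reason: str

    def map(self, fn: Callable[..., U]) -> "Result[U]":
        return self  # errors short-circuit: nothing downstream runs


Result = Union[Ok[T], Err]


def parse_age(raw: str) -> "Result[int]":
    # The "value or error" state is explicit in the return type, so
    # callers can't forget the failure case - a whole class of
    # "forgot to check" bugs needs no tests at all.
    return Ok(int(raw)) if raw.isdigit() else Err(f"not a number: {raw!r}")


print(parse_age("42").map(lambda n: n + 1))    # Ok(value=43)
print(parse_age("oops").map(lambda n: n + 1))  # Err(reason="not a number: 'oops'")
```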
You may find more details in my blog here, at dev.to.