If you are interested in reading this article in Spanish, check out my blog The Developer's Dungeon
Until two years ago I had never written a single unit test in a professional environment. Yes, you heard that right. I am ashamed to admit it, but in the first three years of my career no one ever asked me to write a test, and I couldn't have cared less.
I didn't know how they worked, much less understand the benefits they could bring. Sure, I knew the theory and had read on blogs that they could be useful, but I genuinely did not consider them important.
Then I read "Clean Code" and "The Clean Coder" [Robert C. Martin] and guilt took over my body. I realized that during my career I had never presented a single repeatable proof that my software was working; it was functioning out of pure luck, tested only by me in very specific situations.
I also realized how irresponsible I had been to deliver products like this, and that I was harming not only the users of my software but the craft I so deeply love.
Suddenly I remembered all the times bugs would come up over and over again from parts of the code that were already "finished", and how a change in an unrelated piece of software could cause tons of issues in areas that had already been manually tested and "fixed".
I also understood that what I felt before a refactoring was fear, deep fear of the errors I could introduce if I opened that door.
It was at that moment that I decided to take a stand and make a change for myself. As Michael Feathers says, "legacy code is code without tests", so I committed that any code I produced would not be legacy, and therefore would always have tests.
I understood that development is not finished until your code has a suite of tests to prove it right, or at least not wrong.
Since then, after making my code work I would proceed to write tons and tons of unit tests, trying to cover as much as possible. When a bug appeared I would fix it and then write a unit test for it. And while my code and my self-esteem as a developer got better, my fear did not go away.
In fact, every time I had to refactor something, that fear would creep up on me again; having a big suite of unit tests did not prevent it. What is worse, when a bug was found that a unit test should already have caught, but the test was passing for a different reason, my confidence in that test suite would drop dramatically. Many times I went home questioning whether all that work on unit tests was even worth it, or whether I was just fooling myself.
Then I read "Test-Driven Development: By Example" [Kent Beck] and a new possibility came knocking on my door. Maybe the order of the development process did matter, and doing things "backwards" could have some benefits? I decided to give it a try.
A little context:
TDD is a practice in which you let your tests guide the design of your solution; following this process will make your code decoupled and testable.
Usually, when we want to develop a new feature, we immediately dive into the code, start typing like crazy, and come up with a very crappy but functioning solution; then we write a unit test and call it a day. With TDD this process is reversed:
- Ask yourself: what should your software do? In which situations might it not work? Then write your first test to prove your assumptions. This test will obviously fail, since there is no implementation yet.
- Then do the smallest and easiest thing possible to make your test pass; just focus on making that light go Green.
- Finally, when the test is passing, refactor your implementation and fix all the "bad things" you allowed to happen in the previous step, always making sure that light stays Green.
This cycle is called Red-Green-Refactor.
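To make the cycle concrete, here is a tiny hypothetical Red-Green-Refactor round in TypeScript. The `slugify` helper and its expected output are invented for illustration, not taken from any real project:

```typescript
// A hypothetical Red-Green-Refactor round for a tiny slugify() helper.
//
// Red:      write the test first; with no implementation, it fails.
// Green:    do the simplest thing that makes it pass.
// Refactor: clean up the implementation while the test stays green.

function slugify(title: string): string {
  // The refactored version: trim, lowercase, collapse whitespace to hyphens.
  return title.trim().toLowerCase().replace(/\s+/g, "-");
}

// The test written in the Red step, kept green through the refactor.
if (slugify("  Hello TDD World ") !== "hello-tdd-world") {
  throw new Error("Red: slugify did not produce the expected slug");
}
console.log("Green: slugify passes");
```

The point is not the helper itself but the order: the assertion existed before `slugify` did, so every implementation choice was made to satisfy an already-failing test.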
Now that we understand the basics let's go over the reasons why I fell in love with TDD:
Love at first sight
By following this path you always start in a state where your code is not working, and then you make it work. This is the first reason I love TDD: CONFIDENCE.
When you first see your code and test fail, and then after some changes you make them pass, you get an instant gratification that fuels your confidence and trust in your implementation. It is a subtle change, until you face that mighty refactoring and notice you are not afraid anymore: if you break something, your tests are there guarding your back.
Second Date
Usually, when you have to develop a new feature, you have to touch existing code. You change a few things here and there, but then you end up in a place no developer wants to be: the code is not working anymore, and you made so many changes you are not sure which one broke it. You spend a lot of time trying to find that change, or sometimes you roll back everything and start again from zero.
While facing this situation I encountered another reason for loving TDD: LOW RISK.
During TDD you are in a very short and fast-paced Red-Green-Refactor cycle, so every run introduces changes in a very incremental fashion. Since the code was working a few seconds ago, it is trivial to find the changes that turned the light Red, fix them, and see our precious Green light again.
Third Date
Many times, when I was going through that phase of writing the tests after the code was done, I encountered an error that would make my whole test suite crumble. I noticed it was because the foundations of that suite were not stable to begin with.
When you write tests after you have written the implementation, it is very common that your tests unconsciously take only the "happy path", those places in your code where everything is green, nice, and new. You avoid the test cases that are hard to test, simply because your code was not built to be tested in the first place.
This situation showed me the way to the third reason: NO FEAR OF EXTREMES.
TDD forces you to think about the tests first. With this mindset you can fully concentrate on the edge cases; the things that would normally break your software get magically covered. You will find a new kind of fun in hunting down those edge cases and making sure they first go Red but, in the end, go Green.
Moving in Together
Lastly, and this one is related to the previous point. When I first started writing tests, my job as a software developer became one of the most boring and tedious tasks in the world, and I cursed the moment I committed myself to writing them. That boredom pushed me to avoid the paths that were very hard to test. But then TDD came to my rescue: by writing the tests first, seeing them fail, and then fixing them, I developed an addiction. Don't worry, it's a good one. I became addicted to finding errors in my code and fixing them, and knowing with certainty that those bugs would not show up again, or at least not in the same way, was what closed the deal. Basically, TDD MADE UNIT TESTING FUN.
Since then I can't imagine my life as a developer without writing unit tests; sometimes I even write tests to reproduce bugs instead of trying them manually.
This addiction I mentioned even moved me to create a VSCode extension for running Angular unit tests easily, in order to follow the fast cycle of Test-Driven Development.
I really hope that after reading this article you will be interested in giving TDD a chance. Not gonna lie, it is a hard path; I still have to remind myself all the time not to go off the plan and keep writing implementation when I should be writing the next test. But with practice this road becomes much easier and the benefits much clearer.
What do you say, are you ready to take this road with me?
Top comments (33)
My issue with TDD is the time component, and the PMO always has immovable deadlines...
The typical workflow is that you sit in a planning meeting, talk through the tasks, figure them out, and put story-point estimates on them. Back at your desk, you pick up the first task in the TODO column and recheck the requirements. Work begins.
While working on the task, previously unknown parts of it start to get clarity. The complexity of the task changes (but the estimate never does). At some point, a BA comes over with "oh by the way, they don't want a Ferrari, they want an airliner, you can do that in the same timeline, right?" So we have to refactor everything.
If I used TDD, I'd have to refactor both the tests and the code that fulfils the requirements. If I write unit tests as I go, I have to refactor the code and the tests. If I write the tests last, I only have to refactor the code I've been writing.
Tests are the first thing that PMO sacrifices for the timeline, because they're only seen by developers, and we pay QA for testing anyway, right? So why are we paying devs to test too?
Refactoring more things takes more time, especially when requirements exist in a shifting sands environment.
Developing without tests will save you time in the short term, no question.
However, having a test suite will save you time in the long run, because it can catch bugs immediately.
In addition, it will let you ship with confidence. You can literally prove that the tested workflows work.
To me, this psychological effect is even more important, and it lets me sleep at night ;)
Nowhere in my post did I say that products ship untested.
The debate is about either writing tests before code, or code before the tests (TDD or not).
Hey Dave, sorry, maybe I misunderstood. Then what did you mean by this:
"Tests are the first thing that PMO sacrifices for the timeline, because they're only seen by developers, and we pay QA for testing anyway, right?"
Here I understood that in some cases the PMO decides not to test in order to meet deadlines. Is my understanding of your answer incorrect?
Dave, fair enough, maybe I misunderstood your post.
However, from my (and my team's) experience, writing the production code before the tests will often lead to code that is simply hard to test.
I don't follow TDD strictly, but I certainly see its value, especially for bug fixes.
No argument about TDD for bug fixes - you get clear, unchanging requirements in a bug fix. I personally follow TDD when fixing bugs.
Re the untestable code - I think that's something that just takes time. We have a few guys who write the code first and then have to refactor it to be able to write unit tests.
But (in our case) you can hit the minimum coverage targets by writing integration tests, and somewhat disregard good OO design principles and code patterns, and it'll still pass a build (though we tend to complain loudly in code review about future maintenance, and usually a PM interjects with "raise a tech debt ticket, we'll fix that in future, I promise" 🤔)
PMO doesn't decide anything (our PMs can barely decide what they have for breakfast).
However, it's a fact of life that because the requirements change mid sprint, the estimates are invalid without the expectation of delivery changing. This usually ends up in a negotiation, where the PM will flex the deadline a little and the Dev team have to flex a little from the ideal perfect world.
Specifically in my office, the CI server will throw an "amber warning" if code coverage is below 80% and we all try to keep it green. But if we have a tight deadline, we can sacrifice 20% test coverage and still release (the CI server won't let anyone release anything below 60%).
So, given the requirements change (other than bug fix), and deadlines are always tight, test coverage is an easy sacrifice - though that doesn't mean we don't write tests.
I'm talking from real world experience... so I don't think that calling me a developer who doesn't care or know better is particularly helpful here.
Take bug fixing for example. Someone finds a bug & raises a ticket. First step is to confirm how the system SHOULD work in that scenario. Oftentimes that's obvious, sometimes it needs a chat with a BA/stakeholder.
That done, I write a test for how the system SHOULD work, and the test clearly fails. Then I fix the bug, and the test passes.... TDD in action.
New feature development is completely different. Spike research is another area where TDD would be impossible.
As a developer, I try to always use the appropriate tools at the appropriate time. TDD has its uses, but it isn't a panacea.
I really need to check my words before answering; again, I didn't mean anything I said as an attack on you.
Don't worry, and please, don't delete anything on my account (I already see deleted messages in the thread).
Given your name, and the content of the TDD article, it's clear that while your English is good, it's not your first language (I'd guess you're Italian, and while I get by in written & spoken Italian, trust me, your English is better than my Italian).
In the spirit that I believe Dev.to was started, the debate is a good one to have - and with any good debate, there will be differing opinions.
Argentinian, but yeah, Italian parents ahah. Also, I am not into deleting conversations, but that message didn't add anything to the argument; I felt it was just noise, since you had already answered that in other replies.
Well, I have a few strong opinions here. First, why is the PMO deciding whether you write tests or not? I have seen many developers who feel they are under the PMO. You, as a developer, provide the technical expertise; it is your call how you build the software, and tests are part of the development process. You wouldn't say "OK, skip design and estimation and just start developing", right? Developers need to put their big boy pants on and start taking responsibility for their choices. Second, if you are working and the requirements change, you keep doing unit tests while you refactor. The reality is that not all your tests are going to be thrown away; most of them will still apply, and having them will make that refactoring bulletproof, which is exactly what you need. Third, if the timeline doesn't fit, cut features and deliver less but better. Believe me, no PMO will argue with you if you say you are going to deliver less because the timeline is not realistic, but everything will work very well. I work in a company that three years ago was exactly like the example you describe. It took us three years, but now we have more power: we decide if it's possible, we decide the scope of the release based on the deadlines, we decide that everything should be tested. It was our will and effort that made that change.
One last thing: it was also very important to make people understand that estimates are just that, estimates. They are not perfect; by definition you are "guessing" how long something will take. If you start the task and realize it is going to take more time, that is fine: communicate with your team and re-plan.
All lovely sentiments. I wish I lived on the same planet.
Re the size of my pants - I refuse to release anything to QA without a minimum of 60% test coverage (I won't get into the intricate debate here of how to measure). Doesn't matter if I wrote it or someone else on my team wrote it - no minimum coverage, no release.
PMO often push for a full feature set, on the timeline they demand. Same in every company I've worked for. So I made a point of having the CI server enforce code coverage, so no-one can push an untested release.
Testing, or not testing, wasn't the debate (nor were the size of my pants).
The point is that TDD requires perfect requirements (or a locked sprint); otherwise you'll waste time by writing tests at the start, because the requirements will change.
We have someone on my team who loves TDD, advocates for it at every step, and then takes 5 times as long as everyone else to actually deliver.
I'm not speaking from an untested world here. TDD has only two valid use cases: bug fixing, and some dystopian perfect realm where requirements are written by a BA who understands the user base perfectly.
Hey Dave, thank you for answering back. I am sorry if my answer seemed like an attack in any sense, because that was not the idea at all. I wasn't talking about you specifically; a lot of developers I know (myself included) sometimes don't push back when it is our responsibility, and we let other parts of the team make decisions about building the product that should be ours.
I also worked in multiple companies where things happened the way you describe; it wasn't until my last experience that I understood we can all work together and improve that situation.
On the TDD part, I don't agree that you need perfect requirements. I work in an environment where requirements change all the time and features never end up as originally planned, and I still have not seen TDD run into trouble with this. In the long term we have produced much better code with a test suite written in a TDD manner (due to high test coverage and covering very weird edge cases). It is also very easy to change requirements and refactor when you have this already in place; changes are made with ease because you have a clear view of whether you broke previous features.
About the time it takes to develop: after the initial investment, do you have any evidence that the code the guys produced without TDD was safer, or even as safe, as the code produced with TDD?
I can only speak from my experience, but when we consciously used TDD in specific modules of our application, the bugs coming from those modules dropped drastically, due to the benefits I mentioned previously.
In my case I don't advocate that everyone should do TDD, I am just mentioning the reasons why I love TDD and why I do it. It happens that my team does TDD, but other teams in the company don't.
Re the general productivity of the TDD advocate on my team - pretty much everything they touch is a bug riddled mess (functional issues). It's so bad that most of the PMO won't even give them work to do.
But, see my post on code reviews... same person, and I'm confident it says more about that individual than it does TDD.
Re code coverage, you're entirely right, if we release at 60%, 40% is not covered by tests. But that doesn't mean we haven't manually tested those areas, or that we haven't given the QA team a heads up on where we are weak with tests...
My 2 cents.
Although what Pat says sounds more appropriate in theory, I have to side with Dave here, for he speaks the brutal truth.
TDD can be a great practice when you are, maybe, building your own small side project at home.
But when you work for an enterprise, you cannot practically do 100% TDD for any real business deliverables.
You should write tests to test a piece of logic, both positive & negative, if it makes sense to your business purpose.
With so many frameworks around these days, there is so much boiler-plate code, that it does not make any sense to follow a true TDD approach.
Also, in enterprise-level apps, more often than not, there is no need to test 100% of the plausible and implausible scenarios. Mostly, you know there are N known scenarios; you can write tests for those, both positive and negative, and then lump all of the remaining inconsequential scenarios into one single bucket of failure, or the bucket of "do not care".
With so much debate going on about true-blooded TDD, what I feel is that it is more important to understand the business value of what we are doing, and use our practical experience to apply the correct logic in solving that problem.
There is no point in solving a non-problem nobody cares about and writing innumerable test cases for it. Practically speaking, you can't possibly think of ALL the different scenarios for a given situation. Even if someone does, I am sure someone else can come up with one more. So let us not go down that rabbit hole.
Rather, let us put our energy into identifying what matters to the business and solve that, then move on to the next problem.
Unit tests are not optional; unit tests also make programming so much more enjoyable. Hacking with confidence, not by coincidence.
Indeed, it took me years to figure that out though 😔
Yes, I feel exactly the same; luckily that is where I am now.
Great article Patricio! I've been on a very similar journey (never having written a single test in the first few years of my career). One of my goals for next year is to fully embrace TDD. Although I know I should use it, my usage is still sporadic.
That said, the one service I have written recently in which I've fully embraced it is incredible. The service validates a set of data points, and currently there are 150+ different combinations and outcomes. I can refactor with confidence because I know all 150+ cases are coupled with a specific unit test.
Yeah, it's hard. I have been focusing on doing it and I still struggle, but as you said, when you start noticing the benefits in the things you've made is when you get hooked.
Loved the analogy with dating :)
Also 'legacy code is code without tests' felt very insightful.
Thank you for the post Patricio!
I am glad you liked it; indeed it is a very good one. I recommend reading Michael Feathers' book, "Working Effectively with Legacy Code"; it is game-changing.
I'm not against TDD by any means, but your initial issue, tests passing for a different reason, can often be prevented by having tests you expect to fail, e.g. expected to fail: assert add(2, 2) == 5. If this passes, there's something wrong with your add function. This is also useful for confirming you're using a test database rather than production, that you're accessing mocked elements rather than an internet resource, and so on. In general, grouping your tests into things that should and shouldn't happen is a helpful paradigm for me, and a gentle nudge that some of your tests might not be testing what you expect (gentle in that an unexpected pass might not fail a build).
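The commenter's idea can be sketched as follows. The `add` function is a stand-in invented for illustration, and the assertions are plain `if`/`throw` checks rather than any particular test framework:

```typescript
// Sketch of an "expected to fail" sanity check, using a stand-in add()
// function. If the deliberately wrong assertion ever *passed*, it would
// mean the suite is not testing what we think it is.

function add(a: number, b: number): number {
  return a + b;
}

// Positive case: should pass.
if (add(2, 2) !== 4) {
  throw new Error("add(2, 2) should be 4");
}

// Negative, expected-to-fail case: add(2, 2) must NOT equal 5.
if (add(2, 2) === 5) {
  throw new Error("add(2, 2) === 5 passed: something is wrong with add or the harness");
}

console.log("Sanity checks pass: wrong answers are caught");
```

Real test frameworks usually offer a built-in way to express this (for example, marking a test as expected to fail), but the principle is the same: a check that only trips when the suite stops measuring what it claims to.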
There are way too many programmers and not enough engineers. Software should be built the same way one builds a bridge. You would never build a bridge using TDD, so why build software that way?
I agree with what Daniel said in some sense: the fact that we're doing engineering doesn't mean we need to do things the same way. Also, when you build a bridge you usually build prototypes to calculate stress conditions and endurance against natural elements, all before laying the first brick. In this regard I think developing software is more similar to accounting, in the way we backtest our assumptions.
Have you ever received a requirement to reduce the size of one of the pillars of a bridge after it is constructed?
Because developing software is completely different?
Analogies can only help you so much...
Wasn't it time-consuming? For me it took 20 to 40% extra time, and it's also hard to get used to 🥴
You also save time on future bug fixes. In my experience, it's well worth the time.
Why did it take more time to write the tests first rather than last?
If you are comparing testing to not testing, sure, it takes longer, but the benefits are clear.