Marcio Frayze

How I started practicing TDD

In this article I assume you have already read the basics of test-driven development. Here I describe some of the problems I faced when trying to internalize this practice in my daily life as a developer, and how I managed to get around them.

First steps

I discovered TDD when I was studying computer science, around 2003. I was reading about Extreme Programming and loved the ideas, but had a lot of difficulty practicing some of them, especially the test-first approach (also known as test-driven development). It took many years until I internalized it, but today I cannot understand how I ever developed software without this practice!

The purpose of this article is to help people who are interested in TDD but have not yet been able to include this practice in their daily lives.

Study concepts before libraries and frameworks

When I decided to apply TDD, my team already used JUnit for automated testing, but the tests were written after the production code. Since I already knew the tools and had a notion of the basic concepts behind TDD, it was natural to try to go straight to practice without first studying the theory further. And the result couldn't have been anything else: frustration. Changing the way I had worked for over 10 years was very hard and I gave up several times. I read articles on the Internet and the theory seemed interesting, but I couldn't understand how to put it into practice.

At some point I realized that I needed to devote more time to the theory, and two books helped me a lot to clear up several doubts. They are:

  • Test-Driven Development: By Example, by Kent Beck
  • Growing Object-Oriented Software, Guided by Tests, by Steve Freeman and Nat Pryce

So if you're determined to practice TDD, I suggest you start by reading these books. The first is quite short and objective, and explains the fundamentals very well. The second is more extensive, discusses TDD in depth, and helped me to finally understand how to practice it.

Developing test-driven software seems to be very easy, and can be summarized in just three simple steps:

  1. Write an automated test that fails;
  2. Write enough production code for this test to pass;
  3. Refactor the code and repeat the loop.

[Image: the TDD cycle]
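To make the loop concrete, here is a minimal sketch of one iteration in Java, assuming JUnit 5 and a hypothetical StringCalculator kata (all names invented for illustration):

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class StringCalculatorTest {

    // Step 1: write a failing test. StringCalculator.add doesn't exist yet,
    // so this doesn't even compile - that's the first "red".
    @Test
    void givenAnEmptyString_whenAdding_thenTheResultIsZero() {
        assertEquals(0, new StringCalculator().add(""));
    }
}

class StringCalculator {

    // Step 2: write just enough production code to make the test pass.
    // Returning a constant feels silly, but the next failing test
    // (e.g., add("1,2") == 3) will force the real logic to emerge.
    int add(String numbers) {
        return 0;
    }
}
```

In this first iteration the refactoring step has nothing to do yet; as more tests arrive, duplication appears, and that is the moment to clean it up.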

Because the theory seems simple, it is tempting to neglect the study of the concepts and go straight to practice. Don't make that mistake. Study the principles and fundamentals further and everything will become much easier. Learn from those who have explored the subject far more deeply and paved the way for us.

My recommendation is that you read at least the first book I mentioned above. You will spend only a few hours on it, and it will save you many, many days of frustration. As soon as you have more time, you should also read the second book. It's hard to describe how much I learned from it; it completely changed the way I develop software.

ALWAYS write the tests first

In the beginning I had great difficulty understanding the importance of obeying the order of the TDD cycle. It was common to code the business logic first and then struggle to write the tests. I knew I wasn't practicing TDD, but I thought the end result would be the same; after all, I was writing the automated tests, just not in the order established by TDD. It took me a while to understand that I wasn't letting the tests guide the design of my code, and how much I was missing out on.

And when I finally wrote the tests, it felt like fighting with the code, and I had the feeling that I was introducing bad practices into it. I did things I knew were wrong, but they seemed like the only option! A recurring example: how do you test a particular piece of code in isolation if it is a private method? Should I change the method I want to test from private to protected or public? I ended up doing exactly that (wrongly) a few times. I always felt guilty and realized something was wrong, but ended up putting the blame on the test. I thought those architectural deviations were the only way to make the code testable.
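In hindsight, the way out was almost never to widen the method's visibility, but to listen to the test: logic that is hard to reach is usually asking to become its own class. A minimal sketch of that refactoring, with hypothetical names and JUnit 5:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Before: this rule was a private method inside a hypothetical OrderService,
// and testing it seemed to require making it public.
// After: the rule lives in its own small class with a public API,
// and OrderService simply delegates to it.
class DiscountPolicy {

    // Hypothetical rule, for illustration: 10% off orders of 100.00 or more.
    double discountFor(double orderTotal) {
        return orderTotal >= 100.0 ? orderTotal * 0.10 : 0.0;
    }
}

class DiscountPolicyTest {

    @Test
    void givenALargeOrder_whenApplyingThePolicy_thenTenPercentIsDiscounted() {
        assertEquals(15.0, new DiscountPolicy().discountFor(150.0), 0.001);
    }
}
```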

But as I developed the software in that way, the test coverage dropped and the internal quality of the software fell along with it. As the architecture of the system became less and less testable, it snowballed until it became very difficult to create new tests at all. In the end it was a very frustrating experience and I saw very little benefit in working this way, since the gains did not offset the cost.

I only understood this better when I read the book Test-Driven Development: By Example and started obeying the correct flow of TDD. But resisting the urge to write "just one more line of code" before there is a test is harder than it sounds. Practicing TDD requires discipline. Be patient and resist the temptation to get ahead of the tests. Even if you're sure how the code will look in a few minutes, stop implementing production code as soon as the test becomes green. Go to the refactoring step (removing duplicate code) and then go back to creating a new test. Create a test that fails and only then move on to the next task. You'll be surprised how many times the tests will guide your architecture in a different direction than you had originally imagined.

After a while you will internalize this flow, but the first few times you will need to be very focused. This is usually easier when working with pair programming or mob programming, as other people will help remind us to avoid this common mistake.

Don't shoot the messenger

If it starts to get too hard to create a new test, you may have underestimated the refactoring step, and it's a good time to analyze your architecture. Don't get mad at the test! On the contrary, be glad to have found a point that needs attention; if it weren't for the test, there might be much bigger problems in the future. The sooner we find and fix design flaws, the easier and cheaper they are to repair. And the practice of TDD is a great way to do this.

Don't use advanced testing tools

It is quite common to find beginners in TDD trying to use advanced tools to create complex test doubles (mock objects), using reflection, and even trying to mock static or private methods. Don't do this! These tools may be useful in some very rare cases, but in 99.999% of cases the problem lies in the system architecture, and after fixing those flaws we no longer need such invasive techniques. The better the architecture, the easier it is to create automated tests.
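A classic example of this: instead of reaching for a tool that mocks the static LocalDate.now(), let the test push the design toward receiving a java.time.Clock. A sketch with hypothetical names (the fixed clock comes from the standard library, so no mocking framework is needed):

```java
import java.time.Clock;
import java.time.Instant;
import java.time.LocalDate;
import java.time.ZoneOffset;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertTrue;

// Hypothetical service: instead of calling LocalDate.now() directly
// (which would tempt us to mock a static method), it receives a Clock.
class InvoiceService {
    private final Clock clock;

    InvoiceService(Clock clock) {
        this.clock = clock;
    }

    boolean isOverdue(LocalDate dueDate) {
        return LocalDate.now(clock).isAfter(dueDate);
    }
}

class InvoiceServiceTest {

    @Test
    void givenAPastDueDate_whenChecking_thenTheInvoiceIsOverdue() {
        // A fixed clock from java.time itself: deterministic and simple.
        Clock fixed = Clock.fixed(Instant.parse("2023-06-15T00:00:00Z"), ZoneOffset.UTC);
        assertTrue(new InvoiceService(fixed).isOverdue(LocalDate.of(2023, 6, 10)));
    }
}
```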

Learn software architecture and clean code

As I said just above, system architecture plays a very significant role in creating testable systems. So you must study good programming practices. Learn how to write clean code, understand the SOLID principles and how to create a good software architecture, and apply all of this to your code. Below I list some sources that helped me a lot on this journey:

  • Clean Code — A classic written by Robert Cecil Martin (a.k.a. Uncle Bob). In it you will understand the fundamentals of how to write code that is easier to maintain and refactor.
  • Clean Architecture — From the same author as Clean Code. It will help you better understand what a good software architecture is and how to create one. I recorded a podcast (in Brazilian Portuguese) where I talk about this book.
  • Refactoring — Written by Martin Fowler, another software development classic. One of the main steps of TDD is refactoring your code, and this book is the natural reference for understanding how to do it in the best possible way.
  • Building Evolutionary Architectures — I also recorded a podcast about this book. This work will help you understand how to build an evolutionary architecture. You can also watch this talk by Rebecca Parsons, CTO of ThoughtWorks and one of the book's authors.

Follow the principles of YAGNI and Last Responsible Moment

Another tip is to always follow the principle of you ain’t gonna need it, known by the abbreviation YAGNI. You should also follow The Last Responsible Moment principle.

By following these two principles, along with TDD, we are able to postpone decision-making until the last responsible moment, gathering as much information as possible so that when we really need to make a decision, it is as informed as possible.

“There are things we do not know we don't know.” — Neal Ford.

Even if you are sure that a particular database model will be the best solution, that a particular technology is the most appropriate for the problem, or that a certain architecture is the best fit, resist the temptation to make that decision before the last responsible moment arrives.

There may be several factors you don't yet know about that could change the whole decision. Perhaps at the beginning of the project a NoSQL database seems to be the best alternative, until an impediment appears midway that makes this database model no longer a good solution. If your team made the decision to use NoSQL prematurely, the cost and pain of switching to another solution will be much higher than if you had simply waited until the last responsible moment.

You can find more information about the concepts of the last responsible moment and evolutionary architecture in this article by Neal Ford and Rebecca Parsons.

Run tests automatically when saving any file

The technique that helped me the most in the beginning, and still helps me today, is configuring the build tool to run all the tests automatically whenever a file is saved.

I leave a window on my secondary monitor running the tests at all times, while I keep the IDE or editor on the primary monitor. When I'm working with only one monitor, I leave a terminal open at the bottom of the screen running the tests. This gives me two great advantages:

  • I don't forget to run the tests often;
  • I get notified as soon as a change causes an unwanted side effect that breaks a test.

If you are programming in Java, Gradle has a parameter that does this for you. If you are using any other build tool, look in the documentation and you should find something related to the term watch. When I'm programming in Elm, for example, I use elm-test together with its watch flag. In some cases the entr tool can help with this task.
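For reference, these are roughly the commands I mean; treat them as a sketch and check your tool's documentation, since flags vary between versions:

```shell
# Gradle: re-run the test task whenever an input file changes
./gradlew test --continuous

# Elm: elm-test has a watch mode built in
elm-test --watch

# entr: a generic alternative for tools without a watch mode
find src -name '*.java' | entr ./gradlew test
```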

I strongly recommend that you start this way. Of course, on systems with many tests you will need to adapt this model, since running all the tests can be very time-consuming and require a lot of resources (memory and CPU).

To this day I still use this technique, and I couldn't count the number of times it has helped me identify problems early and saved a lot of time.

It is not enough to automate the execution of the tests

Some developers say they are implementing unit tests simply because they are using a library (such as JUnit or NUnit) that makes it easier to run their code. But in practice, all they're doing is making it easier to execute a function or method, as if it were a main function. This cannot be called automated testing, let alone unit testing.

I worked on projects that had supposedly high test coverage, where the team even used analysis tools such as SonarQube, but strange things soon surfaced, like tests that had no asserts! The only way such a test could fail was if an exception occurred during its execution; otherwise it always ended successfully. Another common mistake was tests that printed things to the console (usually the result of the execution, to be checked by eye). In short: they were great examples of how not to do automated testing.

Asserts are a fundamental part of testing. They are the point at which we validate the outcome of the test's execution. When this is not done properly, we run the risk of creating false positives, false negatives, or very fragile tests. So don't be in a hurry when writing the asserts. They can't be too generic, but they can't be too fragile either.

And never print anything to the console. It is the asserts' job to indicate whether the test ran successfully or not; in case of failure, the test framework itself will show where the problem occurred. That's also why it's very important to give meaningful names to the tests. I use and recommend the Given-When-Then pattern.
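A small sketch of what I mean, assuming JUnit 5 and hypothetical domain classes:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Hypothetical collaborators, just enough to make the test meaningful.
class MailQueue {
    private final Deque<String> templates = new ArrayDeque<>();
    void enqueue(String template) { templates.add(template); }
    String lastTemplate() { return templates.peekLast(); }
}

class Registration {
    private final MailQueue queue;
    Registration(MailQueue queue) { this.queue = queue; }
    void register(String email) { queue.enqueue("welcome"); }
}

class RegistrationTest {

    // The name follows Given-When-Then, so a failure reads like a
    // sentence in the test report - no console output needed.
    @Test
    void givenANewUser_whenRegistering_thenAWelcomeEmailIsQueued() {
        MailQueue queue = new MailQueue();
        new Registration(queue).register("ana@example.com");

        // Not too generic (assertNotNull would pass for almost anything),
        // not too fragile (we don't compare a whole e-mail body).
        assertEquals("welcome", queue.lastTemplate());
    }
}
```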

Prioritize unit tests

[Image: the test pyramid]

We can separate automated tests into three categories: end-to-end, integration, and unit tests. Ideally, a team that practices TDD should create all three types. The most common question is whether it wouldn't be enough to create only end-to-end tests and test everything at once. But each type of test has a different goal, and I recommend that you start with the unit ones.

I will not go too deep into this topic, but I would like to leave a warning here: do not prioritize end-to-end tests. I lost count of how many times I saw teams focus only on end-to-end tests and then, a few months later, when the suite was starting to get too big (with very fragile and slow tests), abandon the tests altogether. If you find yourself in this situation, I recommend reading this article by Ham Vocke. He explains the issue very well there.

The test should guide the development

When I first came across the expression "test-guided software development", I put a lot of emphasis on the idea that I would be testing my software in an automated way, and paid little attention to the part about guiding the development.

And test-guided software development (TDD) is, as the name implies, about software development. Don't forget that, and let the tests guide you. Even if it sometimes seems like you're going in the opposite direction to what your gut is telling you, give the tests a chance and see where they're trying to take you. It's not uncommon to end up on a much shorter, simpler, and clearer path than the one you originally had in mind.

Don't give up on the first obstacle

TDD is a practice, and every practice requires training, commitment, and a dose of perseverance. Like learning a new musical instrument or riding a bike, it requires study and dedication. Sometimes I still fall off the bike or miss a beat on the drums in rehearsal with my band. And the first few times I tried to apply TDD, I had a hard time finding the best names for the tests, didn't know how to implement the asserts, and took a while to figure out how small the scope of a test should be, among many other difficulties.

I would like to say that I eventually found the best way to do all this, but the truth is that, just like learning to play an instrument, my practice of TDD gets better as I train, and every day I find a better way to apply it. That's why I keep experimenting, making mistakes, and trying again. And of course, studying! It's not enough to just practice. Progress is much faster when we learn from people who have been practicing much longer than we have. So my suggestion is, at first, to err on the side of excess. Follow TDD by the book and only write a new line of code if it is covered by some previously written test (whether unit, integration, or end-to-end).

Probably the hardest part of achieving high coverage is the infrastructure-related layers (such as the methods responsible for accessing the database). It is possible (and recommended) to create tests for these parts of the code, but they may require a little more in-depth knowledge. To implement them, your architecture must have a reasonable level of maturity. If your team knows how to apply the SOLID principles (in particular, the dependency inversion principle), these tests become much easier to implement, as sketched below. If you're having a hard time increasing the coverage of certain layers of your system, first try at least to make sure that the business logic is fully covered.
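The usual shape of that solution, sketched below with hypothetical names: the business logic owns an interface (dependency inversion), the real database implementation lives at the edge of the system, and the tests plug in an in-memory fake.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// The business logic owns this interface (dependency inversion):
interface CustomerRepository {
    Optional<String> findNameById(long id);
}

// Production would have something like a JdbcCustomerRepository at the
// edge of the system; the tests use this in-memory fake instead.
class InMemoryCustomerRepository implements CustomerRepository {
    private final Map<Long, String> data = new HashMap<>();
    void save(long id, String name) { data.put(id, name); }
    public Optional<String> findNameById(long id) {
        return Optional.ofNullable(data.get(id));
    }
}

class GreetingService {
    private final CustomerRepository repository;
    GreetingService(CustomerRepository repository) { this.repository = repository; }
    String greet(long customerId) {
        return repository.findNameById(customerId)
                .map(name -> "Hello, " + name + "!")
                .orElse("Hello, guest!");
    }
}

class GreetingServiceTest {

    // The business rule is fully covered without touching a real database.
    @Test
    void givenAKnownCustomer_whenGreeting_thenTheNameIsUsed() {
        InMemoryCustomerRepository repository = new InMemoryCustomerRepository();
        repository.save(42L, "Ana");
        assertEquals("Hello, Ana!", new GreetingService(repository).greet(42L));
    }
}
```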

There are tools that generate test coverage reports, some even integrated with IDEs. These reports show which lines have been exercised, the percentage coverage of each class and package of your system, and other information that can help you identify places that need more attention. Just be careful not to lean on these reports too much! Use them sparingly and responsibly, and never as a management metric. Brian Marick wrote this article in which he explains in detail the care needed to use metrics in a healthy way.

The definition of legacy code

There are several ways to define legacy code. Before encountering TDD, I interpreted it as a synonym for old, outdated code, developed with technologies no longer used in new systems. But Michael Feathers has a slightly different definition:

"To me, legacy code is simply code without tests." - Michael Feathers, Working Effectively with Legacy Code.

This definition made an immediate impact on me. The code I had written five minutes before reading it had no tests. I had just written legacy code! And, at least for me, this definition makes a lot of sense. It also makes it very easy to decide whether the code in front of me is legacy or not. I just have to answer one question: does this code have automated tests?

For your sake, for the mental health of your colleagues and for the sake of the product, let's stop writing legacy code.

Don't be afraid to revert the code when you're having difficulties

I used to spend a lot of time trying to figure out why my code wasn't working. I would write several lines of code, execute them, and the result wasn't what I expected. So I would change the code without much feedback, re-execute it... and the vicious cycle repeated. It was common to spend a lot of time (hours, or in extreme cases even days) with the software in an inconsistent state, not behaving as expected. It took a lot of hammering before I could mold it into something close to what I wanted.

TDD changed this routine completely. Applying test-guided development, continuous integration, and other Extreme Programming practices, when I feel like I'm having a hard time compiling the project or implementing the code to make a test pass, my reaction is to drop all the changes, go back to the previous state (when all the tests passed), write a test with a smaller scope, and continue the flow.

Having a lot of difficulty solving the problem you are facing is a sign that the test you wrote is too broad and you are taking a bigger step than you can handle. In this situation, it is best to take a step back. Go back to the state where all the tests were passing (discarding all your changes) and write a test with a more limited scope, then follow the TDD cycle once again. If you still feel you are not managing to move forward, go back again and try an even smaller step, until you can advance safely (with the tests passing). Then refactor your code and write the next test, taking another step towards your goal.
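In practice, this "step back" is just a version-control command away. A sketch assuming git, and assuming your last commit was a green state:

```shell
# discard all uncommitted changes and return to the last green commit
git restore .          # on older git versions: git checkout -- .

# or, if you'd rather keep the failed attempt around just in case:
git stash
```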

The moment I decided to follow TDD

As I said, I learned about this practice when I was still in college, but it was only more than a decade after graduation that I began to practice it.

I was on a project where I had the opportunity to develop a very isolated part. It was almost like a new project, as it depended on almost nothing from the rest of the code base. I decided I'd let the tests guide the development of this part.

I had very few difficulties and everything was fine, until I had to use a slightly more complex test double. At this point I had already read the two books I mentioned above (Test-Driven Development: By Example and Growing Object-Oriented Software, Guided by Tests). I could understand the theory, but it took me a while to conclude that what I needed in that situation was a spy.

The system had some problematic integrations, and we decided to create a decorator that would wrap these calls in order to monitor them. If a certain problem was found, an alert would be issued. It was basically a simple solution for monitoring integrations.

In a first iteration, it would only print some information to the system log. The temptation not to test this one line of code that would issue the alert was great, I confess! But I had committed to letting the tests guide me... and the tests were telling me that simply putting a "System.out.println" in the middle of the code was not a good idea. Against my own will, I decided to follow the tests. I created an interface to specify the entity that would emit the alert, and for the tests I created a spy from this interface. Thus, using the idea of dependency inversion, I decreased the coupling of my code. The business rule was no longer coupled to the concrete implementation of the alert emitter, and it no longer needed to know how the alert would be issued, since it was coupled only to the interface. But of course, before all this, I created a unit test that helped me find this solution.
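A sketch of the shape that solution took (the names are illustrative, not the real project's code):

```java
import java.util.ArrayList;
import java.util.List;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// The interface the business rule depends on (dependency inversion):
interface AlertEmitter {
    void emit(String message);
}

// First production implementation: just prints to the system log.
// Later it could be swapped for one that posts to a message queue.
class ConsoleAlertEmitter implements AlertEmitter {
    public void emit(String message) { System.out.println(message); }
}

// Hand-written spy used only by the tests: it records every alert
// so the test can assert on what was (or wasn't) emitted.
class AlertEmitterSpy implements AlertEmitter {
    final List<String> emitted = new ArrayList<>();
    public void emit(String message) { emitted.add(message); }
}

// A minimal stand-in for the decorator described above (hypothetical name):
class IntegrationMonitor {
    private final AlertEmitter emitter;
    IntegrationMonitor(AlertEmitter emitter) { this.emitter = emitter; }
    void callFailed(String integrationName) {
        emitter.emit("Integration failed: " + integrationName);
    }
}

class IntegrationMonitorTest {

    @Test
    void givenAFailedIntegrationCall_whenMonitored_thenAnAlertIsEmitted() {
        AlertEmitterSpy spy = new AlertEmitterSpy();
        new IntegrationMonitor(spy).callFailed("billing-api");
        assertEquals(1, spy.emitted.size());
    }
}
```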

A positive side effect of this approach was that in the future, when instead of simply printing to the console we decided to use a more complex mechanism (such as posting to a message queue), I would just need to inject another concrete implementation of this interface, without changing any line of my business code.

Up to this point everything was fine, and we were close to deploying this functionality to production. But a few days later I had to change the logic of when the alert would be issued. I followed the TDD ritual: I changed the test for that functionality (which started to fail), corrected the implementation (making the test green), and then refactored the code.

But suddenly one of the tests failed. I was sure that my changes had not introduced any errors into the code. And I was so sure that my first reaction was to think the test was wrong. I opened the test class and looked at the git history. No recent changes. I reopened the code I was refactoring, rolled it back to an earlier version, and the test passed. At that moment I admitted to myself that I had made a mistake during the refactoring. I ran a diff between the new code and the versioned one and bingo! I had unintentionally removed a line. Without that line, the code kept compiling and running normally, but it would never issue any alerts!

I would never have detected this problem in the development or staging environment, and maybe not even in production. Since the alert was supposed to be sporadic, we would not have noticed its absence. And this would have hidden a lot of problems that were occurring in production and that we only detected thanks to this alert. At that moment I realized how much time and headache the test I had hesitated to write had spared me. And I felt firsthand the advantages of test-driven development.

It's not normal to be afraid

Are you afraid to deploy something to production? Or to change parts of the system? In some companies this fear is already part of the routine. But this cannot be considered normal. It's a symptom of a problem and needs to be treated. Never ignore your fears, or those of any member of your team.

If you are not sure your change will work, or you are afraid it might impact another pre-existing functionality, the most effective way to address this is through test-guided development.

"Tests are the philosopher's stone of the programmer, transmuting fear into boredom", Test Driven Development, By Example, Kent Beck.

In projects that do not have any kind of automation or testing, it may seem very difficult to reach this stage. And in some larger systems, it can be tricky. But by taking it one step at a time, your team will get there one day. If this is your current scenario, Michael Feathers' Working Effectively with Legacy Code may help you find good ways to improve your code.

Less debugging, more automated testing

As I started developing software guided by tests, I noticed that the number of times I use the IDE's debugger has decreased dramatically. Rather than wasting time putting together test data to run the application manually, finding the best place to add a breakpoint, and closely tracking each step of the execution, it became much easier and more natural to create an automated test scenario. Whether it's a unit, integration, or end-to-end test, the vast majority of the time I can create a test that reproduces the scenario I need to analyze.

Using a debugger is a task that (at least for me) is quite stressful. To simulate an error even once, I had to generate the data by running scripts against my database, or even change the data of the systems I integrate with, just to set up a single scenario. Once that was done, if the breakpoint was in the wrong place, or I pressed step over when I should have stepped in, the data would be destroyed and I had to start all over again.

Besides being time-consuming, it is quite a delicate task, which generates stress. Who has never let out a swear word when, after a lot of work, the IDE runs past the breakpoint line where we expected it to stop?!

Another advantage of automated tests over the debugger is that, in addition to being much easier to run multiple times, a test ensures that the problem being fixed will not come back in the future, serving as a regression test.

Conclusion

Practicing TDD is harder than they say, but also more rewarding than it sounds. Some people don't adapt to this way of working (like the famous DHH case), and maybe you and your team will also come to the conclusion that this model isn't right for you. But there's only one way to find out: practicing and giving TDD a real chance. And for this experiment to be fair, it requires study, training, and dedication. But I promise that even if, in the end, you conclude that TDD is not for you, you will surely learn a lot of useful things in the process, regardless of the model you decide to follow afterwards.

Top comments (2)

dyagzy

I have never written any TDD before and I don't know what it is all about. My senior colleague first mentioned TDD to me just a few days ago, and I decided to look up a few articles about it here, which is how I found your post. I must say your article is a gold mine. It is filled with so much background information necessary to understand the concept of TDD.

My stack is C# ASP.NET, and I would like to ask you to recommend relevant resources for a beginner to TDD with a C# background.

Oh yeah, before I forget: thanks for the two books you recommended; I'm definitely getting and reading both.

Thank you once again for putting together this detailed article on TDD.

Marcio Frayze

Thanks for your feedback! I really appreciate it.

I have a Java background and have never worked in the .NET world, but the two books mentioned will give you a great overview anyway, since C# is an OO language.

I recommend you start really small, with some code katas (e.g. FizzBuzz is really easy, and some people use it to introduce TDD concepts: en.wikipedia.org/wiki/Fizz_buzz). My advice is to solve the problem without TDD first, then try again using TDD (making sure the tests are guiding you) and see the difference in the results.

And if your colleague is available, ask him for help and feedback if you get stuck. It's always more fun to learn with a mentor.