Jan Van Ryswyck

Originally published at janvanryswyck.com

Fast Feedback

Test-Driven Development is a discipline that has existed for almost two decades now. Unfortunately, to this very day, it is still not without controversy. Most professional developers know that writing some form of automated tests can be quite beneficial for any type of codebase. But what still seems to be somewhat controversial is whether to write a test before or after the production code has been laid out. This discussion seems to rear its head every other day on many discussion forums and on social media.

Some people firmly follow the mantra of “Red, Green, Refactor”, writing a failing test before making it pass by adding a line or two of production code. Others don’t like to follow this strict process for whatever reason and prefer to write tests after they’ve written part of, or even the complete, implementation. Personally, I like to write my tests first. But I would also like to emphasize that I’m not writing this article to pass judgement on anyone. I would just like to approach this topic from a slightly different angle.

Whether someone writes their tests before or after the implementation isn’t really a useful discussion. What I do believe to be valuable is the concept of a short feedback loop. This is what moves us forward. Before we dive into the topic of feedback loops, I would like to share a small anecdote.

I once interviewed a software developer who claimed to be a firm believer in TDD and the test-first approach. During the interview we sat down together and wrote some code. After I explained the requirements, he opened up a new source file and started writing a simple unit test for a new method on a domain class. This new method would ultimately verify some invariants and change some of the object’s internal state. The first unit test he wrote verified the happy path scenario. As this new method wasn’t implemented yet, the unit test obviously failed. Writing this failing test only took a couple of minutes. Then he switched over to the domain class. But he did not write just enough production code to make the unit test pass. He wrote the full implementation of the method according to the assignment! This took him about 20 minutes in total, writing lots of code. He then ran the single unit test from before, and it passed. Obviously there were several test cases that simply weren’t there. Not only was he not really following the process of “Red, Green, Refactor”, he completely missed the point of having a short feedback loop.

The term “feedback loop” is defined as follows:

“The section of a control system that allows for feedback and self-correction and that adjusts its operation according to differences between the actual and the desired or optimal output.”

— The Free Dictionary

For me, the most important aspect of a software system is its ability to provide feedback as fast as possible. Do we have a working system? Do all of its parts integrate correctly? Can we deploy the system right now? Does it run correctly in every environment in which it is deployed? Does the system serve its users well? These are all important questions. But what really matters is whether we can answer these questions in the blink of an eye. The same holds true while we’re actually working on the software itself.

Not only do we want to know whether the production code conforms to our current understanding of the business requirements. We also want fast feedback about the design of the system, as well as about the correctness of the unit test itself. We should never trust a test until we have seen it fail, regardless of whether we write the test first or after the fact. We should always make sure to see the test fail for the right reason. Writing tests first is more efficient than writing tests after the implementation because we start out with the premise of seeing a test fail. But I’ve also seen some developers, although not many, who have become quite proficient at writing tests after the production code while still ensuring another form of short feedback loop.

They first write a small amount of production code, then they write a test. They see the test pass, comment out the code they just added, and see the test fail. Then they uncomment the code and see the test pass again. Granted, this is somewhat less efficient and it might even take more discipline than simply following the traditional “Red, Green, Refactor”. But it still qualifies as a valuable feedback loop. So no judgement there.
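To make that rhythm concrete, here is a minimal sketch in Python using pytest. The Account class and its deposit method are made up purely for illustration; the point is only the loop itself: run the test, comment out the line that does the actual work, watch the test fail for the right reason, then put the line back and watch it pass again.

```python
# test_account.py — a minimal sketch (pytest); the Account class and its
# deposit method are hypothetical, invented purely for illustration.

class Account:
    def __init__(self, balance=0):
        self.balance = balance

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        # Commenting out the next line and re-running the test should make it
        # fail — proof that the test really exercises this code. Uncommenting
        # it should turn the test green again.
        self.balance += amount


def test_deposit_increases_balance():
    account = Account()
    account.deposit(100)
    assert account.balance == 100
```

Whether the test or the deposit method is written first, it is the cycle of seeing the test fail and then pass that provides the actual feedback.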

What’s most important is being able to quickly verify that we’re making progress, that we’re heading in the right direction. That is what I find so appealing about Test-Driven Development.

But why stop there? Test-Driven Development is the best approach we have on most of the well-known platforms that we use. But there are other forms of feedback loops that can be highly beneficial as well. Take the tooling around the Clojure programming language, for example. There is this practice called REPL-Driven Development, where a developer constantly executes small pieces of code in a REPL. Sometimes a few automated tests are added afterwards for regression or integration purposes. But most of the time, the calling code from the REPL is lost once the session has been closed. I find this process of REPL-Driven Development truly fascinating, as code is being exercised in mere seconds instead of minutes. This approach can be applied to most Lisp-family programming languages.
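The Clojure tooling integrates this workflow deeply into the editor, but the basic idea can be approximated in any language with an interactive shell. Here is a rough Python analogue (the parse_price function and the values are made up): define a tiny piece of code, call it immediately, and adjust until the output looks right.

```python
>>> def parse_price(text):
...     # strip a currency prefix and convert the amount to cents
...     return int(round(float(text.lstrip("$€")) * 100))
...
>>> parse_price("$12.50")   # exercise the code immediately, no test runner needed
1250
>>> parse_price("€0.99")
99
>>> # once the behaviour looks right, a few of these calls can be promoted
>>> # to automated tests for regression purposes
```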

If you’re interested in finding out more, this excellent article provides a good first impression of the process. I must admit that watching a developer who is proficient with Clojure work this way is even more impressive.

Summing it all up, for me the most valuable indicator when evaluating new technologies, methodologies and approaches is their ability to provide me with fast feedback. Learning about, discussing and improving these feedback loops is what matters most.
