
One of the biggest "aha" moments I've had came while working on a legacy codebase with very few unit tests.

I needed to change the behavior of a couple of methods, but I wasn't super sure what their current behavior was to begin with.

A lot of folks would reach for their debugger and spend the next while meticulously stepping through the code. But that felt slow to me...

So I put together a suite of tests around them (prior to making any code changes) to verify and document what they were really doing. Once that was done, the overall change was as simple as adjusting a couple of assertions, causing those tests to fail and reimplementing the methods to meet their new expectations.
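This approach is often called a "characterization test": you pin down the code's current behavior before changing anything. A minimal sketch in Python, where `legacy_format_price` is a hypothetical stand-in for one of those legacy methods:

```python
# A hedged sketch of characterization testing: before changing legacy
# code, write tests that document what it does *today*.

def legacy_format_price(cents):
    # Quirky existing behavior we want to pin down before changing it:
    # negative values are silently clamped to zero.
    if cents < 0:
        cents = 0
    return "$%d.%02d" % (cents // 100, cents % 100)

def test_characterize_legacy_format_price():
    # Assertions chosen by observing current output, not by reading a spec.
    assert legacy_format_price(1999) == "$19.99"
    assert legacy_format_price(5) == "$0.05"
    assert legacy_format_price(-100) == "$0.00"  # surprising? now it's documented

test_characterize_legacy_format_price()
```

Once these pass, the actual change is exactly what the comment describes: adjust an assertion, watch it fail, and reimplement to the new expectation.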

It's one thing to see TDD as a way to confidently write a brand new feature, it's a whole other ballgame to see it as a way to confidently navigate and tease apart legacy code. It was like being in a super dark tunnel, and finding a flashlight. Literally everything got easier.

And don't even get me starting on working on an old codebase with a good suite of tests. Pure. Bliss.


Same here. I was implementing an embedded client of a client/server system. I did write tests, but not TDD. Then our backend developer left and I needed to take over the code. It was a complete mess: slow, spaghetti code, and no tests. A developer horror story. So I wrote test after test to freeze the current behavior, found and fixed tons of bugs, and threw away half of the code after refactoring.
From that moment on I was in love with TDD. The next project I did was all TDD: Zero bugs in production.


Fantastic example. Sounds like a great way to understand exactly what the code is doing, document it, and keep it safe for future devs.

Can't say I've ever taken this approach before, but I'm gonna try it out next time I'm working on modifying some legacy code.


Very well said. I agree. I love the peace of mind I get from having a set of tests that fully exercises all the code I'm working on, or some legacy code I'm trying to understand.


I'm kind of a coding-puzzle fanatic, and CodeWars has TDD built in. I've found it's way easier to teach people TDD after they've used CodeWars, because they've already used testing!


Oh no, another seemingly great source of knowledge for me to sink into, rather than study! I'll check this out in the next few days, thanks!


Initially, it didn't click, because I was introducing testing to legacy code, which wasn't very test-friendly...

The first time we had a brand new class in the code that was started with testing in mind, it was amazing.

And then what really brought the point home was when we started changing that class and breaking tests (in a good way). That feeling of confidence while introducing breaking changes was what got me hooked.


I don't use TDD in my normal coding workflow because it's not something that makes sense in that context (still waiting for that aha moment, but I expect it will never come).

For fixing bugs, though, it clicked when a more senior developer ran me through the process: crystallising the conditions that caused the bug made it seem very appealing.

For those who regularly use TDD, can you explain how it's meant to work for greenfield work?


I'm currently working on a project which we started back in Jan 2015, greenfield, with TDD through and through.

It's an API, and the first test we wrote was to successfully do a GET to an endpoint. Say /v1/user. Then we add the route, create the view and the serializer (it's a Django REST Framework API) and we get a response. Completely dummy at first, since the test only expects a 200_OK status_code. Then we add an assertion for the content, which must be an array. Now we create the model, with some fields, and make sure the wiring on the serializer is in place.
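The progression described above can be sketched without any framework at all; the `get_users` function and its response shape below are hypothetical stand-ins for the DRF view, not the actual project code:

```python
# A framework-agnostic sketch of the test-first progression described
# above (endpoint and data are made up). Each "step" is an assertion
# written before the code that makes it pass.

def get_users():
    """Stand-in for GET /v1/user. Returns (status_code, body)."""
    users = [{"id": 1, "name": "alice"}]  # fields added only once a test demanded them
    return 200, users

status, body = get_users()

# Step 1: the very first test only expects a 200.
assert status == 200

# Step 2: add an assertion on the content: it must be an array.
assert isinstance(body, list)

# Step 3: only now flesh out the "model" with fields,
# driven by an assertion on their presence.
assert all("id" in u and "name" in u for u in body)
```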

Later on, we would add tests that represent a bunch of business rules, like: if you create a Foo object, a related Bar object should exist in the database with a given set of characteristics. And that's how we keep the system evolving, adding new features and modifying behavior while making sure everything else stays in place.

EDIT: For a more practical example, you can have a look at this test file from a personal project of mine. Even though it's not based on the framework I mentioned above (this one uses Flask), I took the same approach with it: I started with the most basic test, then moved bit by bit, writing tests that would depict the outcome I wanted.


I can see how it would make sense for a REST API, where you have a defined endpoint and response format. That's actually not a use case I had considered TDD for; nice one.

My issue with TDD is how it's meant to apply, if at all, when you're adding a high-level feature, since TDD implies you write tests for the units (functions) first. How do you know what functions to have, or do you go full waterfall and do all the design up front? In that kind of case I go for TLD (test-last development), but TDD advocates really don't like that and I'm not sure why.

It's hard to describe it without a clear example, but I can assure you I have created and evolved dozens of high-level features using TDD. And, yes, it's true that TDD needs clear and defined requirements, but that does not mean that these requirements are final.

You see, if you have any given requirement, fully detailed or not, and you need to get to implementing it, you will always have a scenario to implement. If it was not defined by the Product Owner (or a similar role), you take a guess and go for it, right? So, you create a test for that scenario. Make it pass. Validate the feature with the P.O.

Oh, that's not what he had in mind? Then ask him to point out how it differs. Now you have a clear requirement. Modify the test. Change the code. Validate the work.

TDD, at its core, is about incremental change. Don't take giant leaps of faith; instead, take small, clearly visible steps.

Finally, if you could provide me with the scenario you have in mind I could help you out with it. Hit me up on Twitter. :)

I don't think I've ever heard TDD talk about incremental steps; the focus has always been at the unit level.

I don't have an example at the moment; this is mainly from my last job, where I couldn't figure out how to fit TDD in for new features. It was still very much a startup in how the day-to-day went, so devs were given a feature to implement and that was it. Aside from the requirement, there wasn't even a function to call, so you couldn't really write a high-level test at the start.

You seem to be confusing unit testing with TDD.

Unit tests are one type of tests, the bottom layer of the Test Pyramid, which also defines the Services and UI tests.

Whereas TDD means Test-Driven Development. Simply put, it says that you should write a test first, and only then write the code to make it pass. After that, you refactor, add a new assertion (requirement) to make it fail again, and write the code to fix it. It can be any kind of test: an integration test that talks to a third-party system, a unit test that mocks all external calls but asserts that the internals of an endpoint behave properly, or an automated UI test that validates user input and the error-message display feature.

It seems people these days have not read Test-Driven Development. They end up getting to know TDD from different places, with different levels of adoption, etc. I highly recommend that anyone read it. It is simple, clear and small, and on top of that it's super effective. I usually name it as one of my career-changing books (the other one being The Agile Samurai).

This is all good - one quick clarification: a Unit test does not necessarily test a function/class/object/whatever.

It tests a unit of behaviour.

TDD does not mean adding one test per function/class/whatever. It should mean writing a test that describes the behaviour of your program, and then writing code that implements that behaviour.
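A small, hypothetical illustration of testing a unit of behaviour rather than a single method (the `Cart` class is made up for the example):

```python
# Behaviour-level testing sketch: one test per observable behaviour,
# not one test per method.

class Cart:
    def __init__(self):
        self._items = []

    def add(self, name, price):
        self._items.append((name, price))

    def total(self):
        return sum(price for _, price in self._items)

# The behaviour under test is "adding items is reflected in the total".
# It exercises add() and total() together, without a separate test
# poking at each method's internals.
cart = Cart()
cart.add("book", 12.50)
cart.add("pen", 1.50)
assert cart.total() == 14.0
```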


Running isolated tests is easier and faster than running the whole application. I basically never run the application itself while developing new features, and I basically have no idea what the GUI looks like to get to the feature I am currently implementing.

TDD is likely a must with this approach.


That's a good thought. TDD might be less beneficial if test suite speed were not a thing. There are definitely tools (like Guard in Ruby land) that complement TDD really nicely.


Could you please elaborate, for dummies like me, on what Guard in Ruby land is?

Guard is a gem.

It has plugins like guard-rspec.

What Guard does is watch a number of files and restart your server when needed (changing the Gemfile, routes, etc.).

Guard-RSpec adds to that by watching your test files and automatically running them when you save.

> and restart your server when needed (changing Gemfile, routes etc)

I did not start the server in years. Why would I need to ever start a server when I have tests?


Remember those meetings where developers answer the infamous question "how long will it take?" with absurdly short times, and then deliver with absurdly inflated delays?

That is where TDD helps. You get a clear picture of what to build and how much time and effort it requires, and a solid understanding of whether what you build makes sense.

Looking back at those meetings where I also promised deliveries, what I really wanted all that time was to loudly ask a few questions:

  • what do you want from me?
  • can you edit all those tickets and actually specify what you really need?
  • how am I supposed to give an estimate when you do not know what you want or just take a stab into the fog?

Not happening these days thanks to TDD.


TDD gives you a clear scope of the things to come. It basically works like making a shopping list before going shopping.

You know the exact scope of what is needed, and you can clearly express when changes exceed the agreed boundaries and thus would negatively impact delivery time.

Are you saying you write your entire test suite before writing any code? I thought tests were usually written before writing each individual method.

No, you do not write your entire test suite first. That is impossible. You will never get a ticket, task, or project in a state which would allow this.

You write tests for as long as you can answer questions coming up. Then implement and verify. If that produces further questions, you produce further tests to implement and verify. Until all questions are answered.

Then you should reach a state where you can either clearly say: this makes sense. Or: this is crap, let's go back to the drawing board.

To make things simple, think of TDD like making a food plan for the next week(s). You come up with a generic plan as to what kind of vegetables, meat, etc you want.

From there on you drill down to actual recipes, then to actual cooking.

TDD is like that but without food. Sadly.

Sorry if this sounds dense. I'm not familiar with the official TDD methodology other than the concept of writing tests before coding (I have tried that, and found it useful in a lot of cases). But this just sounds like making a top-down design and doing project planning in general, which doesn't (necessarily) require TDD.

Is it just that, when you finally do the coding part by writing tests first, you find you're more likely to be able to follow the plan and stay within the schedule? Or does TDD prescribe specific methods of project planning and top-down design?

No worries, these questions are the right ones to ask. If you're considering TDD, you have to verify that it is the right approach!

TDD is a good complementary process for design and project planning. It helps you proof-check what you planned, or even see whether something was planned at all. Moonshots are not uncommon, and TDD would reveal them.

TDD definitely lets you stay within the schedule most of the time, and after a while, you can even be much faster than planned because TDD also trains your brain in doing the least amount of work to achieve the goal.

TDD does not require planning at all. You can start a project with just a moonshot, a bunch of ideas, and then use TDD to map these out. And this is probably the essence of TDD.

Whatever you build, TDD gives you an approach to turn an idea or plan into something actionable that can be verified and quantified. It answers "will it work?" on a technical level.

And while we're at it: keep asking. If anything, I've learned that asking questions that feel like they might be stupid is a great way to understand things, and not stupid at all.

That really depends on you, and the language you want to learn in.

One resource I recently stumbled over is Test with Go. While Go may not be your preferred language, this course does a quite decent job of introducing test concepts, and thanks to how Go works, you also get to see both the good and the ugly sides of testing.

For getting into the spirit of TDD, I would recommend researching the topic of code katas. A kata is basically you repeatedly implementing a generic thing in a language of your choice using test-driven methods. Many cities have local code kata groups, which I would recommend, as TDD works best when challenged by another person.

E.g. having a colleague implement code you wrote tests for, and switching roles with each iteration, is a great experience.


From pure observation of the mobile teams I've been in contact with, I've noticed how much TDD is not a thing.

To be frank, TDD is quite hard when a UI is involved, but you can still practice it on the behaviors, I guess (I don't know much about iOS development :D).


It is a very difficult concept to wrap your brain around. I like the idea of it, and I'm very interested in learning it more thoroughly, but in practice it's just not clicking.

Yeah, I get that. Sometimes I do TDD and sometimes I write tests after. If I write tests after, I write the shells of the various tests first so I won't forget to check them later.

TDD helps me a lot with refactoring and with legacy code. I don't always practice it when I'm exploring.

Maybe you can go back to the source and read the "original" TDD book (still quite expensive but worth it): Test Driven Development: By Example


When I realized that writing testable code resulted in me writing better code, regardless of the tests I wrote. Just ensuring my code was easy to test meant that it was more focused, organized and well designed.

"Ugh, this massive class is going to be a pain to test. I better break it down into smaller, more composable pieces to make my life easier" => Single Responsibility Principle

"Ugh, I don't want to have to mock out these constructors and static methods. I better pass those dependencies in to make my life easier" => Dependency Inversion Principle

That's the S and the D in SOLID: 2 out of 5 key design principles, just from making my life easier in the short term. How can you not follow something that gives you that?


TDD also puts strong pressure on the developer to follow the L, the Liskov Substitution Principle, because mocking / fakes require it.

And it even puts some light pressure on the I, the Interface Segregation Principle... depending on how deep the code goes into DIP.


Very good points as well!

I'm struggling with linking Open/Closed Principle to TDD. Anyone else want to step up?

They seem orthogonal IMO.

Perhaps we can say that a module that's closed for modification is not going to see as much rewriting of tests. In other words, the need to throw away tests could be a signal that you're not making as much use of inheritance as you could be. (Though perhaps not a useful signal, since if you're modifying the code then it's already obvious that it isn't closed, and there might also be a good reason for that.)

For the open-for-extension part -- if you're inheriting from something, you don't need to touch its tests, so I wouldn't expect any interplay with TDD.

To be honest, I think the tie-in with the other parts is just due to the fact that TDD is most practical when unit testing small, decoupled modules, and if your code is organized around classes, that mostly implies SOLID. In other words, to the extent that TDD and SOLID correlate, modularity might be a mediating factor.

I say this because I think you can argue that in the FP paradigm, SOLID-like properties are easier to hit just as a matter of course, in which case TDD wouldn't have as much architectural impact. (That said, there's certainly nothing preventing one from writing large, tightly-coupled functions.)


So my 'aha' moment is a bit of an outlier, because I learned to code using TDD! It's weird for me to think about writing code without there being some sort of expectation about what it's going to do. Even if it's just 1 + 1, I expect it to do something even as I write it.

Tests are those expectations written down, and TDD is writing those expectations first.

I've written a lot of bad tests: too coupled, too slow. I'm no expert, but I'll always try to write a test first.


For me it was more the other way round: I got the gist of TDD, but breaking bad habits is hard. It was hard to break the habit of diving straight into building the function or feature instead of writing the tests first. But once I made the conscious effort to write tests before building the feature, I started seeing my programming skills go up a notch. To the point where now I feel like a fish out of water when I don't write unit tests.

The thing that TDD forced me to do (as a junior dev back then) was to question the purpose and outcome of the task and feature requirements. Only then can I write my unit tests. If I can't even write unit tests for a task, it means I am still unclear on the task's requirements. This is especially true in larger companies, where the chain of command may be long and, with that, come greater chances of errors and misunderstandings in communication...


Although I've been solving code katas for quite a while, they didn't bring me the aha moment, even though I was following TDD. But when I was working on a data migration tool where quite complex but very well-specified conversions had to be implemented, there I had it. I think it would have taken much longer to implement and debug those tasks without TDD.


It's not quite TDD, but I've been having a lot of fun doing something similar while writing Clojure code. In Clojure-land it's easy enough to connect your editor to a networked REPL. I'm in Vim, and using fireplace.vim means I can execute whatever code is under the cursor. For (e.g. function) definitions, executing means that the REPL ends up containing the definition. So you're always coding in a live environment, a little like a Jupyter notebook.

The upshot is that while I'm writing something, that file will tend to have real code interspersed with test code (specifically, the one or two unit tests that I currently care about), and it only takes a couple of keystrokes, with no task-switching, to re-run them.

Coding that way is a very tight loop, and similar in feel to TDD (or at least the sort where you take care to only focus on a few fast-running tests, so that running them doesn't impact your flow).


When I started doing Go 🐹, because it's already built in.

BYO = Bring Your Own

In most other languages it's BYO 🔧, and most times it's a nightmare to set up, maintain dependencies, or even 🏃🏽‍♂️ or 👀. Also, since it's BYO... opinions, styles, etc. range widely.


My 'aha' moment was during the Roman Numerals kata.

I was studying unit testing and implementing tests on a project that had none. No matter how much effort I put into tests, coverage never reached more than 45%.

Then someone recommended practicing TDD with the Roman Numerals kata. It was awesome: I could refactor without fear, and finished with 100% coverage easily.

So, I recommend this kind of kata exercise; your life will never be the same.
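For anyone curious, the Roman Numerals kata usually ends up somewhere like the sketch below, with each (value, symbol) pair driven out by one failing test at a time (1 → "I", then 4 → "IV", and so on):

```python
# One possible end state of the Roman Numerals kata in Python.

def to_roman(n):
    symbols = [
        (1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
        (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
        (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I"),
    ]
    result = ""
    for value, symbol in symbols:
        # Greedily take the largest symbol that still fits.
        while n >= value:
            result += symbol
            n -= value
    return result

assert to_roman(1) == "I"
assert to_roman(4) == "IV"
assert to_roman(1990) == "MCMXC"
assert to_roman(2024) == "MMXXIV"
```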


It's hard to know what made all these things click... but I think some of the most important for me were:

  • Organizing my code into use cases (this makes it easier to know what to test)
  • Decoupling my code from frameworks (this makes tests fast)
  • Not testing every class, but testing every use case
  • Mixing The Magic Tricks of Testing by Sandi Metz with the use-case approach.

My TDD aha moment was when I realized most "professional projects" don't need TDD at all.

Your code matters, not tests.


> when i realized most of "professional projects"

Citation needed

> Your code matters, not tests.

This is a very short-sighted point of view


If you're testing a broken abstraction, your test doesn't work.
Tests can help you refine your abstraction, but a test doesn't create the abstraction itself.
That's why I focus on good code first; good tests then come out naturally.


For me to really get TDD, it all started off with building just a single function that did one stupid little thing.

Write the test for the function to get the expected value.
Watch it fail.
Make it pass by writing real code.
Make it fast.
Rinse and repeat.
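The loop above, applied to a deliberately tiny, hypothetical utility (`slugify` is made up for the example):

```python
# 1. Write the test first (as a plain function we can call).
def test_slugify():
    assert slugify("Hello World") == "hello-world"

# 2. "Watch it fail": with no implementation yet, calling the test
#    raises a NameError.
try:
    test_slugify()
except NameError:
    pass  # red: slugify doesn't exist yet

# 3. Make it pass by writing real code.
def slugify(text):
    return "-".join(text.lower().split())

test_slugify()  # green
```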

Starting off with just building simple, single-function utilities really helped me understand TDD. Then, as I started doing more React work and using the test renderer, my goals turned into:

Write the test to get 'x' rendering.
Do all the previous steps.

Then I started using Enzyme's 'simulate' function... And so on... And so on...

If anything, I wouldn't say it was an "a-ha!" moment, but more of a pedagogical progression from small pieces to larger pieces.

I will admit that since I'm in the habit of doing this now, the modules, components, tools, any code that I write tends to be much smaller and highly testable.


A few months ago, someone reported a bug in a web API that I built. The first thing I asked for was the data they were trying to send. I took one of the integration tests, copied it into another file, put in the data I was given, executed the test, and the thing did not fail. I was confident the bug wasn't in there, so I took a look at the source code of the website where the error occurred (I'm technically still part of their team), and it turns out the bug was in there.

And that was the first time that tests helped me hunt down a bug at work.

Note: Even though I trusted the test, I was still considering that it could be something wrong with it.


My "AHA" moment was when I realized it was like exercising... it's a lifestyle(ish)

  • Just like exercising, I don't want to do it because I'm lazy. But I'm well aware of the future benefits and ROI: benefits like making refactoring easier, making it easier to onboard people and have them contributing without worrying too much about regressions, and even making development faster, depending on what you're implementing (algorithm, API, etc.).

  • I have to plan ahead and make time for it; i.e., it's like squeezing in a workout by forcing myself to wake up an hour or two earlier. So when giving or negotiating time estimates, I have to factor this in (lots of padding).

  • I have to make goals and a game plan, i.e.: target weight or body fat == targeted code coverage or number of tests.

  • Just like whenever I'm trying to get back into shape, the beginning always sucks. But I just keep focusing on my goals and the exercise or test that is in front of me at the time.

  • Just like exercising, sometimes I miss a few days or break routine now and again; it's okay, it happens. But at least I'm cognizant and I try to get back on track, and it's always easier to get back on track if the time interval wasn't that long. This holds true for tests, sometimes we have to stop or skip to get things done or moving to hit deadlines. But as long as there's a pattern and some tests in place it's a bit easier to get back on track (this is generally true, not always).

  • I also use my exercising habit/pattern as motivation to continue testing. In example, I tell myself if I can force myself to workout I can force myself to write these tests.


My "AHA!" moment has been growing over the past few weeks or so, so I'm still working on getting comfortable with it. But there were several things which all worked to get me hooked:

  1. Spending an afternoon setting up a small suite of integration tests for Orchid which just about doubled its test coverage (~20% to ~40%). Integration tests just for coverage aren't all that useful by themselves, but it did feel quite nice and does give me a small amount of assurance in the codebase that I didn't have before. And having a bit of testing infrastructure in place makes it easier to write more focused unit tests.
  2. Realizing that these integration tests actually documented the behavior of several of Orchid's most foundational plugins quite well, better than just descriptions in Markdown. Markdown docs are needed to tell the story, but as the old saying goes, "a test is worth a thousand words" (or something like that)
  3. When I discovered that one of the plugins wasn't working correctly (and I had never noticed, because Orchid's own documentation didn't hit that edge case), I found I could hand-craft that particular scenario much more easily and ensure it is fixed properly and stays fixed. This is opposed to setting up the scenario in Orchid's own docs site, fixing the problem, then removing that stuff again, which leaves me with no long-term assurance of the fix.
  4. Orchid is getting big and full build/test runs are starting to be too long for quick, iterative development. In contrast, running individual tests in IntelliJ is very fast and scales much better.
  5. There was a particular data structure I set up early in Orchid's life, and I never really used it much because I couldn't ever get it working properly. But when I fully tested that class I not only fixed those problems, but discovered it is even more useful than my original plans for it and am finding all sorts of really great ways to use it and make Orchid's internals more consistent.

I'm a Rails developer, and early on I struggled with the granularity of my tests for a long time. Should I test every method, model, controller, association, callback, etc.? Then I read the BDD introduction article by Dan North, and that really helped me understand that it wasn't about trying to test every little line, but rather about broader "integration"-level tests.

It helped me realise that my service objects should be treated as black boxes whose interfaces are what I test directly, without worrying about the implementation. And surprisingly, when I did follow that approach, I found that my code coverage was higher than usual, my class design improved, and I also had a lot more confidence in my test suite.


Working with legacy code, wishing the previous developer had done TDD.


For me, the magic wasn't in design, but rather in Test Driven Refactoring. I was refactoring a legacy enterprise system with no real tests while needing new and old code to remain simultaneously functional in situ. Using TDD as the foundation for my "retrofit" process, the team are more trusting of the new code, and that bleeds right over to user buy-in. In the right operation where this sort of project comes up, TDD refactoring is nothing short of SDLC life extension.


I am also still waiting for the aha moment.

I do write a lot of tests for logic. And tests often do drive some changes to logic. For example, mapping out possible inputs and outputs makes me realize I forgot a case, or need to handle a specific one differently. But I use tests more as regression insurance, and typically write them after the fact.

Automated tests are still code, so they are only valuable if they are simple. Otherwise, tests could also have bugs. I draw the line at testing my tests. :-)

Ponder: if tests could be simple enough to not require tests, then why can't the code itself also be that simple?


Three things made TDD click for me.

The Jim Coplien and Bob Martin Debate TDD.

Integration Tests Are a Scam presentation by J. B. Rainsberger.

Using the right tools that made TDD not merely tolerable... but actually fun. Those tools being the .NET platform, the C# programming language, Visual Studio, NUnit, and the NCrunch unit test runner. And a strict separation of integration tests and system tests from unit tests.


I don't do TDD regularly anymore, but back when I was working on an application that did a lot of it, it helped that for feature tickets (and some bugs), most of the business analysts would write the request in cucumber logic. Being able to see functionality phrased that way definitely helped me connect what the end goal was with the actual tests.


For me, it was after I solved a problem I could not solve in a two-hour design session.
I've written about my experience here:


It’s always eluded me because most of what I do rn is on the frontend and UI testing in the browser is just one big question mark for me


What made TDD click for me was Uncle Bob's talk on The Transformation Priority Premise. It showed me why TDD works and how to actually approach it.


What does TDD stand for? Testing {something} {something}?


Test-Driven Development.

It's a design technique for software engineers, which inspires simple design and ensures basic correctness.

It's not about the testing, despite having the word "test" in the name.


Amazing how many people here confuse TDD with 'writing tests'.


My TDD aha moment was realizing that not everything has to have a unit test, and that if you approach your app with hardcore-dogma TDD, you end up writing ugly code that is too clever for itself.


TDD does not prescribe "everything" has a unit test.
