Well, if it's TDD, then tests are written first. But I do "cheat" on a few occasions and write the code before writing the test(s).
IMO, the "TDD laws" can be bent sometimes, but not too often, and you need to be careful about when.
But all in all, tests have to be written and should cover as much logic as possible (though a 100% coverage goal isn't realistic).

 

Do you write pseudocode for the tests? Or do you think it through before you write your code, so you already know what will be there?

 

That's a good question. I usually follow a path similar to the one John mentioned, which is what TDD is all about from my perspective:

  1. Write down what behavior I expect the code to have and what dependencies it might need. (You can "write" it in your mind as well, as long as there's a clear picture of what is needed.)
  2. Then I write the test to assert that behavior.
  3. And then the code.

TDD doesn't eliminate the need for design consideration; quite the opposite.
TDD is about design, and that's why your question is so important - it's not just the notion of writing tests first and code later.

 

Personally, I sketch out what functionality/logic I'll need for the feature (typically putting it down as checkboxes in a GitHub issue I've raised for whatever I'm working on); I sort of concurrently build up both the production code and the associated unit tests, fleshing out the latter as the former calls for new error-handling scenarios.

I'm still trying to get better at TDD and I'm pretty flexible with the "laws" in practice.

 

For me, it's about an 80/20 split between writing tests first, and writing the implementation first.

If we're going to dive into strict definitions: it's not TDD if the tests don't come first. The first 'D' implies that writing a test drives your code in a certain direction. Writing a test after writing the implementation it verifies doesn't drive your code anywhere - it just tells you things you already know.

As for why I write tests first so often: It's mostly a habit at this point, I've been doing it for so long that it's just how I write code nowadays. Writing a decent chunk of code without a test or two covering it genuinely makes me nervous...

You're also totally right that, in a whole bunch of cases, it's impractical to write tests first. And it's in those cases that I put on my code-cowboy boots and just start rattling out code to verify later. But, I also believe that the cases where it's impractical to write test-first code are much less prevalent than many would suggest.

A good example is when you're writing some complicated function, but you haven't worked out the approach you want to take. It's very common at this point for developers to abandon writing tests in favour of "feeling out" the problem and testing it once they know it's right.

All I would suggest in that situation is to write a coarse-grained test that at least verifies that the expected output is returned for a given input - forget the implementation details. Then let it fail. All day if it has to, and use that test as your measure of whether your implementation is correct or not.

There's no need to write a finer-grained test if you don't want to. But simply coarsely covering the behaviour of a function means you'll spend much less time debugging and console.log()-ing as you write your implementation.
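For example, a minimal sketch of what I mean (Jest-style TypeScript; `parseDuration` is a hypothetical function invented for illustration, not something from this thread):

```typescript
// Coarse-grained test, written before the implementation exists.
// It only pins down input -> output; the internals stay free to change.
import { parseDuration } from "./parseDuration"; // doesn't exist yet, so the suite stays red

test("parses a human-readable duration into seconds", () => {
  expect(parseDuration("1h 30m")).toBe(5400);
  expect(parseDuration("45s")).toBe(45);
});
```

It keeps failing until the implementation is right, and that failure is the feedback loop instead of console.log().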

Like any tool or skill, TDD has many nuances to it, and it takes years to master (if it's possible to master at all). What I've found after a few years of pushing myself to write tests first is that, over time, the cases where I feel like I can't write tests first become fewer and further between.

 

When my environment is set up well to do so, I usually write my tests as I code. Some come before I write the code, some come after. It really depends on how the flow goes. If I'm creating a new class I might:

  • Create the class file
  • Then create an accompanying test file
  • I'll think of the first public method I want to write and write the test for its basic case

Then everything from there is a back and forth of varying order. My flows have been getting more structured lately so I could see eventually doing strict TDD.

 

I don't use TDD, but I definitely write tests. Just to note that there are differences between the two.

Using a slightly different meaning of "when": I write tests when there's a behavior that I want to be sure of. I think thinking about why you write tests is more productive than strictly thinking about whether you should follow the rules or not.

I always ask myself, what do I want to make? How can I make the API clean? How can I assert that the behavior is correct? What are the steps necessary so that I'm able to verify that? What information do I want to get from those tests?

At this point, I'm just thinking, designing, and jotting notes. No code (not even test code) is written, except for maybe sketching out the possible APIs and seeing which I like best.

Afterwards, I go for the implementation first, to make sure I can write an implementation of the API that I designed. Along the way, I keep asking questions: if I write it like this, can I still verify the behavior? Is there something necessary in the implementation that I failed to consider in the design phase? If so, I can just iterate and improve the API with the new knowledge I've gathered.

When I think the implementation is done, I write the tests to verify what I want to be sure of. This is usually the point where I notice I've made silly mistakes in the implementation logic, thankfully before it goes to production.

On another note, I also write tests when I work with a library or piece of code that I haven't come across before. At the company where I work, we have a mechanism that auto-generates DB access code via annotations, and I remember writing tests for the auto-generated code (which I technically didn't need to do, as the code generator was heavily tested by its authors) to make sure I understood how it works. Most of the time these are throwaway tests that I don't commit to source control.

So there you have it. I don't write tests first, but I design and code with tests in mind. Some might say that this is in a sense TDD, but TDD purists might argue that I don't actually follow the formula of red-green-refactor religiously.

 

This is the first time I've heard the "red-green-refactor" catchphrase, but isn't it supposed to be "red-refactor-green"? 😅

 

I believe it's really red-green-refactor, as Wikipedia and several other articles confirm:

Each test case fails initially: This ensures that the test really works and can catch an error. Once this is shown, the underlying functionality can be implemented. This has led to the "test-driven development mantra", which is "red/green/refactor", where red means fail and green means pass. Test-driven development constantly repeats the steps of adding test cases that fail, passing them, and refactoring.

The principle is that if you're in the red, it's not possible to refactor confidently, because you don't know whether the code to be refactored is correct, or even exists at all!

 

Yep! I'm sure you've heard the old joke: "In theory, theory and practice are the same. In practice, they're not."

I find that lots of people disagree strongly about how/when/why tests "should" be written, but mostly agree on how to handle specific situations.

If I'm debugging, then the first thing I do is write a test to detect the bug.

If I'm building new functionality, I generally alternate between writing tests and writing application code, like so:

  • Write some super-simple implementation of what I need.
  • Write a few tests to make sure I have the basic functionality right.
  • Fix any bugs I find.
  • Write tests for edge cases, error conditions, etc.
  • Use those tests as a checklist for the rest of the work to be done.

So you can see that for the initial "happy path" functionality, I often write the code first, then the test. And then for edge cases and error conditions, I write the tests first.
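In Jest, for instance, that edge-case checklist can even live right in the test file as pending entries (a sketch; `splitCsvLine` is an invented helper, not from this thread):

```typescript
import { splitCsvLine } from "./splitCsvLine"; // hypothetical helper

// Happy path, written after the first rough implementation.
test("splits a simple comma-separated line", () => {
  expect(splitCsvLine("a,b,c")).toEqual(["a", "b", "c"]);
});

// Edge cases queued up front as todos - they show up as pending in the
// test report and double as the checklist for the remaining work.
test.todo("handles quoted fields containing commas");
test.todo("handles empty input");
test.todo("rejects unterminated quotes");
```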

And I often write test cases after doing new development to help support refactoring and maintenance efforts.

Sarah Mei's article on Five Factor Testing is excellent. She talks about fitting your testing practices to your team goals.

 

To me it doesn't really matter when tests are written, as long as code is written with testing in mind. The strict definition of TDD means that tests are written first, but it's not the perfect workflow for creating something from scratch. At least in my experience.

With that said, I do add asserts to my code as I'm working. Mostly to make sure functions don't regress. Once I've figured out how I want to structure what I'm trying to do, I'll go back and add formal tests.
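Something like this, say (a sketch using Node's built-in assert; `rebalance` is just an invented example, not the commenter's code):

```typescript
import assert from "node:assert";

// Invented example: pin down an invariant while the internals are still in flux.
function rebalance(weights: number[]): number[] {
  const total = weights.reduce((sum, w) => sum + w, 0);
  assert(total > 0, "rebalance: weights must sum to a positive number");

  const result = weights.map((w) => w / total);

  // Cheap sanity check that survives refactoring; formal tests come later.
  const newTotal = result.reduce((sum, w) => sum + w, 0);
  assert(Math.abs(newTotal - 1) < 1e-9, "rebalance: result must sum to 1");
  return result;
}
```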

 

I create two sorts of tests.

First I write my technical tests (database, connections to external systems, webserver start, scheduling, etc.). Sometimes I just copy these tests from old projects.
After that I use the user stories as definitions for my business tests. It's very helpful for finding problems inside a story.

 
 

Normally, a couple of months after the functionality has been written, when we find out nobody understands the codebase anymore. I heavily prefer TDD or BDD, though; it just doesn't seem to work in low-budget startups.

 

It depends on what I'm working on. If I have to do something quick and I already know all the possible cases that could go wrong, I either write them at the end or not at all. Otherwise:
1. Write Test
2. Test Fails
3. Write Code
4. Test Passes
5. Go To 1

 

Long before I ever heard of the concept of testing, I had the habit of modifying my main() to check whatever code I was working on. This was not "formal" testing - I was just running the function I was working on and printing the results - but I was writing code that runs my code in various scenarios to see how it acts.

Nowadays I'm trying - as much as I can - to integrate this workflow into proper testing. I'm already writing code that "tests" my code, so I just need to write it in a test function instead of in main(), and once I see that the results are correct I change the test from printing them to verifying them with assertions.
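Roughly this kind of move, in other words (a sketch; `median` is just a stand-in function, not from the original comment):

```typescript
// Before: ad-hoc checks in main(), results read by eye.
// function main() {
//   console.log(median([3, 1, 2]));    // "is that 2? looks right..."
//   console.log(median([4, 1, 2, 3])); // "2.5, ok"
// }

// After: the same scenarios, but the computer does the checking.
import { median } from "./median"; // hypothetical module

test("median of an odd-length list", () => {
  expect(median([3, 1, 2])).toBe(2);
});

test("median of an even-length list averages the middle pair", () => {
  expect(median([4, 1, 2, 3])).toBe(2.5);
});
```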

 

When I use TDD, my dev cycle (code, test) is much faster, but I don't write tests before code very often. I think I should start - I'd prefer to write tests first - but the real rule of thumb for me is using a bug report as a test case to guarantee the same bug won't return, AKA a regression test.

 

I don't subscribe to TDD as a good design philosophy, so I almost never write tests first. I'm also a functional programmer. So for business logic, I generally write exhaustive tests... not just hitting every happy path, but also testing for every possible error condition. Since I'm testing pure functions, I don't have to test for the universe of IO errors - just business-level errors - so it's not as daunting as it sounds. I haven't yet gotten into the realm of property-based testing, although I hear it is nice there.
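To illustrate what "business-level errors" on a pure function might look like (an invented TypeScript sketch, not the commenter's actual code):

```typescript
// A pure function whose failures are business errors, not IO errors,
// so every branch can be enumerated and covered.
type DiscountResult =
  | { ok: true; price: number }
  | { ok: false; error: "EXPIRED" | "MIN_ORDER_NOT_MET" };

function applyDiscount(price: number, minOrder: number, expired: boolean): DiscountResult {
  if (expired) return { ok: false, error: "EXPIRED" };
  if (price < minOrder) return { ok: false, error: "MIN_ORDER_NOT_MET" };
  return { ok: true, price: price * 0.9 };
}

test("applies 10% off on the happy path", () => {
  expect(applyDiscount(100, 50, false)).toEqual({ ok: true, price: 90 });
});

test("rejects expired coupons", () => {
  expect(applyDiscount(100, 50, true)).toEqual({ ok: false, error: "EXPIRED" });
});

test("rejects orders under the minimum", () => {
  expect(applyDiscount(40, 50, false)).toEqual({ ok: false, error: "MIN_ORDER_NOT_MET" });
});
```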

 

While TDD can help you think about how you are going to design things, it is not a design philosophy in and of itself. The thing I love about TDD is that it makes the testing part fun for me, and it replaces the time I would have spent watching things run in the debugger. People often think that because you do TDD you don't need to think first about how you are going to design or lay out your code. This is wrong: you still need to do some kind of high-level design to know where things are going to go, without getting too stuck on your initial thoughts or plans if you see the need to make changes as you go. I often find testing the most difficult part of coding, and forcing you to get it out of the way first is one of the reasons I love TDD.

 

Personally I use TDD, so I write my tests first.
TDD is a little hard to understand at the beginning, so if you're new to the testing world I would advise you to start with BDD, i.e. writing the code first.
Anyway, it doesn't really matter when you write the tests as long as they are good :)

 

I find personally that I do my best when I write the tests first. For many years I was a test-last developer, and my tests were pretty bad.

 

I do test, but I write tests last. No TDD.

I'm writing a React app and I'm still in doubt about whether to inline styles or move them to the SCSS file. I change my mind about the internals and add and remove instance variables until I'm happy with my code.

Since my process is that chaotic, writing tests would only slow me down. Maybe pseudocode tests would be fine, but I don't do that. I just focus on my file; maybe I write some guiding comments that I remove as my program progresses.

If my process didn't change that much, I could write tests first. Maybe I will eventually do so. This project is new, and maybe that's why my process is so chaotic.

 

In general, if I have a strong knowledge of the libs I'm working with and I'm not doing any kind of "spike" development, I try to start with tests. But if it's a new library I'm integrating with my code, etc., then it's usually code > tests.

 

I cheat a bit, depending on the size of the project. I'll usually get some sort of working prototype built out, more like an interactive model with a ton of stubs and TODOs. This helps with Agile workflows because... well, a ton of reasons.

Once I've got that model, then it's time to set up my tests, and everything is usually test-first from that point on.

One additional productivity trick I've learned over the years is making sure I've got a test set up and ready to go right before I leave work for the day. My brain is thinking about that test, how best to tackle it, and when I come in the next morning I've got a very clear achievement goal laid out for my morning work.

 

I often practice TDD when doing bug fixing. I'll add a new test that simulates the filed bug, with the expectation that it will fail.

I'll then proceed to provide the fix, and once done I'll rerun the test, this time expecting it to pass. Occasionally I'll do the same when implementing new features, provided there's a clear requirement for it.
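A rough sketch of that first step (Jest-style TypeScript; the bug and the `orderTotal` function are invented for illustration):

```typescript
import { orderTotal } from "./orderTotal"; // hypothetical function being fixed

// Hypothetical filed bug: "checkout crashes when an order has zero items".
// Written before the fix, this test fails and reproduces the report.
test("an order with zero items totals to 0 instead of throwing", () => {
  expect(orderTotal([])).toBe(0);
});

// After the fix it passes, and it stays in the suite so the same
// bug can't quietly come back (i.e. it becomes a regression test).
```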

 

If you know the problem domain and environment well, you're good to write the unit/integration/functional tests before development.

Otherwise, write a trivial e2e test first and keep discovering the nuances. The rest of the tests can be written later.

 

My previous project, I was getting into the TDD swing of things. Doing it "by the book": writing the failing test first, then implementing, then passing the test, refactoring, still passing, check-in. Genuine honest-to-goodness unit tests (about 70% code coverage, and the entire unit test suite took under a second to run; the integration test suite took about 650 hours to run).

I'd like to say "it was heavenly"... and it was... but only because of the tools involved. It was a C# project, using Visual Studio, NUnit, and NCrunch. If not for NCrunch, I fear it would have been miserable. But NCrunch is so amazing, it made writing unit tests and doing TDD a joy.

(Disclaimer: I do not work for the NCrunch company, nor am I affiliated with them in any way.)

My current project is a C++ project. Build times are on the order of 10 minutes for changing a single C++ source file, and about an hour for a full build. I don't think TDD would work for this kind of environment. And there's no NCrunch equivalent for C++.

 

I organize code retreats from time to time, and I've been trying to practice TDD whenever I can. Here's when and why I would write tests:

1) When there's a bug in production, I ask: was there a missing unit test?
2) When I'd like to know more about the functionality of some code, I try to write unit tests to make sure the code works as I expect.
3) How difficult it is for me to write tests for a piece of legacy code tells me how much time I'll need to work with that codebase in the long run.
4) The naming of acceptance tests/unit tests tells me how much shared understanding of the system there is between QA, Dev, and the Product Manager, and this greatly reduces the cost of maintaining the system.
5) Writing tests first helps me write the simplest/minimum code that satisfies the tests. This is very true for me, as I'm a very lazy guy.
6) As I learn more about writing tests, I realize more ways to approach problems incrementally, how to deliver business value with the least amount of effort, and how to delay as many technical decisions as possible.

Having said that, there are still many problems with writing tests first that I'm still trying to find answers to:

1) How do I test build/automation/deployment scripts such as Gradle, Ansible, etc.? I've often extracted build logic into POJO classes, where I've been able to write tests and have seen reasonable success. However, it's not very easy for me to write tests for Gradle or Ansible scripts themselves. I think we'll probably see more innovation in this space in the next 10 years.

2) I'm lucky to work with object-oriented languages, where it's not that difficult to break dependencies and there are lots of tools to help out. I'm not quite sure how easy it is with non-object-oriented languages.

3) As I learn more about tests, I think more and more people would like to write acceptance tests as a living specification of the system, and I think most acceptance tests can be pushed down to the lowest level of tests (unit tests, in Mike Cohn's test pyramid; medium.com/@ttrungvo/yet-another-i...). However, the current set of tools (JUnit, Cucumber, etc.) doesn't allow me to specify the behavior/context of the acceptance tests/specification. I hope to see more innovation in this area.

4) It wasn't easy for me to be able to write unit tests right away in the first place; I had to read a lot to learn how to break dependencies and watch lots of screencasts to get a better understanding of it. How can we find an easier way toward mass adoption? I think that, given the state of tools and practice at the moment, it's not easy for people to write tests, and that's what prevents people from adopting TDD.

 

I pretty much follow TDD over 90% of the time when writing production code. I have tried not doing this, and it always results in a big mess that I need to clean up later, so I have learned the hard way not to skip tests. Here are a few cases where I do not follow TDD:

1.) UI code - if it's look-and-feel stuff, it's pretty hard to unit test.
2.) Hard-to-test code - e.g. a piece of code that interfaces with an external API that is not under my control. I usually refactor the hard-to-test code into its own class so that I can test the rest of the code without issues (see the sketch after this list).
3.) Prototypes - if I'm just hacking around to figure out how to use some new API, or just want to investigate something with the debugger running. Though you can also use tests to do some of this work. Once this investigation work is finished, I usually rewrite everything with TDD and use the prototype code as a reference as I work.
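Roughly what point 2 looks like in practice (a sketch with invented names, in TypeScript; the real API and classes will obviously differ):

```typescript
// The hard-to-test part: talks to an external API we don't control.
interface RateProvider {
  getRate(currency: string): Promise<number>;
}

class HttpRateProvider implements RateProvider {
  async getRate(currency: string): Promise<number> {
    const res = await fetch(`https://example.com/rates/${currency}`); // external call, not unit tested
    const body = await res.json();
    return body.rate;
  }
}

// The rest of the code depends only on the interface, so it tests cleanly.
async function convert(amount: number, currency: string, rates: RateProvider): Promise<number> {
  return amount * (await rates.getRate(currency));
}

test("converts using whatever rate the provider reports", async () => {
  const fakeRates: RateProvider = { getRate: async () => 1.25 };
  expect(await convert(100, "EUR", fakeRates)).toBe(125);
});
```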

One thing I love about doing this, too: I avoid the pitfall of creating one test class for every class I have. What I try to do is create an uber class that has everything, then split things out into smaller classes as I see the need while in the red-green-refactor cycle. It helps prevent coupling my tests to my code and allows me more freedom as I refactor and clean things up.

 

I follow what I like to call DDD - "design-driven design" - in which I attempt to produce an elegant, OO design for the solution (RoR, by the way) with small, DRY, single-responsibility classes and zero (or very few) code smells.

Along the way I'm thinking about whether the names of classes and methods make sense, and whether the function and purpose of each item of code would be clear to a reviewer.

This seems to inevitably lead to easily testable code, and the unit tests in particular are then very easy to write and very comprehensive.

 

This would make a great interview question, wouldn't it? Interestingly, over the course of about 10 recent interviews for jobs whose listings said they want applicants to know TDD, nobody has asked me that.

For me, in practice, I usually write tests along with the section of code I'm working on at the time. I'll typically write out a stub or mock for the test first, then the actual code and then fill out the test.

In some cases where I'm not sure about the direction the end code will take, I'll do the code first, filling in the test as I go. For example, I'm not sure if an algorithm or design is the right solution to the problem so I'll experiment with the code before fully building out the tests.

Sometimes, I'll have a really good idea of where the code is going, for example, working with the output from a stored procedure. In that case, I'll put together the test first, complete with mocking data.

 

I manually test my APIs using Postman, and I'll write tests only after completing a module if the project is large, or after completing all the base endpoints if the project is minimal.

 

In my personal experience, basically as soon as I get hired, haha. I've gotten a gig or two by being the only person who knows what they're really doing wrt testing in iOS.
