Stef van Hooijdonk for Coolblue

Testing

The software we create should do what it was intended to do. To be sure that it does, we want our production code to be well-covered by automated tests. These tests should be runnable with the click of a button, and they should be run automatically before each release.

Although testing, when done right, ultimately makes you more productive because you spend less time fixing problems after a release, it does ‘slow you down’ up front compared to writing code without any tests. Most applications and services have a long lifecycle that warrants extensive test coverage, but given the nature of some of our work, some applications or features are so trivial, short-lived, or time-critical that having few or no tests is acceptable.

The ‘code coverage’ metric says nothing about the quality of tests, so we should not use it to fail builds or block deployments. Tests can execute your production code without verifying the right business concepts. Moreover, some units do not need dedicated unit tests at all: they can only be tested properly through an integration test, or they are so trivial that they are already covered by their consumers.

TDD at Coolblue means:

Process

  • Write a failing test first, then write code until that test passes. Do not write any more production code than is necessary to make the one failing test pass.
  • Red, green, refactor. Don’t forget to refactor; it’s the most important part (see the sketch after this list).
  • Everyone on board. The whole team must adopt TDD; do not adopt it partially.
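
To make the cycle concrete, here is a minimal sketch in Python (the `Order` class and the discount rule are invented for this example, not code from our systems): first the failing test, then just enough production code to make it pass, then refactoring while staying green.

```python
# Step 1 (red): the test is written first and fails, because Order does not exist yet.
def test_ten_percent_discount_is_applied_to_order_total():
    order = Order(total=100.0)
    assert order.total_with_discount(percentage=10) == 90.0


# Step 2 (green): write only enough production code to make the failing test pass.
class Order:
    def __init__(self, total):
        self.total = total

    def total_with_discount(self, percentage):
        return self.total - self.total * percentage / 100


# Step 3 (refactor): with the test green, improve names and structure;
# the test keeps guarding the behaviour while you clean up.
```

Running `pytest` on this file after step 1 shows the red bar; after step 2 it goes green, and step 3 happens with that safety net in place.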

Writing tests

  • Consider the inputs, the outputs, and all possible weaknesses (error paths) and strengths (successful runs).
  • Do not overcomplicate your tests. A test should be simple to set up and execute.
  • Tests should run and pass on any machine/environment. If tests require special environmental setup or fail unexpectedly, then they are not good unit tests.
  • Make sure your tests clearly reveal their intent. Another developer can look at the test and understand what is expected of the code.
  • Each test should have a limited scope: if it fails, it should be obvious why. Assert only one logical concept per test; multiple asserts on the same object are usually fine, since they verify the same concept.
  • Keep your tests clean. They are just like your production code.
  • External dependencies should be replaced with test doubles in unit tests (see the sketch after this list).
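
As an illustration of these points, here is a hedged Python sketch (the `PaymentService` and the gateway interface are invented for this example): the test name reveals its intent, the setup is minimal, it asserts one logical concept, and the external payment gateway is replaced with a test double so the test runs on any machine.

```python
from unittest.mock import Mock


class PaymentService:
    """Simplified production code under test."""

    def __init__(self, gateway):
        self.gateway = gateway

    def pay(self, order_id, amount):
        result = self.gateway.charge(order_id, amount)
        return result["status"] == "ok"


def test_payment_succeeds_when_gateway_accepts_the_charge():
    # The external gateway is a test double: no network, no special environment.
    gateway = Mock()
    gateway.charge.return_value = {"status": "ok"}

    service = PaymentService(gateway)

    # One logical concept: a successful charge means the payment succeeds,
    # and the gateway was asked to charge exactly once with the right arguments.
    assert service.pay(order_id=42, amount=10.0) is True
    gateway.charge.assert_called_once_with(42, 10.0)
```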

Running tests

  • Unit tests should be fast: the entire unit test suite should finish in under a minute. Unit tests should run with zero effort (after installing dependencies).
  • The ratio of tests across the levels should be balanced according to the pyramid model (e.g. 80% unit tests, 15% integration tests, 5% acceptance tests).
  • Acceptance tests should be divided into suites based on features (see the sketch after this list).
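
One way to keep the levels separate and the unit suite fast is to tag tests by level. This is a sketch assuming pytest, with made-up marker and feature names; acceptance tests then live in per-feature suites, and `pytest -m unit` runs only the fast tests.

```python
import pytest


@pytest.mark.unit
def test_line_items_are_summed_into_the_order_total():
    assert sum([10, 15]) == 25  # stands in for a real, in-memory unit of work


@pytest.mark.integration
def test_orders_are_persisted_and_read_back():
    ...  # would talk to a real (test) database, so it is slower and runs less often


@pytest.mark.acceptance
def test_customer_can_complete_checkout():
    ...  # would drive the application end to end; lives in a per-feature suite such as acceptance/checkout/
```

Markers like these would need to be registered in `pytest.ini` to avoid warnings; running `pytest -m unit` keeps the everyday feedback loop under a minute, while the slower suites run in CI.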

Measurements of success for teams

  • Integration and unit tests pass before merging.
  • Acceptance tests run successfully in the Acceptance environment before code is deployed to production.
  • Unit and integration tests run in the CI environment on each PR.
  • Failing tests block deployment. All tests run successfully in the CI environment before code is deployed.

Note: ‘Coverage’, i.e. the percentage of lines covered by a unit or integration test, is not a measure of success, because it says nothing about how well the code has been tested. Do not make a specific coverage level a requirement: it is hard to reach, and it pushes people to write nonsense tests just to hit the number.

Suggested resources

  • Read Test Driven Development By Example by Kent Beck
  • Read Clean Code by Robert C. Martin
  • Read xUnit Test Patterns: Refactoring Test Code by Gerard Meszaros
  • Check training videos at https://cleancoders.com
  • Read Working Effectively with Legacy Code by Michael Feathers
