Bosco Domingo

The 5-minute guide to Software Testing concepts and strategy

Preface: I've recently taken it upon myself to establish automated testing practices at my current job. We're a small tech team in a growing startup and the margin for error was quite large, so up until now, manual testing sufficed and writing tests would've actually slowed us down.

We've now reached the point, however, where automated tests are worth the time investment. So, after doing my research, I've put together a simple, brief guide that can serve as a starting point for others treading this path. Enjoy!


Unit vs Integration vs Contract vs White/Clear-box vs Black-box vs Acceptance vs End-to-End (E2E) testing... What does this all mean?

First, tests can be classified by scope or size of the subject under test (SUT):

  • Unit testing: testing the smallest blocks of code you can think of. Depending on the language, these can be simple functions or entire classes... No single definition exists, but the consensus is to keep the unit small and limited to one task/operation/piece of functionality (see the sketch after this list).
  • Integration testing: testing your components together (think 2 Units interacting with each other) or with external components (like calling a DB). External components are usually mocked.
  • Contract testing: tests for your integration tests. If the mocks in your integration tests drift out of sync with the real external dependencies, these will tell you.
  • Acceptance testing: checking that your entire system behaves as expected, AKA fulfills the requirements. For example: "If I press sign up and fill in the form asking for my details, I expect my user to be created with those details". This requires all components and external dependencies to be tested together.
  • End-to-end testing: exercising a complete user flow through the fully assembled system. The difference from acceptance tests isn't crystal-clear, and I think they are two sides of the same coin.
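
To make the first two concrete, here's a minimal sketch assuming a Jest-style runner (Vitest here); `applyDiscount`, `SignupService`, and `UserRepo` are made-up examples, not anything from a real codebase:

```typescript
import { describe, it, expect } from "vitest";

// Hypothetical SUT: a small pure function (the "unit")
export function applyDiscount(price: number, percent: number): number {
  if (percent < 0 || percent > 100) throw new RangeError("percent must be 0-100");
  return price * (1 - percent / 100);
}

// Hypothetical service with an external dependency (a DB repository)
interface UserRepo {
  save(name: string): Promise<{ id: number; name: string }>;
}

export class SignupService {
  constructor(private repo: UserRepo) {}
  signup(name: string) {
    if (!name.trim()) throw new Error("name required");
    return this.repo.save(name.trim());
  }
}

describe("unit test: one function, no dependencies", () => {
  it("reduces the price by the given percentage", () => {
    expect(applyDiscount(100, 25)).toBe(75);
  });
});

describe("integration test: service plus a mocked external component", () => {
  it("persists the trimmed name via the repository", async () => {
    // The real DB is replaced by an in-memory fake
    const fakeRepo: UserRepo = { save: async (name) => ({ id: 1, name }) };
    const user = await new SignupService(fakeRepo).signup("  Ada  ");
    expect(user.name).toBe("Ada");
  });
});
```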

Second, they can be classified by the nature of how they test:

  • White/Clear-box testing: testing that takes the internal workings of the code into account, meaning the focus is on exercising all possible execution branches, usually measured by code coverage (tested code ÷ total code). They test how the code works.
  • Black-box testing: testing that only cares about inputs and outputs, disregarding implementation. They test what the code does.

Generally, black-box testing is viewed as the more "productive" or "useful" way to test, but this is highly subjective. The sketch below shows the idea: the test asserts only on inputs and outputs, so it survives any refactor that preserves behaviour.
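
A minimal black-box example (again assuming Vitest; `sortAscending` is hypothetical):

```typescript
import { it, expect } from "vitest";

// Hypothetical SUT: the implementation is free to change
const sortAscending = (xs: number[]): number[] => [...xs].sort((a, b) => a - b);

// Black-box: asserts only on inputs and outputs, never on internals,
// so it keeps passing whether the function uses quicksort, mergesort,
// or the built-in sort
it("sorts numbers ascending", () => {
  expect(sortAscending([3, 1, 2])).toEqual([1, 2, 3]);
});
```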

What strategy do I follow?

Generally speaking, you should keep three things in mind:

1. Test as much or as little as you need to give you a level of confidence in your code you're happy with.
2. Only test your code.
3. Keep tests out of the way.

1 is self-explanatory.
2 means don't test external frameworks or dependencies. They should have their own tests.
3 means don't make tests cumbersome to run or maintain, or people will skip many of them or stop running them altogether. They'll break, become an annoyance, and increase lead times.

How?

  1. Prioritise. Start with the core features, then expand as you see fit. It's better to have 20% of the code tested than nothing at all. Go for happy paths and the most common routes first, then focus on edge cases and other concerns.
  2. Black-box first. Black-box tests tend to be far easier to write and aren't tied to the actual implementation, so they only break if your code stops doing what it's supposed to. By extension, they also require fewer changes (they only change when requirements change).
  3. Use the "Given, When, Then" syntax (AKA "Arrange, Act, Assert"). Given a user wants to log in, when they enter the wrong password, then they should see an error message and be denied entry (see the sketch after this list).
  4. Keep a "commit suite" and a "deployment suite". The commit suite is the one you run after every significant change and/or prior to a commit; it should only involve the code you're working on and be extremely fast. The deployment suite should run before any deployment (likely as a PR check or a pre-push git hook); a script sketch follows the summary below.
  5. Only test public interfaces. Don't test private methods, and worse still, don't make them public/protected just to test them. Only test what is reachable/callable from outside.
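
Here's what point 3 can look like in practice. `FakeAuthService` is a made-up in-memory stand-in, not a real library; the runner is again assumed to be Vitest:

```typescript
import { it, expect } from "vitest";

// Hypothetical in-memory auth service, purely for illustration
class FakeAuthService {
  private users = new Map<string, string>();
  async register(email: string, password: string): Promise<void> {
    this.users.set(email, password);
  }
  async login(email: string, password: string): Promise<{ ok: boolean; error?: string }> {
    return this.users.get(email) === password
      ? { ok: true }
      : { ok: false, error: "Invalid credentials" };
  }
}

it("denies entry and shows an error on a wrong password", async () => {
  // Given (Arrange): a registered user
  const auth = new FakeAuthService();
  await auth.register("ada@example.com", "correct-horse");

  // When (Act): they enter the wrong password
  const result = await auth.login("ada@example.com", "wrong-password");

  // Then (Assert): they see an error and are denied entry
  expect(result.ok).toBe(false);
  expect(result.error).toBe("Invalid credentials");
});
```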

Have a lot of fast, easy-to-maintain, self-explanatory unit tests; fewer integration tests that ensure everything works together; and a couple of acceptance tests that run only in the deployment suite or manually every day. Add contract tests to ensure your integration tests are still correct, and run those once a week or so. That's it: you're now successfully (and pragmatically) testing your code!
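
As for the commit/deployment split, one possible way to wire the two suites is with npm scripts. This assumes Vitest (whose `--changed` flag runs only tests affected by uncommitted changes); the script names are just illustrative:

```json
{
  "scripts": {
    "test:commit": "vitest run --changed",
    "test:deploy": "vitest run"
  }
}
```

Developers run `npm run test:commit` locally for fast feedback, while CI runs `npm run test:deploy` as a PR check before anything ships.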


Disclaimer: Because I've made an effort to keep this brief, I've obviously glossed over details and nuances and over-simplified in places. This is meant as a primer, not the be-all-end-all ultimate guide to testing. For that, I recommend you check out some of the following sources:

  1. The Practical Test Pyramid
  2. Test Doubles
  3. Given, When, Then
  4. Unit Test
  5. Integration Test
  6. Contract Test
  7. Testing Best Practices

Top comments (3)

Emmanuel Eneche

Great insights. The many benefits of writing automated tests cannot be overemphasized, because code must do what it's meant to do. Thank you for sharing.

Bosco Domingo

Absolutely, but you know what often brings even more benefits than testing? Being able to write the code to be tested in the first place. I'm a proponent of testing, especially core business and sensitive modules, but at the same time, testing increases development time, so we should always keep that in mind and find the right balance.

Thanks for your comment!

Emmanuel Eneche

Of course, writing code that can be tested is one thing. Testing offers lots of benefits, such that we may not need to drill into the details. However, there's no point doing all that on a system with zero users or revenue 😂. So yeah, striking the balance, as you said, is wisdom.

Thanks a lot 🙌🏻