Recently I've been spending a lot of time thinking about automated tests.
Why do we write them? Well, the benefits are obvious:
- They prove your new code works
- They continue to prove your new code is still working, even as it ages and drifts into obsolescence...
- They act as documentation that must be updated, providing valuable insights to the next developer who needs to maintain your code
So why do so many companies struggle to establish coherent testing strategies, given these benefits? Why are there so many instances of "we tried it, but it didn't really work, so we're trying this instead"?
Well, is it any surprise really?
How can you talk with both your fellow devs and non-devs about the different approaches without getting into a quagmire of confusion?
Unit tests, Integration tests, Regression Tests, Feature Tests, White Box, Black Box and more...
Given, When, Then. Arrange, Act, Assert.
And the ever-famous "How big is a unit in a unit test?"
This post is an attempt to put into words the things I've learned and the perspective I've gained, to cut through the mess and pull all of the confusion into a single framework.
All in the format of a conversation with my past self, if you will permit... ;)
1) Drop the unit tests and integration tests
What?
I know, I know. Advising you to drop tests - crazy, right? But hold on!
My advice is not to stop writing the tests you already write, but to drop the terms and the baggage of connotations that comes with them.
Stop thinking about unit tests as things that test your classes.
Stop thinking of integration tests as things that test your services.
Instead, think about identifying what it is you want to test and talk in those terms. Don't write unit tests and integration tests, write:
- Code level tests
- Build level tests
- Subsystem level tests
- System level tests
And then treat this test entity as a black box. What this means is that for:
- A class, you do not write tests that rely on private state or that assert certain paths of logic are followed
- The build of your service, you do not write tests that assert the flow through certain classes
- A subsystem, you do not test the interactions or state of the individual services that make up the sub-system
- The whole system, you do not test the interactions or state of its subsystems/services
Phrased another way, we only want to write our tests based on four things:
- What we input into the test entity
- What the test entity returns
- What the test entity outputs into its dependencies
- What the test entity can receive as an input from its dependencies
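To make that concrete, here's a minimal sketch of a code level test written against only those four points - TypeScript with Jest-style assertions, where the OrderService and PaymentGateway names are purely hypothetical:

```typescript
// A hypothetical entity-under-test with a single dependency.
interface PaymentGateway {
  charge(accountId: string, amountInCents: number): Promise<boolean>;
}

class OrderService {
  constructor(private readonly payments: PaymentGateway) {}

  async placeOrder(accountId: string, amountInCents: number): Promise<string> {
    const accepted = await this.payments.charge(accountId, amountInCents);
    return accepted ? "CONFIRMED" : "REJECTED";
  }
}

describe("OrderService (black box)", () => {
  it("confirms the order when the dependency accepts the charge", async () => {
    // What the entity can receive from its dependency: we stub the reply.
    const payments = { charge: jest.fn().mockResolvedValue(true) };
    const service = new OrderService(payments);

    // What we input into the entity, and what it returns.
    const result = await service.placeOrder("acc-1", 500);
    expect(result).toBe("CONFIRMED");

    // What the entity outputs into its dependency - and nothing about
    // private state or which internal logic paths were followed.
    expect(payments.charge).toHaveBeenCalledWith("acc-1", 500);
  });
});
```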
By doing this, you'll be able to drop the shackles of unit tests and integration tests and start talking about these test levels as gates to the next stage in your CI/CD train!
- Code level tests pass? We can create the build!
- Build level tests pass in an emulated deployment environment? We can deploy it into an environment!
- Subsystem/System level tests pass in the deployed environment? Our environment is healthy - we can promote it to the next environment!
- ...and repeat for however many environments you want!
2) Embrace that you can test four things
No matter the size of your test entity, you can always test four things related to that entity:
- Quality (An aspect of the entity-under-test that isn't directly tested but can be analysed by metrics)
- Features (Something the entity-under-test does entirely within itself)
- Integrations (Something that the entity-under-test does by interacting with other similarly sized entities)
- End-to-end (Testing that a complex, multi-step use case generates the expected result, but strictly only when the state is contained within the entity-under-test)
Okay? What does this mean? Can you give examples?
Sure:
Code level tests, such as testing a class/React component
-- Quality - this is stuff like running a code linter to check the code is in line with standards
-- Features - this is the equivalent of testing a sorting algorithm, where all logic is contained in the class
-- Integrations - this is the equivalent of testing that your class interacts with other classes correctly
-- E2E - if you're writing this kind of test here then your code is probably badly written, to be honest. But there are some valid use cases, such as testing an object that implements the Builder pattern, so it still applies!
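As a quick illustration of the Features case at this level, a sketch might look like the following (TypeScript, Jest-style; sortByPrice is a made-up, fully self-contained helper) - pure input and return value, no dependencies involved:

```typescript
// Hypothetical logic that lives entirely inside the entity-under-test.
interface Product {
  name: string;
  priceInCents: number;
}

function sortByPrice(products: Product[]): Product[] {
  return [...products].sort((a, b) => a.priceInCents - b.priceInCents);
}

describe("sortByPrice (code level feature test)", () => {
  it("orders products from cheapest to most expensive", () => {
    const input = [
      { name: "keyboard", priceInCents: 4999 },
      { name: "cable", priceInCents: 799 },
      { name: "monitor", priceInCents: 19999 },
    ];

    expect(sortByPrice(input).map((p) => p.name)).toEqual([
      "cable",
      "keyboard",
      "monitor",
    ]);
  });
});
```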
Build level tests, such as testing a NodeJS docker container
-- Quality - this is stuff like checking the image does not contain your dev dependencies
-- Features - could be something like testing that a particular element is visible on a webpage
-- Integrations - could be something like testing that given a POST, a row is created in a database
-- E2E - could be something like uploading a png and then downloading a watermarked version from your service, where the image is kept in the container's local file system
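For that Integrations case, a sketch could look like this - TypeScript using Node's built-in fetch and the pg client, where the service URL, the /products route and the table name are all assumptions for illustration. The test only looks at what goes into the container and what comes out into its database dependency:

```typescript
import { Client } from "pg";

// Assumed locations - adjust to however your build is started for testing.
const SERVICE_URL = "http://localhost:3000";
const TEST_DB_URL = "postgres://test:test@localhost:5432/products_test";

describe("product service container (build level integration test)", () => {
  it("creates a row in the database when a product is POSTed", async () => {
    // Input into the entity-under-test: a plain HTTP request.
    const response = await fetch(`${SERVICE_URL}/products`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ name: "rubber duck", priceInCents: 350 }),
    });
    expect(response.status).toBe(201);

    // Output into the dependency: the row exists - but we say nothing about
    // which classes inside the container put it there.
    const db = new Client({ connectionString: TEST_DB_URL });
    await db.connect();
    const result = await db.query(
      "SELECT name FROM products WHERE name = $1",
      ["rubber duck"]
    );
    await db.end();
    expect(result.rows).toHaveLength(1);
  });
});
```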
Subsystem level tests, such as testing your new NodeJS image deployed with the database it uses
-- Quality - this is stuff like how is the performance? What are the error rates like?
-- Features - could be something like testing that "get product info" returns JSON (but not testing that it used certain tables in your database! It's a black box!)
-- Integrations - could be something like testing that your subsystem fetches the stock of a product from another subsystem
-- E2E - testing things like we can create product followed by get product
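A sketch of that last case could look like the following - plain HTTP (Node's built-in fetch) against wherever the subsystem happens to be deployed, with the base URL and routes as assumptions. All the state stays inside the subsystem, so it remains a black box:

```typescript
// Assumed to point at the environment the subsystem was just deployed into.
const SUBSYSTEM_URL = process.env.SUBSYSTEM_URL ?? "http://localhost:8080";

describe("product subsystem (subsystem level E2E test)", () => {
  it("can create a product and then fetch it back", async () => {
    // Step 1: create the product.
    const created = await fetch(`${SUBSYSTEM_URL}/products`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ name: "rubber duck", priceInCents: 350 }),
    });
    expect(created.status).toBe(201);
    const { id } = await created.json();

    // Step 2: fetch it back - we never peek at the tables that store it.
    const fetched = await fetch(`${SUBSYSTEM_URL}/products/${id}`);
    expect(fetched.status).toBe(200);
    const product = await fetched.json();
    expect(product.name).toBe("rubber duck");
  });
});
```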
System level tests, such as testing a social media site
-- Quality - similar to subsystem level tests, this is stuff like how is the performance? What are the error rates like?
-- Features - could be something like testing that the search feature has auto-suggest
-- Integrations - I lied earlier. You can't actually test this one! More on this soon...
-- E2E - testing things like upload picture, rename picture & tag friends
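A system level E2E test like that is usually driven through the user-facing surface. Here's a rough sketch using Playwright, where the URL, selectors and page flow are entirely made up for illustration:

```typescript
import { test, expect } from "@playwright/test";

// Assumed to point at the environment the whole system is deployed into.
const SYSTEM_URL = process.env.SYSTEM_URL ?? "https://staging.example.com";

test("upload a picture, rename it, and tag a friend", async ({ page }) => {
  await page.goto(`${SYSTEM_URL}/photos`);

  // Upload a picture (hypothetical selectors throughout).
  await page.setInputFiles('input[type="file"]', "fixtures/holiday.png");
  await page.click("text=Upload");

  // Rename it.
  await page.fill('input[name="title"]', "Beach day");
  await page.click("text=Save");

  // Tag a friend and check the result from the user's point of view only.
  await page.fill('input[name="tag"]', "alex");
  await page.click("text=Add tag");
  await expect(page.locator("text=Beach day")).toBeVisible();
});
```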
Alright, but why no integration tests at the system level?
Wait. First recognise that you can now talk about the different aspects of an entity-under-test with a fellow developer, and that you have strong, clear definitions to back them up. Awesome, right?
Fine, I can see that, but why no...
Okay, okay! I get it already, I understand. Why no system level integration tests... Well, it's simple really (though it can require a certain amount of perspective shifting to see it) - the system does not interact with anything of a similar size.
It is made up of subsystems that interact with each other, including any third party ones you might use!
Eh?
When you integrate your own sub-system with a third party one, that third party system becomes a part of your system.
Without it, you don't function.
I'm not sure...?
Alright, try thinking of it this way.
Let's say you wrote a system that, upon ordering food, posts to DevTo.
You write some integration tests at the build or subsystem level to check that it interacts with a mock DevTo service as expected - great!
We still have to use the real DevTo though and check that things continue to work when we're not using the mock.
Well, you can define your food sub-system & DevTo as a sub-system itself! That is, DevTo can be thought of as part of the internal state in your entity-under-test. It's no longer two entities talking to each other but rather two entities thought of as one.
Through this absorbing effect, anything that's used by your system becomes part of your system.
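To tie that back to the earlier levels: the interaction with DevTo is still tested, just lower down, against a stand-in. A sketch of that build/subsystem level integration test might look like this - Node's built-in http module playing the part of the mock DevTo, with the URLs, port and payload shape all assumed for illustration. Once you move up to the system level, the real DevTo is simply absorbed into the entity-under-test and only feature, E2E and quality tests remain:

```typescript
import http from "http";

// Assumed location of the food service under test, which is assumed to be
// configured to post to http://localhost:4001 instead of the real DevTo.
const FOOD_SERVICE_URL = "http://localhost:3000";

describe("food service (integration test against a mock DevTo)", () => {
  it("posts to DevTo when an order is placed", async () => {
    // A throwaway stand-in for DevTo that just records what it receives.
    const received: string[] = [];
    const mockDevTo = http.createServer((req, res) => {
      let body = "";
      req.on("data", (chunk) => (body += chunk));
      req.on("end", () => {
        received.push(body);
        res.writeHead(201).end();
      });
    });
    await new Promise<void>((resolve) => mockDevTo.listen(4001, resolve));

    // Input into the entity-under-test: place an order.
    const response = await fetch(`${FOOD_SERVICE_URL}/orders`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ meal: "ramen" }),
    });
    expect(response.status).toBe(201);

    // Output into the dependency: exactly one post reached "DevTo".
    expect(received).toHaveLength(1);

    await new Promise<void>((resolve) => mockDevTo.close(() => resolve()));
  });
});
```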
Summary
So that's it! Automated tests can be described in generic terms. Provided you pass all of the tests you've written under the four categories, you can confidently move your entity-under-test to the next stage. You can move from a class in the source to a build. From a build in a per-build environment to a per-sub-system environment. From per-sub-system to the whole system. Rinse and repeat as many times as you need to reach the production environment!
Now comes the hard part. Establishing common understanding with my peers based on this framework...