Discussion on: Saving time with contract testing

Jilles van Gurp

Put differently, in the test pyramid, your most valuable tests are at the top and bottom. I tend to emphasize making the tests at the top count because they are expensive. So I focus on the breadth and scope of those tests to cover as many of the scenarios I'm interested in as possible. With unit tests I want to cover logic. Anything that looks like an algorithm: unit test it. Regular expressions? Unit test those. Complex if/else? Unit test it. You can run thousands of unit tests in seconds and they tend to be a few lines of code at most.
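
For instance, a regex is exactly the kind of logic that pays off as a plain unit test. A minimal sketch in Kotlin with JUnit 5, where the `isValidSku` function and its format are invented for illustration:

```kotlin
import org.junit.jupiter.api.Test
import org.junit.jupiter.api.Assertions.assertTrue
import org.junit.jupiter.api.Assertions.assertFalse

// Hypothetical validation logic: a SKU is three uppercase letters, a dash, and four digits.
private val skuRegex = Regex("""[A-Z]{3}-\d{4}""")
fun isValidSku(candidate: String) = skuRegex.matches(candidate)

class SkuValidationTest {
    @Test
    fun `accepts well formed skus`() {
        assertTrue(isValidSku("ABC-1234"))
    }

    @Test
    fun `rejects malformed skus`() {
        assertFalse(isValidSku("abc-1234"))   // lowercase letters
        assertFalse(isValidSku("ABC-123"))    // too few digits
        assertFalse(isValidSku("ABC-12345"))  // too many digits
    }
}
```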

People use confusing terminology like integration tests to mean lots of different things. The strict interpretation is simply: "not a unit test", where unit tests are strictly defined as tests that cover small units like procedures, individual class methods, algorithms, lambda functions, etc. This however is not very useful because there are many shades of grey when it comes to integration tests.

A more useful distinction is white box versus black box testing. Unit tests are a good example of pure white box tests: they are tied to and based on the implementation. You want every code path accounted for and you focus on covering relevant ranges of parameters and any exceptions that might spoil your day. The goal is catching logic bugs. Given that the complexity of each unit is low (i.e. your design doesn't suck), you can easily cover all relevant code paths. The flip side is you don't test how units interact (cue endless amounts of GIFs where unit tests passed but integration tests did not).
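
As a sketch of what covering parameter ranges and exceptions can look like, here is an invented `applyDiscount` unit with tests on the boundaries of the valid range and on the failure cases (Kotlin, JUnit 5):

```kotlin
import org.junit.jupiter.api.Test
import org.junit.jupiter.api.Assertions.assertEquals
import org.junit.jupiter.api.Assertions.assertThrows

// Hypothetical unit: percentage discount with a guarded input range.
fun applyDiscount(price: Double, percent: Int): Double {
    require(percent in 0..100) { "percent must be between 0 and 100" }
    return price * (100 - percent) / 100.0
}

class DiscountTest {
    @Test
    fun `covers the boundaries of the valid range`() {
        assertEquals(100.0, applyDiscount(100.0, 0))
        assertEquals(0.0, applyDiscount(100.0, 100))
        assertEquals(75.0, applyDiscount(100.0, 25))
    }

    @Test
    fun `rejects values outside the range`() {
        assertThrows(IllegalArgumentException::class.java) { applyDiscount(100.0, -1) }
        assertThrows(IllegalArgumentException::class.java) { applyDiscount(100.0, 101) }
    }
}
```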

Integration tests near the bottom of the pyramid tend to also be white box tests. They tend to get confused with unit tests, especially by less experienced developers (not the same thing). Simply put, if it writes to a database and you are poking around in it, it's a white box test. If it relies on different parts of your code base and you are stubbing or mocking some bits out, it's a white box test. A white box test gets you poor coverage (combinatorial explosion of code paths and parameters) and at the same time poorly reflects what your users will do (use contractually defined APIs to interact with your system).
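
A minimal sketch of that kind of white box test, with a hand-rolled stub standing in for a collaborator; `UserRepository` and `WelcomeMailer` are made-up names for illustration:

```kotlin
import org.junit.jupiter.api.Test
import org.junit.jupiter.api.Assertions.assertEquals

// Hypothetical collaborator and service under test.
interface UserRepository {
    fun findEmail(userId: String): String?
}

class WelcomeMailer(private val users: UserRepository) {
    fun welcomeAddress(userId: String): String =
        users.findEmail(userId) ?: "unknown@example.com"
}

class WelcomeMailerTest {
    // The repository is stubbed out, so the test knows (and depends on)
    // how WelcomeMailer talks to it internally: a white box test.
    private val stubRepository = object : UserRepository {
        override fun findEmail(userId: String) =
            if (userId == "42") "jane@example.com" else null
    }

    @Test
    fun `falls back to a placeholder when the user is unknown`() {
        val mailer = WelcomeMailer(stubRepository)
        assertEquals("jane@example.com", mailer.welcomeAddress("42"))
        assertEquals("unknown@example.com", mailer.welcomeAddress("7"))
    }
}
```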

Also, white box tests are deeply tied to your implementation. Meaning that if you refactor, your white box tests will need attention. Having a lot of them can become an obstacle to that. I've seen projects where refactoring was infeasible because any meaningful change would require hundreds of tests to be refactored. Bad stuff. Avoid such tests and minimize their use. If you have a lot of white box tests that look like developers have been copy-pasting bits of tests, you have a huge problem in the form of technical debt.

Contract and e2e tests are typically black box tests. You test the system against published APIs (aka its contract), typically from the outside, and preferably in a way that is similar or close to how this will be done in a production deployment. The big difference is whether you stub out other microservices, infrastructure components and third party stuff or not. In a way, contract testing is similar to microservice-level unit testing.
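
For example, a contract-style check might call a published endpoint from the outside and assert only on the documented response, never on internal state. The base URL, path and fields below are placeholders invented for the sketch:

```kotlin
import org.junit.jupiter.api.Test
import org.junit.jupiter.api.Assertions.assertEquals
import org.junit.jupiter.api.Assertions.assertTrue
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

class UserApiContractTest {
    private val client = HttpClient.newHttpClient()

    // Placeholder base URL; in practice this points at a deployed or locally started instance.
    private val baseUrl = "http://localhost:8080"

    @Test
    fun `get user returns the documented fields`() {
        val request = HttpRequest.newBuilder(URI.create("$baseUrl/users/42"))
            .GET()
            .build()
        val response = client.send(request, HttpResponse.BodyHandlers.ofString())

        // Only the published contract is asserted on: status code and field names.
        assertEquals(200, response.statusCode())
        assertTrue(response.body().contains("\"id\""))
        assertTrue(response.body().contains("\"email\""))
    }
}
```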

It's useful but it doesn't replace e2e or scenario oriented (or manual) testing where you verify your constellation of microservices actually gets a job done end to end for the user. Cue the aforementioned GIFs; exactly the same problems.

For me the benchmark in this is testing things like user signup flows. This typically involves writing to databases, interacting with queues, external mail services, SMS, and time sensitive things like tokens. You might have all your contract tests passing and still have users unable to sign up because something went wrong with external services, something timed out, got firewalled off, etc. You cannot discover this with just contract tests. E2e tests are the only automated way you can ensure this keeps working. Essentially all projects I've been on that had a user signup flow at some point ended up with repeated production issues related to users not being able to get into the system. In some of these projects we ended up with browser based tests (Selenium) and/or deep e2e tests against realistically configured systems designed to mimic as closely as possible the sequence of events in real life. E2e tests are complex and expensive; you don't want to have a lot of them but you typically need some.
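
A rough sketch of what such an e2e check could look like, assuming a hypothetical signup API and a test inbox endpoint that captures the confirmation mail; every URL, field and status code here is invented for the example:

```kotlin
import org.junit.jupiter.api.Test
import org.junit.jupiter.api.Assertions.assertEquals
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

class SignupFlowE2ETest {
    private val client = HttpClient.newHttpClient()
    private val baseUrl = "https://staging.example.com" // placeholder environment

    private fun post(path: String, body: String): HttpResponse<String> {
        val request = HttpRequest.newBuilder(URI.create("$baseUrl$path"))
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(body))
            .build()
        return client.send(request, HttpResponse.BodyHandlers.ofString())
    }

    @Test
    fun `a new user can sign up and confirm their account`() {
        val email = "e2e-${System.currentTimeMillis()}@example.com"

        // Step 1: trigger the signup, which goes through the real database, queue and mail service.
        val signup = post("/signup", """{"email": "$email", "password": "s3cret!"}""")
        assertEquals(202, signup.statusCode())

        // Step 2: poll a (hypothetical) test inbox until the confirmation token arrives;
        // this is where timeouts, firewalls and misconfigured external services surface.
        val token = pollForConfirmationToken(email)

        // Step 3: confirm and verify the user can actually get in.
        val confirm = post("/confirm", """{"email": "$email", "token": "$token"}""")
        assertEquals(200, confirm.statusCode())
    }

    private fun pollForConfirmationToken(email: String): String {
        // Sketch only: a real implementation would poll a mail-capture service
        // (or a test SMS gateway) with a deadline and fail loudly on timeout.
        repeat(30) {
            Thread.sleep(1000)
            val response = post("/test-inbox/latest-token", """{"email": "$email"}""")
            if (response.statusCode() == 200) return response.body().trim()
        }
        error("No confirmation token received for $email within 30 seconds")
    }
}
```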

Nicola Apicella • Edited

Great overview of the test pyramids :)
I completely share your thoughts about the need for some e2e tests which cover all the layers of the system.
Having e2e is important to make sure all the pieces work nicely together (configs, certificates, throttling, etc).
This is definitely something you can only test by running the whole stack.

In many cases though, there is an abuse of e2e and integration tests for what apparently seems to be a lack of abstraction in the system.

In other cases, it is about not being accustomed to a richer set of testing strategies which can speed up the development process when used in the right context.
For example, I am pretty sure that lots of the integration tests I wrote a couple of years ago could have been contract tests XD