DEV Community

Carlos Gándara


Writing better tests


In this last chapter of the series, we will discuss some tips for writing better tests.

These three posts are based on a talk I gave which, in this last part, covered several antipatterns and testing topics of varying complexity and value. After some time, I realized it would be better to focus on just a few of them: the ones that turned out to be the root of the most painful situations I have faced.

Testing implementation hurts


The headline:

Test behavior, not implementation

This is one of the most well-known mantras in testing, but, for some reason, it is often forgotten in practice. Testing implementation, although it provides a feeling of getting work done and some short-term confidence, will be painful in the mid to long term.

The reason is that it makes refactoring difficult, and refactoring is one of the most common activities for developers.

To illustrate how painful testing implementation can be, let's look at an example. We need to implement an algorithm that calculates the total price for a checkout process. We end up with this test:

public function testCheckoutTotalCalculation(): void
{
    $itemsInBasket = [
        $this->oneItemWorth500,
        $this->anotherItemWorth500
    ];
    $repository    = $this->createMock(CheckoutItemsRepository::class);
    $repository->expects($this->once())
               ->method('fetch')
               ->with(self::BASKET_ID)
               ->willReturn($itemsInBasket);

    $checkoutTotalCalculator = new Calculator($repository);

    $basket          = new Basket(self::BASKET_ID);
    $calculatedTotal = $checkoutTotalCalculator->calculateTotal($basket);

    $expected = 1000;
    self::assertEquals($expected, $calculatedTotal);
}

It does not look bad and it passes, so we move on to the next task and life is beautiful. However, this test is brittle: it tests the implementation, and it will hurt us at some point in the near future.

When implementing some unrelated functionality, such as listing the items in the shopping basket, which also uses the CheckoutItemsRepository, we decide to improve the repository to something like this:

public function fetch(int $basketId): ItemCollection { ... }

We moved to collections instead of plain arrays to take advantage of their map capabilities. But our previous test is now failing.
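Such a collection could look like the following minimal sketch (hypothetical; the real class would carry more behavior). The mocked test breaks because `willReturn` still hands back a plain array while the new signature demands an `ItemCollection`:

```php
// Hypothetical minimal collection wrapper; the real one would offer
// richer behavior, but map() is what the calculator needs.
final class ItemCollection
{
    /** @var array */
    private $items;

    public function __construct(array $items)
    {
        $this->items = $items;
    }

    public function map(callable $fn): array
    {
        return array_map($fn, $this->items);
    }
}
```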

Has the calculation of the total changed? Not at all. We should expect the same total when given the same items.

Should the test fail? No, since the calculation has not changed.

Does the test fail? Definitely.

Why? We tested implementation instead of behavior.

Extrapolating to the common scenario of a reasonably big codebase, we can end up with lots of red tests after a refactor that, by definition, should be transparent.

Testing implementation is a form of coupling, something we should avoid as much as possible. Our previous test was tightly coupled to all the classes involved in the price calculation, so changes to them cause unrelated test failures.

Having to fix a lot of unrelated tests with every change erodes trust in the tests and in the testing process. Worse, if the fix consists of changing the mock expectations to fit the new method signature or return type, you will have exactly the same problem with the next change to that piece of code.

Luckily for us, there are tools that prevent us from falling into the trap of testing implementation.

Test first, implement after


The headline

Testing first helps to produce better tests and implementations

Writing the tests after the implementation will inadvertently push you to write them coupled to what you just coded. Your mind knows what you typed a moment ago, and it will trick you into mocking those two dependencies and returning the exact result you had in mind while coding, getting a green test quickly, and moving on to the next thing on the list.

We just saw how undesirable this kind of test is. After a few sessions of fixing a bunch of tests broken by changes in code unrelated to them, we can start to wonder whether this whole testing thing makes any sense at all.

In my experience, writing tests before the production code is the best way to avoid obnoxious brittle tests tied to the implementation. By testing first, you are forced to think about the behavior before the implementation. You will also focus on solving the problem at hand instead of jumping from one implementation detail to another and losing track.

In the example shown before, the thought process when implementing first could be something like this: "First, I need the items. I can grab them from this repository given the basket id... we don't have this kind of finder yet, let's implement it (...); ok, now I grab the items and... wait, if there are no items the repository should throw an exception... doing that is not allowed (...) ok, the repository throws the exception, and iterating over the items and calling this getter I can just accumulate the total in this variable... wait, I can use a mapper instead, yeah, way cleaner. Now, let's write the test... right after I remove the variable that is no longer used."

When testing first, it is easier to focus on the behavior: "Given two items, each worth 500, the total should be 1000".

You may end up with a very similar implementation, but with a far more decoupled test that is more resilient to refactors. It will look something like the following:

public function testCheckoutTotalCalculation(): void
{
    $this->repository->add($this->oneItemWorth500);
    $this->repository->add($this->otherItemWorth500);

    $checkoutTotalCalculator = new Calculator($this->repository);

    $basket          = new Basket(self::BASKET_ID);
    $calculatedTotal = $checkoutTotalCalculator->calculateTotal($basket);

    $expected = 1000;
    self::assertEquals($expected, $calculatedTotal);
}
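A test like this can be backed by an in-memory fake repository. The following is a sketch under the assumption that the calculator only needs a `fetch()` method returning the stored items; the interface and class names are illustrative:

```php
// Hypothetical contract shared by the real and the fake repository.
interface CheckoutItemsRepository
{
    public function fetch(int $basketId): array;
}

// In-memory fake: it fulfils the same contract as the database-backed
// implementation without touching any infrastructure.
final class InMemoryCheckoutItemsRepository implements CheckoutItemsRepository
{
    /** @var array */
    private $items = [];

    public function add($item): void
    {
        $this->items[] = $item;
    }

    public function fetch(int $basketId): array
    {
        return $this->items;
    }
}
```

Because the fake only honors the contract, renaming internals or changing the real repository's wiring does not break tests that use it.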

When talking about testing first it is kind of inevitable to talk about Test Driven Development (TDD). I have two comments about it.

First, keep in mind TDD is not the only test-first methodology out there. You can test first without following the red-green-refactor loop.

Second, TDD has gathered an increasing number of detractors lately. I think TDD is an amazing development tool, and I strongly believe the main arguments against it derive from the antipatterns covered in this article, plus from taking educational examples as strict rules.

There is plenty of literature on the web explaining TDD with simple cases that have little in common with what you usually deal with when writing actual software. If you take these examples and apply the same patterns without thinking critically about how well they fit your current code, you can easily find yourself trapped in poor tests.

Let's move on to two antipatterns strongly related to testing implementation.

Avoid the mock-aggedon


The headline:

There is no need to mock all dependencies

A flaw of the content written about testing on the internet is how zealously it suggests using mocks. Plenty of articles out there encourage mocking all dependencies to isolate the unit you are testing. I see two problems here. First, the mock-all-the-things policy. Second, the implicit choice of a single class as the unit under test (hence the over-mocking). As the latter is the next topic to discuss, let's start with the mocks.

Test doubles are a crucial part of tests at the unit and integration levels. You should use them to narrow the reach of the exercised code to only as far as the unit being tested requires. Let's illustrate this with examples:

  • When you are unit testing the user registration use case, you will mock the database access, because it is not the behavior you are testing.
  • When you are integration testing the persistence of the user registration use case, you will mock any access to external systems besides the database (e.g. the Facebook integration to get the user email).
  • When you are integration testing the Facebook communication in the user registration use case, you may mock the database access*.

* It might make sense not to mock the persistence access in a case like this one.

However, a big part of the literature out there encourages mocking all the collaborators of the class you are testing. This is another root of brittle test suites that will eventually cause frustration during refactoring.

Consider the following class:

class InvoiceProcessor
{
    private RateRepository $rateRepository;
    private InvoiceRepository $invoiceRepository;
    private MoneyConversor $conversor;
    private MoneyFactory $moneyFactory;

    public function process(InvoiceCollection $invoices): void {}
}


Besides some suspicious design decisions taken for the sake of making up an example, what should we mock here when unit testing the processing of invoices?

Assuming the repositories access the database to get the current exchange rates and the invoices, we should mock them. Whether that interaction behaves as expected should be tested elsewhere. Here we are focusing on the behavior of invoice processing, and we don't care how any persistence mechanism manages to retrieve the data. This way, we keep our unit test isolated, fast, and cheap to execute.

Now, assuming MoneyConversor just takes some numerical rate and applies it to a money quantity to convert it, should we mock it?

What about MoneyFactory, which just takes a couple of scalar values for a quantity and a currency to build a Money object? And what about the InvoiceCollection passed as parameter?

If they do not go outside the boundaries of what we are testing, and apparently they do not, we should avoid mocking them.

Why? If we mocked all of them, we would be testing implementation and unnecessarily coupling the unit under test to the concrete classes used to accomplish its expected behavior. Any refactor in an all-mocked scenario results in tests failing due to changes in implementation details. This is not desirable.
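To make the distinction concrete, here is a sketch of what such "cheap" collaborators might look like (simplified, hypothetical versions of MoneyFactory and MoneyConversor). They perform plain computation with no external dependencies, so passing real instances to InvoiceProcessor in a test costs nothing and couples the test to nothing:

```php
// Hypothetical, simplified collaborators: pure computation, no
// external dependencies, so there is no reason to mock them.
final class MoneyFactory
{
    public function make(int $amount, string $currency): array
    {
        return ['amount' => $amount, 'currency' => $currency];
    }
}

final class MoneyConversor
{
    public function convert(array $money, float $rate): array
    {
        return [
            'amount'   => (int) round($money['amount'] * $rate),
            'currency' => $money['currency'],
        ];
    }
}
```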

When we code a test, we must think of our unit under test as a black box. We know the expected outcome for a certain input. The transformation of one into the other is the behavior we expect. How the code implements this behavior is something we should not care about when writing the test. It is inside the black box, out of sight. We are forced to know something about the internals of the black box only when one of its parts communicates with something (a database, a message broker, the clock, etc.) that we cannot control or afford to use. Those parts are the ones we need to mock, and no more than that.

This leads us to two questions: how do we define the limits of the black box? And what is the appropriate unit to test?

Choose the test unit wisely


The headline:

Do not create tests for all your classes but create tests for all your units

But what is a unit? The answer is... wait for it... it depends.

However, there is at least one concrete answer to what is not the appropriate unit to test: every single class. Always following a 1:1 relation between classes and tests is another way of coupling tests to implementation. Sometimes we will have tests matching an exact class, but not because of some rule carved in stone; just because it makes sense.

This 1:1 rule is somewhat encouraged by the literature on the web because, in a simple example without context, it is a valid approach. In a more complex application, we will face several situations where it is not.

Consider the scenario of a class with a big public method, fully and well tested: the tests cover the behavior, not the implementation. Not so uncommon when using a test-first approach (wink, wink).

The method is too big, so we refactor parts of it into other classes. A good practice. Now, should we unit test the new, smaller classes? Aren't they already covered by the tests for the initial big class?

We have two options: repeat the tests we already have, narrowed to the smaller classes, or mock the small classes in the initial class's test. Both options will make the tests brittle, coupling them to the implementation and making further refactors harder.

Another possible scenario is our InvoiceProcessor class using an Amount value object. This Amount class is used in other parts of the application, maybe covering some logic not needed in InvoiceProcessor (e.g. we need addition of amounts, but not subtraction, when processing invoices). In this case, it makes sense to provide tests for this single class.
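A value object like that could look like this sketch (hypothetical; only the addition needed here is shown). Since it is reused across the application, it earns its own unit tests:

```php
// Hypothetical Amount value object: reused across the application,
// so it deserves tests of its own, independent of InvoiceProcessor.
final class Amount
{
    private int $value;

    public function __construct(int $value)
    {
        $this->value = $value;
    }

    public function add(Amount $other): Amount
    {
        return new Amount($this->value + $other->value);
    }

    public function value(): int
    {
        return $this->value;
    }
}
```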

Deciding what is a unit in each case can be tricky. A clean architecture helps a lot.

Recently, I realized how handy it can be to test the domain logic by establishing the units to test at the use case level. This concrete application follows a layered architecture where each use case has a handler in the application layer. The handlers are the clients of the domain, invoking the appropriate domain methods to accomplish the use case. I set most of the units to test at these handlers, so the domain logic is tested through them and there are no tests for concrete domain entities. This way, I can perform big refactors in the domain with the confidence that the application will behave as expected as long as the handler tests are green.

However, the previous example may not suit other scenarios. Analyze each situation and choose accordingly.

Summarizing

The best summary is the list of headlines:

  • Test behavior, not implementation.
  • Testing first helps to produce better tests and implementations.
  • There is no need to mock all dependencies.
  • Do not create tests for all your classes, create tests for all your units.

It has been really interesting to write this post series. I hope you liked it as much as I did. Don't hesitate to leave any feedback in the comments!


Top comments (2)

Agustin Gomes

I'm relatively new to writing tests, but the approach you suggest for not mocking the repository means a database needs to be up in order to run the unit tests, which kind of defeats the purpose of testing in isolation. Am I missing some details about how you do that test?

Carlos Gándara

Hi Agustin, thanks for your comment.

You are correct: when doing unit tests, the access point to the database should be mocked. Database interaction is outside the boundary of the unit we are testing.

The suggestion is not to mock it when doing integration tests that exercise this communication.