
Bohdan Stupak

Originally published at wkalmar.github.io

It's not about how you inject your services, it's about how you test them

A lot has been written about the value of unit testing, and yet many developers have witnessed codebases where unit tests are too brittle or rarely discover actual defects in the software. Some have also questioned the default architectural style that is supposed to make code testable. These are the reasons why some developers openly question unit testing while others just silently sabotage the process of writing unit tests.

In this article I offer my take on the aforementioned issues.

You can check out the code provided in this repository. During the course of the article I present two different designs which reside in two different branches, so feel free to explore them both.

Setup

Let us take a look at a simple controller with a service injected. Although I use the term "controller", I don't use any framework here for the sake of simplicity. Still, you might imagine that we're talking about one of the popular MVC frameworks.

public class ItemService
{
    public string Serialize(Item input)
    {
        switch (input.Type)
        {
            case ItemType.String:
                return input.Value;
            case ItemType.Geo:
                return SerializeGeo(input.Value);
            case ItemType.Range:
                return SerializeRange(input.Value);
            default:
                throw new ArgumentOutOfRangeException(nameof(input.Type));
        }
    }

    private string SerializeRange(string value)
    {
        var items = value.Split(new[] { '-', ' ' }, StringSplitOptions.RemoveEmptyEntries);
        if (items.Length != 2)
        {
            throw new ArgumentException(nameof(value));
        }
        return $"gte:{items[0]},lte:{items[1]}";
    }

    private string SerializeGeo(string value)
    {
        var items = value.Split(new[] { ',', ' ' }, StringSplitOptions.RemoveEmptyEntries);
        if (items.Length != 2)
        {
            throw new ArgumentException(nameof(value));
        }
        return $"lat:{items[0]},lon:{items[1]}";
    }
}

public class ItemController
{
    private readonly ItemService _serializer;

    public ItemController(ItemService serializer)
    {
        _serializer = serializer;
    }

    public string Process(Item item)
    {
        return $"Output is: {_serializer.Serialize(item)}";
    }
}

Now let's cover this code with tests.

public class ItemsServiceTests
{
    private ItemController CreateSut()
    {
        return new ItemController(new ItemService());
    }

    public static IEnumerable<object[]> SerializeTestData => new List<object[]>
    {
        new object [] { new Item
        {
            Type = ItemType.String,
            Value = "test value"
        }, "Output is: test value" },
        new object [] { new Item
        {
            Type = ItemType.Geo,
            Value = "45,54"
        }, "Output is: lat:45,lon:54" },
        new object [] { new Item
        {
            Type = ItemType.Range,
            Value = "45-54"
        }, "Output is: gte:45,lte:54" },

    };

    [Theory]
    [MemberData(nameof(SerializeTestData))]
    public void Serialize(Item item, string expectedResult)
    {
        //arrange
        var sut = CreateSut();

        //act
        var actualResult = sut.Process(item);

        //assert
        Assert.Equal(expectedResult, actualResult);
    }
}

So is the controller written in a testable fashion? Sure, we've covered it with tests completely! Can we improve it somehow? I don't think so. But let us examine some options found in codebases that I've encountered myself.

Introducing an interface

Some developers striving for loose coupling may object that injecting a concrete implementation into the controller is a violation of the SOLID principles, namely the dependency inversion principle. So let's adhere to these principles: introduce an interface and inject it into the controller.

public interface IItemService
{
    string Serialize(Item input);
}

public class ItemController
{
    private readonly IItemService _serializer;

    public ItemController(IItemService serializer)
    {
        _serializer = serializer;
    }

    public string Process(Item item)
    {
        return $"Output is: {_serializer.Serialize(item)}";
    }
}
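
Note that for this to compile, ItemService itself now implements the new interface; only the class declaration changes:

public class ItemService : IItemService
{
    // Serialize, SerializeRange and SerializeGeo stay exactly as in the first listing
}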

Now, given that we don't change our tests, we see them still passing. This means that we've performed a refactoring and our test suite assured us that we didn't break anything. Exactly why we write our unit tests!

Did adhering to SOLID somehow improve our code in this particular case? I don't think so. It was already concise and testable. Did it make it worse? Some of us (myself included) who believe that code is a liability, not an asset, think so. But is it critical? Frankly speaking, even when I'm in charge of the process, if my team feels that strictly conforming to the laws of SOLID has value, I'd rather follow the team's will than try to break it.

Side note: abstracting away volatile dependencies

Critical readers may say that I'm battling a strawman here, and they might be right. While there is not much sense in abstracting away the dependency behind an interface in the example I've provided, this is otherwise quite a useful technique: the real benefit to testability comes when you abstract away volatile dependencies. By "volatile dependencies" I mean those that produce observable side effects (databases, email providers, etc.). Using such dependencies directly in your test suite can make your tests unstable since they rely on external resources. So it makes perfect sense to replace them with a test double that implements the injected interface.
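
A minimal sketch of such a test double (the IEmailSender, SignupService and FakeEmailSender names are hypothetical and not taken from the sample repository):

using System.Collections.Generic;

// A volatile dependency: sending e-mail is an observable side effect.
public interface IEmailSender
{
    void Send(string to, string body);
}

// Production code depends only on the abstraction.
public class SignupService
{
    private readonly IEmailSender _emailSender;

    public SignupService(IEmailSender emailSender)
    {
        _emailSender = emailSender;
    }

    public void Register(string email)
    {
        // ...persist the user, then notify them
        _emailSender.Send(email, "Welcome!");
    }
}

// Test double: records the interaction instead of talking to a real SMTP server.
public class FakeEmailSender : IEmailSender
{
    public List<(string To, string Body)> Sent { get; } = new();

    public void Send(string to, string body) => Sent.Add((to, body));
}

A test can now construct SignupService with a FakeEmailSender and assert on the observable outcome (a welcome message was recorded) without any network access.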

However, other techniques are possible. One of them is extracting such side-effectful interactions into separate modules while unit testing the pure logic. I won't dive into much detail on that matter since an excellent explanation of this technique already exists.

It's worth noting that some frameworks come with a built-in option to abstract away volatile dependencies that is not based on interfaces. One such example is the EF Core InMemory provider.
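
A rough sketch of that option (the StoredItem entity and ItemsDbContext below are hypothetical and not part of the sample repository; the provider ships in the Microsoft.EntityFrameworkCore.InMemory package):

using Microsoft.EntityFrameworkCore;

public class StoredItem
{
    public int Id { get; set; }          // EF Core requires a key
    public string Value { get; set; } = "";
}

public class ItemsDbContext : DbContext
{
    public ItemsDbContext(DbContextOptions<ItemsDbContext> options) : base(options) { }
    public DbSet<StoredItem> Items => Set<StoredItem>();
}

// Inside a test: the in-memory provider replaces the real database engine purely through configuration.
var options = new DbContextOptionsBuilder<ItemsDbContext>()
    .UseInMemoryDatabase("ItemsTestDb")
    .Options;

using var context = new ItemsDbContext(options);
context.Items.Add(new StoredItem { Value = "test value" });
context.SaveChanges();

Assert.Single(context.Items);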

Refactoring(?) unit-tests

So, since both designs are quite fine, what is the point? As some of you might have guessed from the article's title, our attention will be devoted to the unit test suite. As the name implies, unit tests are designed to test separate units, in contrast to integration tests, which test multiple units in integration.

The thinking I find in many codebases is that the natural unit of code is the class, so the tests from the example are refactored as follows:

public class ItemControllerTests
{
    private ItemController CreateSut()
    {
        var itemServiceMock = new Mock<IItemService>(MockBehavior.Strict);
        itemServiceMock.Setup(e => e.Serialize(It.IsAny<Item>())).Returns("serialized");
        return new ItemController(itemServiceMock.Object);
    }

    [Fact]
    public void ProcessWrapsSerializedOutput()
    {
        //arrange
        var sut = CreateSut();

        //act
        var res = sut.Process(new Item { });

        //assert
        Assert.Equal("Output is: serialized", res);
    }
}

public class ItemsServiceTests
{
    private ItemService CreateSut()
    {
        return new ItemService();
    }

    public static IEnumerable<object[]> SerializeTestData => new List<object[]>
    {
        new object [] { new Item
        {
            Type = ItemType.String,
            Value = "test value"
        }, "test value" },
        new object [] { new Item
        {
            Type = ItemType.Geo,
            Value = "45,54"
        }, "lat:45,lon:54" },
        new object [] { new Item
        {
            Type = ItemType.Range,
            Value = "45-54"
        }, "gte:45,lte:54" },

    };

    [Theory]
    [MemberData(nameof(SerializeTestData))]
    public void Serialize(Item item, string expectedResult)
    {
        //arrange
        var sut = CreateSut();

        //act
        var actualResult = sut.Serialize(item);

        //assert
        Assert.Equal(expectedResult, actualResult);
    }
}

Now we test ItemService in isolation and also check that the correct method of ItemService is called inside ItemController. Is this design actually better? Let's take a look.

Evolution of design

You may have noticed that some of our business logic (if you can apply this term to such an oversimplified example), namely wrapping the serialized item with supplementary text, resides inside ItemController. Let's say we want to follow the thin controllers principle and extract this code into the service.

public string Process(Item item)
{
    var serializedOutput = _serializer.Serialize(item);
    return _serializer.Wrap(serializedOutput);
}
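
To support this, the Wrap method moves onto the service and its interface. A sketch of the change (the body simply preserves the original "Output is: " prefix):

public interface IItemService
{
    string Serialize(Item input);
    string Wrap(string serializedOutput);
}

public class ItemService : IItemService
{
    // ...Serialize and its helpers stay as before

    public string Wrap(string serializedOutput)
    {
        return $"Output is: {serializedOutput}";
    }
}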

Since we've refactored our code, let's run the test suite to check that we didn't break anything.

What happened?

Moq.MockException : IItemService.Wrap("serialized") invocation failed with mock behavior Strict.
All invocations on the mock must have a corresponding setup.
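
To make the mock-based test green again we would have to mirror the new implementation detail in the arrange step, something along these lines:

var itemServiceMock = new Mock<IItemService>(MockBehavior.Strict);
itemServiceMock.Setup(e => e.Serialize(It.IsAny<Item>())).Returns("serialized");
itemServiceMock.Setup(e => e.Wrap("serialized")).Returns("Output is: serialized");
return new ItemController(itemServiceMock.Object);

The observable behavior of the application hasn't changed at all, yet the test has to be rewritten every time calls are reshuffled between the controller and the service.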

On the contrary, if we stick to our original testing strategy of testing multiple classes at once, our test suite stays green.

It turns out that the testing style from the "Refactoring(?) unit-tests" section represents a case of overspecified software. Instead of focusing on verifying behavior, we're verifying implementation details, which are subject to change. Such tests don't provide any additional confidence in our code, yet they are brittle, which causes dissatisfaction with unit tests in general. Think about it this way: would your stakeholders ever care whether you call the method of your ItemService two times or exactly once?

That leads us to the conclusion that when we speak about testing a unit, we should think about a unit of behavior, not a unit of code! Such tests allow us to focus on the important aspects of the system under test, thus increasing the value of our test suite.

Side note: respecting SRP

Critical readers may observe that after we introduced another method to ItemService, it started to violate the single responsibility principle. In this case it was done solely to illustrate a case of brittle tests, but generally speaking you should always take care when working with classes that have the suffix Service or Manager in their names, as this is the first flag indicating that the responsibility of the class is not defined clearly enough.
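
For illustration only (this split is not part of the sample repository), a design with clearer responsibilities might look like this:

public class ItemSerializer
{
    public string Serialize(Item input)
    {
        // same switch over ItemType as in the original ItemService
        return input.Type switch
        {
            ItemType.String => input.Value,
            // ...Geo and Range cases as before
            _ => throw new ArgumentOutOfRangeException(nameof(input.Type))
        };
    }
}

public class OutputFormatter
{
    public string Wrap(string serializedOutput) => $"Output is: {serializedOutput}";
}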

Conclusion

The main goal of this article is to show that we as engineers should understand the benefits and shortcomings of the principles we apply instead of just blindly following them. For that reason, we've taken a tour of service composition and of harnessing a test suite. As we have seen, following principles without understanding what they are about leads us to brittle design. And while our first reaction may be to question the principles themselves, the main thing we really have to ask ourselves is whether we're applying them correctly.
