DEV Community

sourcehawk

Improving your skills by writing tests

In this article I go into detail on my approach to testing and the thought process behind each step. I try to emphasize not only the general consensus around each type of test, but also how testing can benefit you in becoming a better programmer and teach you to write better code.

Hopefully, by the end of this article, you will be more motivated to write good tests, and understand that tests are about much more than just validating your code's logic.

Sound interesting? Let's dive in.

What to test

First off, let's talk about what you should test, because let's be real: more often than not you probably write tests as an afterthought, either to meet someone else's requirements or because your CI pipeline failed when coverage dropped by a marginal percentage.

Generally, any public-facing code should be tested. This includes functions, classes, modules, and interfaces. Tests should be first-class citizens in your codebase. They should be written alongside the code they test, and they should be maintained and updated as the code changes. They should run automatically as part of the development and build process, and they should run frequently to catch bugs early.

The idea that "I don't have time to write tests" is a fallacy and a short-sighted view. Writing tests saves time in the long run by catching bugs early, preventing regressions, and making it easier to refactor and maintain code. Writing tests also forces you to think critically about your code and its behavior, which leads to better code quality.

You might not agree with me yet on this point, but hopefully at the end you will be of a similar opinion.

TLDR

Testing public facing code not only ensures its reliability but also sharpens your ability to identify and isolate important code paths. This practice encourages you to think deeply about how your code is structured and how it will be used, improving your overall coding skills.

Considering testability while writing software

Having identified what to test, it's essential to ensure that your code is designed with testability in mind.

If writing your tests is complex or difficult, it is most likely because the software was not written with testability in mind, and frankly, it is probably bad code at that point.

Use principles like dependency injection, modular design, and clear interfaces. Write clean, maintainable, and loosely coupled code to make it easier to test. Use tests as a tool to improve the quality and design of your code.

Testing can be hard, yes, but the fact that you had to consider how testable your code is while writing it, already makes you improve your code by a mile.
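As a tiny illustration of dependency injection improving testability (the function names here are my own, purely illustrative), compare a function that fetches its dependency internally with one that receives it:

```python
from datetime import datetime

# Hard to test: the current time is fetched inside the function,
# so a test cannot control it.
def greeting_hardcoded() -> str:
    return "Good morning" if datetime.now().hour < 12 else "Good afternoon"

# Easy to test: the current time is injected,
# so a test can pass any fixed time it likes.
def greeting(now: datetime) -> str:
    return "Good morning" if now.hour < 12 else "Good afternoon"
```

In production you would call greeting(datetime.now()); in a test you pass a fixed datetime and get a deterministic result, no clock mocking needed.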

TLDR

By prioritizing testability, you inherently write cleaner, more modular code. This not only makes testing easier but also enhances your ability to design code that is flexible, maintainable, and easier to understand.

Unit testing

Having considered testability from the start, you have laid a solid foundation for effective unit testing, which is the next crucial step in ensuring code reliability.

The goal of unit tests

Unit tests aim to validate individual components or functions in isolation to ensure they work correctly. They are the first line of defense against bugs and help maintain code quality. These are also the first tests you should implement, before integrating the new functionality into existing code.

These are often the tests that save you the hours of debugging a hard edge case later on.

More often than not, by the time I have written my unit tests, more than 95% of bugs have been fixed and I am confident that the implemented behavior works as expected. Writing unit tests requires you to think critically about the internal logic and usage of your functions.

Many tend to skip them because they change often and are tedious to implement. However, having the tests in place forces you, when you change your code, to analyze the effect the change has on the rest of your code: validating that the function signature is still accurate (the docstring, argument names, and outputs), or checking which code uses the function and needs to be adjusted. These things have a much more widespread effect on the quality of your codebase than you might realize, and they are only a part of what unit tests provide.

Here is an example of questions I might ask myself while writing unit tests. All of these not only help you write the tests, but also improve the quality of your code.

  • What are the valid inputs for this function?

    • What are the expected outputs for these valid inputs?
    • Have I documented these expected outputs?
    • Does my function return these expected outputs?
  • What are invalid inputs for this function?

    • What are the expected outputs for these invalid inputs?
    • Will/should the function raise an exception?
    • Have I documented this behavior?
  • Do I need to re-examine logic while debugging tests to understand what the code does? Any part of the code that is not immediately clear from reading the logic should be commented.

TLDR

Writing unit tests demands a deep understanding of your code’s internal workings, which naturally improves your ability to write precise, efficient, and error-resistant code.

How to write unit tests

In all of my code examples, I will be using Python, as it is generally easy to understand even if you are not familiar with it. In this section I go into a bit of detail, mainly because I want to highlight that even though your code may do what you say it does, that doesn't mean it is entirely correct.

Let's say I have implemented a function that adds two numbers together as follows.

def add(a, b):
    """
    Adds two integers.

    :returns int: The sum of the two numbers.
    :raises TypeError: If any of the inputs are not integers.
    """
    return a + b

Looks good right?

Let's write a unit test for the add(a, b) function's success case with 3 sets of inputs and their corresponding expected outputs.

import pytest

@pytest.mark.parametrize(("a", "b", "expected"), [
    (1, 2, 3),
    (-1, 1, 0),
    (0, 0, 0),
])
def test_add(a, b, expected):
    assert add(a, b) == expected

Now, without having written a failure case, are we sure that the function works as expected? Do you see any immediate problems with the add(a, b) function above?

Try to think of an edge case that might break the function before reading on. If you can't find the answer right away, try analyzing the function using the questions listed in the section above.

If you haven't figured it out already, let's write a unit test for the failure cases according to the function's description, testing all combinations of a and b given by the parametrization of the test below.

@pytest.mark.parametrize("a", [1.5, "1", None])
@pytest.mark.parametrize("b", [None, "2", 5.5])
def test_add_raises_type_error(a, b):
    with pytest.raises(TypeError):
        add(a, b)

Do you see the problem now?

One combination of the parametrization above will fail the test, namely a=1.5 and b=5.5. According to our function's description, the function should raise a TypeError if any of the inputs are not integers. This is a very simple example, but it illustrates both the power of parametrization and the point that you should always write tests for the success and failure cases of your functions, covering as many edge cases as possible along the way. This is what makes your unit tests valuable.

To fix the function, you could add a check for the input types and raise a TypeError if the inputs are not integers, or you could allow both integers and floats as inputs, depending on the requirements of the function. Renaming the function to add_integers would also be advisable.
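As a sketch of the first option (using the suggested add_integers name, and assuming the integers-only requirement stands), the fix could look like this:

```python
def add_integers(a: int, b: int) -> int:
    """
    Adds two integers.

    :returns int: The sum of the two numbers.
    :raises TypeError: If any of the inputs are not integers.
    """
    # Note: isinstance(True, int) is True in Python, so booleans
    # would pass this check -- yet another edge case worth a test.
    if not isinstance(a, int) or not isinstance(b, int):
        raise TypeError("Both inputs must be integers")
    return a + b
```

With this version, the behavior finally matches the docstring.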

Trying to test these kinds of edge cases using only integration tests is more often than not much harder. You are no longer thinking of the possible edge cases of specific functions, as they are out of scope for your thought process at that point.

Integration testing

Once you have ensured that individual components work correctly through unit testing, the next step is to verify that these components work together seamlessly through integration testing.

The goal of integration testing

Integration tests ensure that different modules or services work together as expected. They catch issues that arise from the interaction between components. Just as importantly, writing integration tests makes you think critically about how your code exposes its functionality and how your code is structured.

Integration testing is a crucial part of writing maintainable code because at no other stage are you forced to think critically about the design of your code at an abstract level, unless you've trained yourself to do so (as you should!).

Once I've started with integration testing, I may realize that I got some things wrong on a more abstract level than what the unit tests cover. I might need to restructure what I implemented to make it more coherent or easier to use. I might even realize that I don't need a specific function at all, or that the idea behind my implementation was flawed. This is a good thing, as it allows me to catch these issues early on, before they grow into bigger problems such as spaghetti code or bad design, which become much harder to address once the code has been deeply integrated into existing software.

At some point, once you've written enough integration tests, you will naturally start questioning the abstract details of your designs, and by then you are already a much better programmer.

Here's an example of questions I might ask myself while writing integration tests which not only help with figuring out what to test, but also improving the design of the code.

  • Am I missing any functionality? Ensure the end user can do everything they intend to with what I've implemented.
  • Does this function need to be exposed to the end user? Decide whether functions should be public or private.
  • Am I following best practices? Look for opportunities to refactor or apply code patterns that are more coherent.
  • Do I myself need to look up how to use my own functions while writing integration tests? If yes, then the function naming probably needs to be more intuitive. If I wrote the function, I should know how to use it without reading the docstring. If it is not clear to me, it will definitely not be clear to anyone else.
  • Will I myself need to read my code later on to understand what it does? If yes, then the function signature (docstring and argument names) is not clear enough. I wrote the function, I am responsible for writing a coherent function signature.
  • Are the function inputs practical for the intended use case of the function?
  • Are the outputs of my exposed functions practical to the end user?
  • Are the exceptions raised by my functions easy to interact with?

TLDR

By writing integration tests, you develop a stronger intuition for how different components should interact. This practice reinforces your ability to design robust, cohesive systems and improves your ability to think abstractly about software architecture.

What does an integration test look like?

This is a pretty simple example of an integration test. In this example, we are testing a component that interacts with a real database.

def test_integration():
    # Create a real database and insert a user for the test
    real_database = RealDatabase()
    real_database.insert_user(User(id=1, name='Alice'))

    # A component which interacts with the database
    component_to_test = Component(database=real_database)

    # The component is tested with a real database
    user = component_to_test.interact_with_database(user_id=1)
    assert user.id == 1
    assert user.name == 'Alice'

Note that an external system such as a database is not necessary for a test to be considered an integration test. You can usually think of your code as a hierarchical structure, where the top level is the end user's interface and the bottom level is the smallest unit of code. An integration test verifies the interaction between two or more components at the same level of this hierarchy, between two or more levels, or both. That can be an interface interacting with another interface at a lower or the same level, or a function interacting with a database, file system, network, and so on. An integration test can also mock specific levels of the hierarchy to test the interaction between the levels that are not mocked. This is illustrated in the diagrams below.

code hierarchy in integration tests
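To make the "mock only some levels" idea concrete, here is a hypothetical sketch (UserService, UserRepository, and their methods are invented names): only the bottom level, the database, is mocked, while the interaction between the two levels above it is exercised for real.

```python
from unittest.mock import Mock

class UserRepository:
    """Middle level: translates lookups into database calls."""
    def __init__(self, database):
        self.database = database

    def find_name(self, user_id: int) -> str:
        return self.database.fetch_user(user_id)["name"]

class UserService:
    """Top level: business logic built on top of the repository."""
    def __init__(self, repository: UserRepository):
        self.repository = repository

    def greet(self, user_id: int) -> str:
        return f"Hello, {self.repository.find_name(user_id)}!"

def test_service_repository_integration():
    # Only the bottom level is mocked; the service <-> repository
    # interaction is still tested for real.
    database = Mock()
    database.fetch_user.return_value = {"id": 1, "name": "Alice"}

    service = UserService(UserRepository(database))
    assert service.greet(1) == "Hello, Alice!"
```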

What does an integration test NOT look like?

In a scenario where you mock out the entire interaction with other components, you are not writing an integration test but a unit test. This is a common misconception: you are only writing an integration test if you are testing the interaction between two or more components. If you are testing a single component in isolation, you are writing a unit test.

def test_is_not_integration():
    # A mock is used to simulate the database
    mock_database = MockDatabase()
    mock_database.interaction.return_value = User(id=1, name='Alice')

    # A component which interacts with the database
    component_to_test = Component(database=mock_database)

    # The component is tested in isolation as the database is mocked
    user = component_to_test.interact_with_database(user_id=1)
    assert user.id == 1
    assert user.name == 'Alice'

Hardware testing

While integration testing ensures that software components work well together, it's also crucial to test interactions with hardware components in environments where software interacts with physical devices.

This is a more niche form of testing which is not often talked about. Not all software engineers deal with hardware on a day-to-day basis, and as a result the topic gets less attention.

Some of the lessons in this chapter are very applicable to other areas of software development, as it deals with structuring code to accommodate tests, mocking more complex behaviors, and validating functional requirements with tests.

Goal of hardware testing

Hardware tests are a type of integration test that ensures that software interacts correctly with hardware components, validating functionality, performance, and reliability. A common form of hardware testing is HIL (Hardware-in-the-Loop) testing, where software is tested against a simulated hardware environment using real hardware, such as by the use of a test bench.

More often than not during development, these systems will be mocked with state machines or other forms of simulation with as close to real-world behavior as possible. The hardware tests close the gap between the software simulation and the real-world hardware, ensuring that the software behaves as expected when interacting with the hardware.

What you want to test here is not only that the direct interaction with the hardware works, but if possible, test that the assumptions made about the hardware, in the software mocks, behave the same as the real hardware. If you are able to validate the assumptions that your mocks make at the level where the software interacts directly with the hardware, you can more confidently say that the rest of your software will behave as expected when interacting with the real hardware.

TLDR

Hardware testing pushes you to think beyond just the software. It helps you understand how your code interacts with physical components, making you more aware of the practical, real-world impact of your programming.

Mocking hardware (an example of accounting for testability)

As an example, let's say you have a motor in your system. Initially, before we ever consider testing, we may have an implementation that looks something like this.

Initial motor interface implementation

For development purposes, and to enable us to test our system independently of a real motor, we implement a state machine that mocks the behavior of the real motor controller. Here's a state machine that simulates the states of a motor, take your time to analyze it.

Motor controller state machine diagram

In simple terms, the motor state machine:

  • can be turned on and off from any state
  • can be set into Driving mode from the On state when the speed is greater than zero
  • cannot go faster than 50 and cannot drive backwards (speed less than zero)
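For concreteness, here is one minimal hand-rolled way such a MotorStateMachine could look. The exact transition rules are my reading of the bullet points above, and in a real project you would likely use a state machine library instead:

```python
class MotorStateMachine:
    """
    Minimal mock of a motor's behavior: Off <-> On <-> Driving,
    with the speed clamped to the range [0, 50].
    """
    MAX_SPEED = 50

    def __init__(self):
        self.state = "Off"
        self.speed = 0

    def on(self):
        # Turning on from the On or Driving state changes nothing.
        if self.state == "Off":
            self.state = "On"

    def off(self):
        # The motor can be turned off from any state.
        self.state = "Off"
        self.speed = 0

    def set_speed(self, speed: int):
        # Clamp to the motor's physical limits.
        self.speed = max(0, min(speed, self.MAX_SPEED))
        # A positive speed puts a powered motor into Driving mode.
        if self.state != "Off":
            self.state = "Driving" if self.speed > 0 else "On"
```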

Along with this state machine, the code must be structured in such a way that the state machine mock can be swapped out with the real motor controller without changing the rest of the software. This is where the power of interfaces and dependency injection comes in. By using interfaces, we can define a common interface for the state machine and the real motor controller, allowing us to swap them out without changing the rest of the software.

Here is a diagram depicting the change in design required to accommodate an implementation which enables us to continue testing and using our software without the need for real hardware.

Accommodating for testability

Take your time to study the code example below together with the state machine diagram.

We create an abstract motor controller interface which both our real hardware and our mock hardware can implement.

from abc import ABC, abstractmethod

class MotorController(ABC):
    """
    A controller to interact with a motor.
    """
    @abstractmethod
    def set_speed(self, speed: int):
        raise NotImplementedError("Method must be implemented")

    @abstractmethod
    def get_speed(self) -> int:
        raise NotImplementedError("Method must be implemented")

    @abstractmethod
    def turn_on(self) -> None:
        raise NotImplementedError("Method must be implemented")

    @abstractmethod
    def turn_off(self) -> None:
        raise NotImplementedError("Method must be implemented")

    @abstractmethod
    def is_on(self) -> bool:
        raise NotImplementedError("Method must be implemented")

    @abstractmethod
    def is_driving(self) -> bool:
        raise NotImplementedError("Method must be implemented")

    @abstractmethod
    def is_off(self) -> bool:
        raise NotImplementedError("Method must be implemented")

We create our mock motor controller, which uses the state machine we created. (There are plenty of state machine libraries out there to actually implement the state machine; I leave that out of this example.)

class MockMotorController(MotorController):
    """
    A mocked implementation of a motor controller which works
    together with a state machine to mimic a real Motor.
    """
    def __init__(self):
        self.__state_machine = MotorStateMachine()

    def set_speed(self, speed: int):
        self.__state_machine.set_speed(speed) # trigger a transition

    def get_speed(self) -> int:
        return self.__state_machine.speed

    def turn_on(self) -> None:
        self.__state_machine.on() # trigger a transition

    def turn_off(self) -> None:
        self.__state_machine.off() # trigger a transition

    def is_on(self) -> bool:
        return self.__state_machine.state == 'On'

    def is_driving(self) -> bool:
        return self.__state_machine.state == 'Driving'

    def is_off(self) -> bool:
        return self.__state_machine.state == 'Off'

Lastly we implement our Motor interface, which is injected with a MotorController, allowing varying implementations of the interaction with the Motor.

class Motor:
    """
    Interface to interact with a motor.
    """
    def __init__(self, motor_controller: MotorController):
        self.__controller = motor_controller

    def set_speed(self, speed: int):
        self.__controller.set_speed(speed)

    def get_speed(self) -> int:
        return self.__controller.get_speed()

    def turn_on(self) -> None:
        self.__controller.turn_on()

    def turn_off(self) -> None:
        self.__controller.turn_off()

    @property
    def on(self) -> bool:
        return self.__controller.is_on()

    @property
    def driving(self) -> bool:
        return self.__controller.is_driving()

    @property
    def off(self) -> bool:
        return self.__controller.is_off()

To use the Motor with a mocked implementation of the motor controller, we instantiate the Motor by injecting a Mock into the interface.

mock_motor_controller = MockMotorController()
motor = Motor(motor_controller=mock_motor_controller)

The importance of validating mock assumptions

If you are able to validate that the mock behaves the same way as the real motor controller under the conditions you put it under, you can be highly confident that the rest of your software will behave as expected when interacting with the real motor controller.
In reality, you should even be able to use your mock hardware in system tests to validate the rest of your software, as the parity between the mock and the real hardware has been validated. Most remaining problems will be related to hardware reliability or defects, something you cannot prevent in software, only guard against, which is something that tests of functional requirements often address.

In the state machine we've mocked up in our example, we would at the very least want to validate the following assumptions against hardware by writing tests:

  • The motor can be turned off from the Driving state
  • When the speed is set higher than 50, the motor controller sets the speed to 50
  • When the speed is set lower than 0, the motor controller sets the speed to 0
  • Turning the motor on from the On state does not change the speed

Preferably we would validate that each of the transitions in the state machine behaves as expected with real hardware, but this is not always possible or practical.
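Such assumption checks can be written as parity tests that run against both the mock and the real controller. The sketch below inlines a trimmed-down mock controller so the example runs on its own; on a test bench you would append the real controller class to the list (the CONTROLLERS name is my own):

```python
import pytest

class MockMotorController:
    """Trimmed-down mock, just enough for these parity tests."""
    MAX_SPEED = 50

    def __init__(self):
        self._state, self._speed = "Off", 0

    def turn_on(self):
        if self._state == "Off":
            self._state = "On"

    def turn_off(self):
        self._state, self._speed = "Off", 0

    def set_speed(self, speed: int):
        self._speed = max(0, min(speed, self.MAX_SPEED))
        if self._state != "Off":
            self._state = "Driving" if self._speed > 0 else "On"

    def get_speed(self) -> int:
        return self._speed

    def is_driving(self) -> bool:
        return self._state == "Driving"

    def is_off(self) -> bool:
        return self._state == "Off"

# On a test bench, the real controller class would be appended here.
CONTROLLERS = [MockMotorController]

@pytest.mark.parametrize("controller_factory", CONTROLLERS)
def test_speed_is_clamped_to_maximum(controller_factory):
    controller = controller_factory()
    controller.turn_on()
    controller.set_speed(75)
    # Assumption under test: speeds above 50 are clamped to 50
    assert controller.get_speed() == 50

@pytest.mark.parametrize("controller_factory", CONTROLLERS)
def test_can_turn_off_while_driving(controller_factory):
    controller = controller_factory()
    controller.turn_on()
    controller.set_speed(10)
    assert controller.is_driving()
    # Assumption under test: Off is reachable from Driving
    controller.turn_off()
    assert controller.is_off()
    assert controller.get_speed() == 0
```

Running the same tests against both implementations is exactly what validates the parity between the mock and the hardware.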

How to write hardware tests

Hardware tests are simply integration tests that validate the interaction between software and hardware components. So the same principles apply. Continuing with our motor example, here's a simple test.

def test_motor():
    motor = Motor(motor_controller=RealMotorController())
    motor.turn_on()
    assert motor.on
    motor.set_speed(45)
    assert motor.get_speed() == 45
    motor.turn_off()
    assert motor.off
    assert motor.get_speed() == 0

Finding and testing functional requirements

When writing hardware tests (or any integration / system test for that matter), you should also carefully consider what the functional requirements for the component are, so that you can devise a strategy to validate them. More often than not in the realm of software, this part is overlooked or boils down to meeting specific latency goals on a service or usability requirements in a UI.

The problem, however, becomes exacerbated in environments where the product itself is an embedded system. Here functional requirements become a crucial part of the product's success. An example of such a functional requirement could be "the motor must be able to drive at top speed for no less than 5 minutes without running out of battery". Having this requirement now allows us to validate business requirements and goals with a simple (but long-running) test.

import time

def test_motor_can_drive_for_five_minutes_at_top_speed():
    motor = Motor(motor_controller=RealMotorController())
    motor.turn_on()
    motor.set_speed(50)
    start_time = time.time()

    # The motor must hold top speed for the full five minutes
    while time.time() - start_time < 5 * 60:
        assert motor.get_speed() == 50
        time.sleep(1)

    motor.turn_off()
    assert motor.off

With a few lines of code we have now saved ourselves a tedious manual testing process and are able to validate that the hardware meets our business goals.

System testing

Having tested individual units, their integration, and hardware interactions, the final step is to validate the entire system's behavior through comprehensive system testing.

The goal of system testing

System tests validate the entire system's behavior against the requirements. They are performed in an environment that closely mirrors production. If all of the other testing stages have been done correctly, system tests should be a breeze. If you have done your due diligence in writing unit tests, integration tests and hardware tests, you should be able to write system tests that are very high level and only test the most critical paths of your software. This is because you have already validated the behavior of your software at a lower level, and you are now only interested in validating that the software works as expected when all the components are put together.

System test diagram

TLDR

System testing allows you to see the big picture, validating that all components work together as intended. It reinforces your ability to think critically about end-to-end workflows and to anticipate and mitigate potential issues before they arise.

How to write system tests

Having already structured your code in a way that allows you to swap out components without changing the rest of the software, you can now easily swap out the mocks with real components or vice versa and simulate the behavior of the components in a controlled environment as a whole system. You could even use the same tests for your system against real components and mocks. See the example below.

import os
import time
import pytest


@pytest.mark.parametrize(
    'motor_controller', [
        pytest.param(
            MockMotorController(),
            marks=pytest.mark.skipif(
                os.environ.get("MOTOR_CONTROLLER_TYPE", "") != "MOCK",
                reason="Motor controller is not set to MOCK",
            ),
        ),
        pytest.param(
            T16SRMotorController(),
            marks=pytest.mark.skipif(
                os.environ.get("MOTOR_CONTROLLER_TYPE", "") != "T16SR",
                reason="Motor controller is not set to T16SR",
            ),
        ),
        pytest.param(
            T19LRMotorController(),
            marks=pytest.mark.skipif(
                os.environ.get("MOTOR_CONTROLLER_TYPE", "") != "T19LR",
                reason="Motor controller is not set to T19LR",
            ),
        ),
    ]
)
@pytest.mark.parametrize(
    'steering_controller', [
        pytest.param(
            MockSteeringController(),
            marks=pytest.mark.skipif(
                os.environ.get("STEERING_CONTROLLER_TYPE", "") != "MOCK",
                reason="Steering controller is not set to MOCK",
            ),
        ),
        pytest.param(
            AK310SteeringController(),
            marks=pytest.mark.skipif(
                os.environ.get("STEERING_CONTROLLER_TYPE", "") != "AK310",
                reason="Steering controller is not set to AK310",
            ),
        ),
        pytest.param(
            AK420SteeringController(),
            marks=pytest.mark.skipif(
                os.environ.get("STEERING_CONTROLLER_TYPE", "") != "AK420",
                reason="Steering controller is not set to AK420",
            ),
        ),
    ])
def test_system(motor_controller, steering_controller):
    """
    Tests the system with any combination of motor and steering
    controller that is not skipped by the parametrization 
    conditions
    """
    # Configure the components of the system
    motor = Motor(motor_controller=motor_controller)
    steering = Steering(steering_controller=steering_controller)

    # Set the goal of the system
    system = System(motor=motor, steering=steering)
    assert system.coordinates == (0, 0)
    system.drive_autopilot(
      target_coordinates=(100, -300), 
      timeout_seconds=60
    )

    # Wait for the system to achieve the goal
    while not system.arrived() and not system.timeout():
        time.sleep(1.0)

    # Validate that we've arrived at the specified goal
    assert system.coordinates == (100, -300)

Conclusion

I hope this article has provided you with a fresh perspective on the importance and benefits of writing tests. By focusing on various types of testing, from unit tests to integration, hardware, and system tests, we have explored how each layer contributes to a robust, maintainable codebase. Testing is not just a box to tick for meeting requirements or passing CI checks. It is a crucial part of software development that enhances code quality, ensures functionality, and ultimately makes you a better programmer.

Writing tests forces you to think critically about your code, leading to better design decisions and more reliable software. By integrating testing into your development process, you catch bugs early, prevent regressions, and make refactoring and maintenance significantly easier. Remember, good tests are about much more than just validating logic; they teach you to write better code.

Hopefully, this article has motivated you to prioritize testing and equipped you with practical insights to improve your approach.

Happy testing!
