Mohsen Bazmi

Writing High Quality Tests to Foster Abstractions to Evolve

In the intricate world of software design, distinguishing between stable and volatile components is essential. Our tests should rely on stable abstractions, yet the iterative nature of modeling often reveals instability in what was once deemed rock-solid. Modifying those abstractions can be risky, because the tests may not adequately safeguard them. This series of articles addresses that challenge by leveraging automated testing patterns and exploring strategies for crafting tests that make refactoring safer, despite the evolving nature of software design.

In this series, we will delve into various approaches to building maintainable tests and discuss the advantages and disadvantages of each. Our journey begins with an illustrative test case for a poorly designed system. Instead of immediately revamping the system's design, we will investigate various ways to adjust the test so that it safeguards our refactoring initiatives.

As we progress, we will discover techniques for optimizing the scope of the units under test. At the same time, we'll gain insight into the factors we need to control in order to safeguard the quality of our tests throughout the journey: smells to sniff out and goals to protect. The following image illustrates them.

Goals and smells to be aware of while refactoring to cover larger models


We'll start with a deliberately poorly designed model and a subpar test surrounding it, then investigate diverse approaches to tackling the issues.

To kick things off, let's consider a simple scenario: scheduling a doctor's appointment with a naive, unsophisticated domain model.
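
The model itself isn't shown in the article, so here is a minimal sketch consistent with the test that follows. The exact member shapes are assumptions; only the members the test touches are included.

public class Doctor
{
    public string Name { get; set; }
    public decimal HourlyRate { get; set; }
}

public class Patient
{
    public string Name { get; set; }
    public int Age { get; set; }
}

public class Appointment
{
    public Doctor Doctor { get; set; }
    public Patient Patient { get; set; }
    public DateTime Time { get; set; }
    public decimal Fee { get; set; }

    // Redundantly takes the doctor even though it is also set as a property;
    // this awkwardness is part of the deliberately poor design.
    public void Make(Doctor doctor, DateTime time, decimal fee)
    {
        Doctor = doctor;
        Time = time;
        Fee = fee;
    }
}

And here is the test that exercises it: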

public void Successful_Appointment()
{
    //Arrange

        //Load the doctor
        Doctor doctor = new Doctor();
        doctor.Name = "David Smith";
        doctor.HourlyRate = 80;


        //Load the patient
        Patient patient = new Patient();
        patient.Name = "Maria Garcia";
        patient.Age = 65;

    //Exercise
        var appointment = new Appointment();
        appointment.Doctor = doctor;
        appointment.Patient = patient;
        var time = DateTime.Now.AddDays(2);
        var fee = 80;

        appointment.Make(doctor, time, fee);

    //Assert
       Assert.Equal("David Smith", appointment.Doctor.Name);
       Assert.Equal("Maria Garcia", appointment.Patient.Name);
       Assert.Equal(DateTime.Now.AddDays(2), appointment.Time);
}


Despite the issues in the test code, it is clear that the domain model lacks a robust design. At this moment, however, we choose not to delve into those issues: our current domain knowledge tells us to defer further design decisions.

Now, the critical question arises: In this specific scenario, does the test effectively protect the model, allowing us to refactor it later?

“Whenever I do refactoring, the first step is always the same. I need to ensure I have a solid set of tests for that section of code. The tests are essential because even though I will follow refactorings structured to avoid most of the opportunities for introducing bugs, I'm still human and still make mistakes.”
— Martin Fowler

How confident can we be in refactoring the domain model? For instance, can I modify the model to be used as follows without breaking the test mentioned above?

Doctor doctor = new Doctor("David Smith", 80);
Patient patient = new Patient("Maria Garcia", 65);
patient.ScheduleAppointment(DateTime.Now.AddDays(3));

Clearly not. The smallest change to the interface will break the test, even if the system's behavior stays the same.

This problem with our test is known as Interface Sensitivity (XUnit Patterns).

Besides Interface Sensitivity, the test doesn't document the system's behavior transparently.

A quick step towards addressing both problems is to extract the test steps into separate methods. These methods can also be moved into a separate class (here named Appointment), and their names can express the Ubiquitous Language (DDD).

public void Successful_Appointment()
{
    appointments
        .Given(David_Smith_is_a_doctor())
          .And(Maria_Garcia_is_a_patient())
        .When(Maria_Garcia_makes_an_appointment_with_Dr_Smith())
        .Then(Maria_Garcia_should_have_an_appointment_with_Dr_Smith());
}

We extracted four methods.

  • David_Smith_is_a_doctor()
  • Maria_Garcia_is_a_patient()
  • Maria_Garcia_makes_an_appointment_with_Dr_Smith()
  • Maria_Garcia_should_have_an_appointment_with_Dr_Smith()

These are called Test Utility Methods (XUnit Patterns) and the class that contains them (Appointment) is called a Test Helper (XUnit Patterns).

The methods can be reused in multiple test cases.
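
The article focuses on the test itself, but to make the refactoring concrete, here is a minimal sketch of what the extracted Test Helper might look like. The method bodies simply wrap the Arrange/Act/Assert code from the original test; how the Given/When/Then chain consumes them depends on the BDD framework, which is not shown here.

public class Appointment // the Test Helper, not the domain entity
{
    Doctor doctor;
    Patient patient;
    // "Domain" is a hypothetical namespace, used only to disambiguate
    // the domain entity from this helper class.
    Domain.Appointment appointment = new Domain.Appointment();

    public void David_Smith_is_a_doctor()
    {
        doctor = new Doctor { Name = "David Smith", HourlyRate = 80 };
    }

    public void Maria_Garcia_is_a_patient()
    {
        patient = new Patient { Name = "Maria Garcia", Age = 65 };
    }

    public void Maria_Garcia_makes_an_appointment_with_Dr_Smith()
    {
        appointment.Doctor = doctor;
        appointment.Patient = patient;
        appointment.Make(doctor, DateTime.Now.AddDays(2), fee: 80);
    }

    public void Maria_Garcia_should_have_an_appointment_with_Dr_Smith()
    {
        Assert.Equal("David Smith", appointment.Doctor.Name);
        Assert.Equal("Maria Garcia", appointment.Patient.Name);
    }
}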

Each Test Utility Method is a single source of truth: whenever a change breaks one or more tests that use it, there is exactly one place to find and update in order to get back to green, that is, to pass all the tests impacted by the change (see the sketch after the list below). So this refactoring made our test more maintainable for two reasons.

  • We made a more supportive Safety Net.
  • In terms of documentation, the Test Utility Method names explicitly conform to the Ubiquitous Language, so the methods are easy to find whenever a change is required.
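
For example, if the Doctor class is refactored to take constructor arguments, as in the usage shown earlier, only one utility method has to change, and every test that calls it stays green. A sketch:

public void David_Smith_is_a_doctor()
{
    // After the refactoring, only this utility method changes;
    // the tests that call it remain untouched.
    doctor = new Doctor("David Smith", 80);
}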

So can we call this test executable documentation? I don't think so. 😔 Its maintainability can also be improved further. We will learn more about both later.

So far, we have learned to enhance the quality of our safety net by extracting Test Utility Methods and Test Helpers. This was the first and simplest refactoring in our journey. With a few more minor adjustments, this kind of refactoring can serve as a last-resort solution 😁. But first, let's see what exactly is wrong with it.


Tests as Documentation

Ideally, tests should have the following properties to be considered executable documentation.

  • Ubiquitous Language (DDD): Tests should conform to the Ubiquitous Language (DDD), so that they are easily understandable by non-technical domain experts.
  • Proof of Claims: How well does the test prove what it claims? The test's reader should transparently see the effect of API calls on the system. This also simplifies debugging in the event of unexpected failures.
  • Usability Guide: Tests should contain information about how the system under test can be used, effectively serving as a form of live documentation.
  • Easy to Grasp: All of the properties above should be easily graspable at first glance. Tests should be easy to read and understand, even for someone who is not familiar with the codebase. Tests that are easy to understand are also more maintainable.
  • Always-Up-to-Date Documentation: Changes to the production code should be reflected in the tests with minimal hassle. This can be achieved using refactoring tools or other automated means that keep the tests up to date as the system evolves, which is crucial for maintaining the accuracy of the documentation the tests provide.
  • Expressive Result Verification: Testing procedures should strike a balance, avoiding excessive or insufficient verification. Verification should be meaningful, aligned with business requirements, and expressed through explicit assertions that examine the results of API calls transparently (see the sketch after this list).
  • Minimal Distraction: Tests should minimize the noise for readers. They should only contain relevant information.
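
To make Expressive Result Verification concrete, here is a sketch contrasting excessive verification with focused verification, reusing the appointment example. The method names and parameters are illustrative, not part of the article's code.

void Excessive_verification(Appointment appointment)
{
    Assert.Equal("David Smith", appointment.Doctor.Name);
    Assert.Equal(80m, appointment.Doctor.HourlyRate); // noise: the rate is irrelevant to this scenario
    Assert.Equal(65, appointment.Patient.Age);        // noise: so is the patient's age
}

void Expressive_verification(Appointment appointment, Doctor doctor, Patient patient, DateTime time)
{
    // Verifies exactly what the scenario claims: the appointment was made.
    Assert.Equal(doctor, appointment.Doctor);
    Assert.Equal(patient, appointment.Patient);
    Assert.Equal(time, appointment.Time);
}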

Together, these properties shape a common image: they picture what ideal executable documentation looks like. Some of them may sound overlapping; in fact, they complement each other as different dimensions of the same concept.

We are not prescribing a one-size-fits-all solution. Not all properties can be perfectly applied to a single test.

Let's take a look at our test again.

public void Successful_Appointment()
{
    appointments
        .Given(a => a.David_Smith_is_a_doctor())
          .And(a => a.Maria_Garcia_is_a_patient())
        .When(a => a.Maria_Garcia_makes_an_appointment_with_Dr_Smith())
        .Then(a => a.Maria_Garcia_should_have_an_appointment_with_Dr_Smith());
}

While the Ubiquitous Language is clearly expressed in this test, it still cannot serve as even average documentation. The following chart illustrates how the qualities apply to this test. 🧐

Qualities of the last test, as documentation

Let's try to parameterize the Test Utility Methods to see if it improves the test's quality as documentation.

public void Successful_Appointment()
{
    const string Dr_David_Smith = "David Smith";
    const string Maria_Garcia = "Maria Garcia";
    const string time = "2024-10-10";

    appointments
      .Given(a => a.Register_Doctor(Dr_David_Smith))
        .And(a => a.Register_Patient(Maria_Garcia))
      .When(a => a.Make_Appointment(Dr_David_Smith, Maria_Garcia, time))
      .Then(a => a.Appointment_should_exist_for(Dr_David_Smith, Maria_Garcia, time));
}
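
Behind the scenes, a parameterized utility method can fill in whatever the scenario doesn't specify. A minimal sketch (the default hourly rate is an assumption):

public void Register_Doctor(string name)
{
    // This scenario doesn't care about the hourly rate,
    // so the utility method decides on a sensible default.
    doctor = new Doctor { Name = name, HourlyRate = 80 };
}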

The Test Utility Methods might pass additional parameters to the production APIs; however, we only pass what matters in this scenario and let the utility methods decide on the rest. Passing only the relevant parameters makes the causal relationship between the steps (Given, When, and Then) more transparent. It's usually a good practice to parameterize the test method as well.

[InlineData("Dr David Smith", "Maria Garcia", "2024-10-10")]
public void Make_appointment_for(string Dr_Smith, string  Maria, string at_13)
{
  appointments
    .Given(a => a.Register_Doctor(Dr_Smith))
      .And(a => a.Register_Patient(Maria))
    .When(a => a.Make_Appointment(Dr_Smith, Maria, at_13))
    .Then(a => a.Appointment_should_exist_for(Dr_Smith, Maria, at_13));
}

By parameterizing the test, it explicitly expresses the data it depends on. The Test Utility Methods are also more reusable and convey their intention more transparently. Besides, the test can better prove what it claims when different variations of data are added.

[InlineData("Dr John Davis", "Jane Miller", "2020-03-05")]
[InlineData("Dr Sam Brown", "Tony Jones", "2020-03-05")]
[InlineData("Dr David Smith", "Maria Garcia", "2024-10-10")]
public void Make_appointment_for(string Dr_Smith, string Maria, string at_13)


At this point, the BDD framework no longer seems necessary. We can remove it.

[InlineData("Dr David Smith", "Maria Garcia", "2024-10-10")]
public void Make_appointment_for(string Dr_Smith, string Maria, string at_13)
{
  //Given
  appointments.Register_Doctor(Dr_Smith);
  appointments.Register_Patient(Maria);

  //When
  appointments.Make_Appointment(Dr_Smith, Maria, at_13);

  //Then
  appointments.Appointment_should_exist_for(Dr_Smith, Maria, at_13);
}


The refactored version of our test partially documents the system's functionality, but it has yet to reach the level of ideal documentation. Let's see how this refactoring affects the chart.

Qualities of the last test, as documentation

The diagram uses an optimistic language for assessing the quality of our test as documentation. However, it is not the most direct language for warning about potential quality deficiencies in software. The software industry literature employs a more skeptical yet direct vocabulary: the word smells is used to flag symptoms of potential quality shortcomings. So let's pause and learn more about test smells.


We have seen a simple example of how safety nets can protect our evolving design, but it's not always that straightforward. Our initial BDD-style test wasn't cutting it in terms of quality; we remedied those issues and turned it into a solid fallback plan.

Now it's time to take things slow and mix our exploration with some theory and discussion to tackle the issues more directly. We're starting from scratch, and we want to stick with the imperfect domain model for as long as possible, so that we can explore different problems and figure out how to build strong yet flexible safety nets.

Introduction to Test Smells
Test smells are indicators of potential issues or weaknesses in our tests. Similar to code smells in production code, which signal areas requiring refactoring or enhancement, test smells highlight areas where tests could be improved to ensure better quality. They're used as cues for possible deficiencies in the design or implementation of the test suite. Some smells lead to other smells.

Smells can lead to other smells

And a few smells may be known by different aliases (for example, an Erratic Test is often called a flaky test).

Smells may have aliases

This guide aims to stay example-based and practical, so let's look for a couple of those symptoms in our original test.

public void Successful_Appointment()
{
    //Arrange

        //Load the doctor
        Doctor doctor = new Doctor();
        doctor.Name = "David Smith";
        doctor.HourlyRate = 80;


        //Load the patient
        Patient patient = new Patient();
        patient.Name = "Maria Garcia";
        patient.Age = 65;

    //Exercise
        var appointment = new Appointment();
        appointment.Doctor = doctor;
        appointment.Patient = patient;
        var time = DateTime.Now.AddDays(2);
        var fee = 80;

        appointment.Make(doctor, time, fee);

     //Assert
        Assert.Equal("David Smith", appointment.Doctor.Name);
        Assert.Equal("Maria Garcia", appointment.Patient.Name);
        Assert.Equal(DateTime.Now.AddDays(2), appointment.Time);
}


The most obvious issue with this test is that it's not easy to understand at a glance. This smell is called Obscure Test (XUnit Patterns). But that is a general problem, and many smells can lead to an Obscure Test. Which ones are the root causes in our test?




To be continued...
