Rodrigo Matola

Automation metrics: some problems

“What is not measured is not managed”

This phrase is one of the most used to justify the implementation of metrics in a project. Another is

“Against data there are no arguments”

A big problem I see in these sentences is that metrics, without interpretation, are just numbers. And numbers alone have no obligation to reflect reality.

A great example to illustrate what I mean is the Spurious Correlations website. The site draws absurd conclusions, such as that the increase in mozzarella consumption is correlated with the number of Civil Engineering doctorates awarded, or that decreasing margarine consumption also decreases the number of divorces in Maine.

What I want to say in this introduction is that numbers, metrics and even statistics can lead us to very wrong conclusions about our process/product.

Now let's get to the main subject.

The 3 numbers


The 3 numbers are not an established technique for quality metrics. They are a metric that, all of a sudden, a stakeholder asked all QAs to report. These 3 numbers were intended to inform managers about the evolution of test automation:

  • number of mapped scenarios
  • number of automated scenarios
  • number of scenarios automated in the last week

These numbers made up a weekly report we were expected to send, since the company is "goal oriented" and each manager must hit a goal.
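Just to show how mechanical the first two numbers are, here is a minimal sketch of how they could be collected. The layout is my own assumption (scenarios living in .feature files under a features/ folder, with automated ones carrying an @automated tag on the line above), not how the report was actually produced:

from pathlib import Path

def count_scenarios(features_dir: str = "features") -> dict:
    """Count mapped and automated scenarios in Gherkin .feature files."""
    mapped = automated = 0
    for feature in Path(features_dir).glob("**/*.feature"):
        lines = feature.read_text(encoding="utf-8").splitlines()
        for i, line in enumerate(lines):
            if line.strip().startswith(("Scenario:", "Scenario Outline:")):
                mapped += 1
                # Tags conventionally sit on the line right above the scenario title
                if i > 0 and "@automated" in lines[i - 1]:
                    automated += 1
    # The third number (scenarios automated in the last week) would come from
    # diffing this result against the previous week's report.
    return {"mapped": mapped, "automated": automated}

print(count_scenarios())

A script like this will happily produce the report every week, which is exactly the point: the numbers come out, but nothing in them says what actually happened in the Sprint.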

Let's now enumerate some problems I see in these numbers.

Numbers alone don't mean anything


Numbers are just collected data. What you look for when collecting data is information.

Information is data ordered and organized so that it conveys a comprehensible message within a given context. A set of data that conveys knowledge is information.

That is,

information = data + context + interpretation

A simple example: I pay $2000 in rent. With this data alone, it is impossible to say whether I am paying too much or not.

If I live in a studio in Osasco, a city near São Paulo, $2000 is an absurdly high price. For a 3-bedroom apartment in Moema, it's practically free.

For QAs, especially those on an agile team, the Sprint may have required tasks other than automation:

  • creating API mocks due to unstable test environments or the difficulty of obtaining test data;
  • refactoring test code because a feature changed, or fixing flaky scenarios;
  • working on the continuous integration pipeline, or helping with it;
  • running a “manual” Sprint due to the urgency of launching a feature and not missing the “time to market”, when automation would take longer than the launch allowed.

These are just a few examples where the 3 numbers, without context, could lead to misinterpretations of why automation didn't progress or was slower in a Sprint.
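To bring the formula back: the same number, read against different Sprint contexts, should produce different messages. A toy sketch (the contexts are the ones listed above; the rule-of-thumb interpretation is hypothetical, just to make the point):

def interpret_week(automated_this_week: int, sprint_context: str) -> str:
    """Turn the raw weekly number into a message, given the Sprint context."""
    other_quality_work = {
        "building API mocks",
        "refactoring flaky scenarios",
        "working on the CI pipeline",
        "manual Sprint to hit time to market",
    }
    if automated_this_week == 0 and sprint_context in other_quality_work:
        return "No new automation, but the week went into other quality work"
    if automated_this_week == 0:
        return "No new automation and no known reason: worth a conversation"
    return f"{automated_this_week} new scenarios automated this week"

# The same number, two very different messages:
print(interpret_week(0, "working on the CI pipeline"))
print(interpret_week(0, "no particular context"))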

It will never be 100% automated

When we talk about test scenarios, we usually mean Gherkin and BDD. I argue that, since the name is Behavior Driven Development, ALL behaviors in the software must be described, not just those that will be automated.

As a result, scenarios that can only be executed manually, or whose automation and maintenance effort is very high, will never enter the statistics. Examples are putting an iPhone simulator in airplane mode to test how the application responds to a lack of internet, or turning on Bluetooth in an Android emulator.

In addition, some scenarios, even with automation code written, will not need to run. Take a registration feature as an example:

Scenario: Enter the registration area
Given the user is on the Home page
When the user accesses the registration area
Then the form should be on the screen

and

Scenario: Successfully register
Given the user is on the Home page
When the user makes the registration
Then a success message should appear

The first scenario can be discarded and only the second needs to run, because in order to register, the form must be displayed first.

In this case, we will have two scenarios mapped, but the report will contain only one executed. If we extrapolate, we will have only 50% of the tests running automatically, but 100% of the scenarios covered.
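A hypothetical tally of that same registration example, just to make the gap between "executed" and "covered" explicit (the covered_by mapping is my own illustration, not something the 3-numbers report tracked):

# Two mapped scenarios: only the second runs, but it implicitly covers the first.
scenarios = {
    "Enter the registration area": {"runs": False, "covered_by": "Successfully register"},
    "Successfully register":       {"runs": True,  "covered_by": None},
}

mapped = len(scenarios)
executed = sum(1 for s in scenarios.values() if s["runs"])
covered = sum(
    1 for s in scenarios.values()
    if s["runs"] or (s["covered_by"] and scenarios[s["covered_by"]]["runs"])
)

print(f"Executed: {executed / mapped:.0%}")  # 50%
print(f"Covered:  {covered / mapped:.0%}")   # 100%

A report that only counts automated scenarios that run shows 50%; a report that counts covered behaviors shows 100%. Same suite, two very different stories.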

To achieve the goals, we will hire "automators"

In an agile team, having people who perform only one task is not "healthy". In this case, splitting the team into QAs and "automators" can (will?) generate dysfunctions.

For the automators, a detachment from quality: “I'm here just to automate”. For the manual testers, a feeling of “if you automate everything, I could lose my job”, or “I also want to automate, why don't you teach us?”, causing demotivation.

At some point, this will create cliques, decreasing interaction between people and even generating hard feelings between them. As a result, quality is no longer a priority.

To put the above into context, the company, which had only manual QAs, hired an automation consultancy to speed up the process.

Conclusion

My objective with this article is to ask whether we are making decisions using only data, or using information.

Just as we can't consider only the price when renting a home, we also shouldn't rely on just one (or “three”) numbers to measure an evolution.

Tip for managers: participate more in the development cycle. Get close to people, talk to them, find out what each person is doing and building. They may have other, even better numbers.


"Tell me how you measure me and I'll tell you how I'll behave"

This sentence fits the situation very well. But that discussion is for another text!


And you? Do you have a story of metrics imposed on your team? Comment here!
