
Jesse Phillips


Testing Doing Harm

I have started to wonder what general indicators suggest a testing effort should be reevaluated because the testing is likely doing harm. Flaky tests would be an easy go-to, but I thought there had to be more to what I've seen. This is written with automation in mind but may be applicable to manual testing.

As I tried explaining some testing challenges to a coworker, I realized the key indicator of harmful tests. At a high enough level, a test consists of an input and an expectation. This gets complicated when the environment is part of the input, or when multiple expectations exist for an input.
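To make that concrete, here is a minimal sketch in Python (the shipping example and names are invented for illustration): a test is nothing more than an input handed to the code plus an expectation on the result, and the environment can quietly become part of the input.

```python
import os

# Hypothetical function under test; the names here are illustrative only.
def shipping_cost(weight_kg):
    # The rate silently comes from the environment rather than the caller.
    rate = float(os.environ.get("RATE_PER_KG", "2.0"))
    return weight_kg * rate

def test_shipping_cost():
    # Input: a 3 kg package. Expectation: 6.0 at the default rate.
    assert shipping_cost(3) == 6.0
    # If RATE_PER_KG is set differently on another machine, the effective
    # input has changed even though the test code has not -- the environment
    # is now part of the input.
```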

If you're unable to determine what a test takes as input or what it expects, it is likely a good candidate for improvement; it may even be the cause of some flakiness.

The primary concern is when your input or expected results are continuously in flux. It does not really matter whether the change is in the environment or in a JSON file in the test framework. To be clear that I'm talking about the designated inputs and results, let me give some examples.

Sometimes you will have failures because your validation relies on some form of content, such as the text of a button or link. This can be fine for a lot of tests, but if the text changes from one release to another, then it is time to stop those changes or apply a different validation. (Content may need testing, but there needs to be clarity about ownership and about what is tested.)
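As a sketch of what a different validation could look like (Python; the `find_button` helper and the page dict are stand-ins for whatever UI driver you actually use), compare asserting on display text with asserting on a stable identifier:

```python
# Illustrative only: a dict standing in for rendered UI state.
page = {
    "buttons": [
        {"test_id": "submit-order", "text": "Place order", "enabled": True},
    ],
}

def find_button(page, **attrs):
    """Hypothetical helper: return the first button matching all given attributes."""
    for button in page["buttons"]:
        if all(button.get(key) == value for key, value in attrs.items()):
            return button
    return None

def test_by_text():
    # Breaks the moment the copy changes to "Buy now".
    assert find_button(page, text="Place order") is not None

def test_by_stable_id():
    # Survives copy changes; only breaks if the button itself goes away.
    button = find_button(page, test_id="submit-order")
    assert button is not None and button["enabled"]
```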

I often mock out APIs and test against them. Sometimes these change, say from XML to JSON. I use the native API format to drive my test, which means my input would change from XML to JSON. However, I don't concern myself with that because it isn't a constant change; I can mostly rely on the testing to report back without my intervention. If the structure really is in constant flux, though, it is important to optimize for this change, assuming the initial development exploration is finished.
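Here is roughly what I mean by the native format driving the test (a sketch; `parse_order` and the payload are made up). The fixture is the designated input: a one-time move from XML to JSON means swapping the fixture once, while a fixture that needs rewriting every release is the warning sign.

```python
import json

# Hypothetical code under test: turns an API payload into a domain object.
def parse_order(payload):
    return {"id": payload["order"]["id"], "total": float(payload["order"]["total"])}

# The mocked API response in the API's native format (now JSON, previously XML).
ORDER_FIXTURE = json.loads('{"order": {"id": "A-100", "total": "19.95"}}')

def test_parse_order():
    result = parse_order(ORDER_FIXTURE)
    # The expectation is stable even though the wire format changed once.
    assert result == {"id": "A-100", "total": 19.95}
```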

Why is constantly changing input and expectation so bad? The tests you write are intended to identify failures to meet expectations across changes. If your expected result or input needs to be modified every release, you have to evaluate whether your test needs updating or there is a regression. I see this leading to a workflow optimized for "fixing" the test by modifying it until it passes.

Now, in those cases the system is changing, so the tests can be seen as providing insight into those changes. But I would wonder: is that extra insight? Should those changes not already be documented? Would the time not be better spent getting real insight into how the system functions, identifying risk, and setting up tests that cover a different part of the system?

Top comments (11)

Alan Barr

Overambitious tests are a concern. They try to cover too much territory, assume a lot about the happy path and setup state, and skip over a lot of the internals. They are not typically worth the effort to maintain. The same goes for any test that doesn't cover the risks that are truly at the heart of the problems with changing software.

Jesse Phillips

I like what you said, but I'm having a hard time identifying indicators.

  • Tests which set up a lot of state

Happy-path testing and missing internal logic suggest to me insufficient testing, not bad tests doing harm.

I think tests that don't cover the heart of changing software are likely to be surfaced by the indicator in my article: the expectations and inputs would change more frequently.

Alan Barr

I think what I mean to say is that a test could convey a false sense of security and cause harm in that way.

Matt Del Signore

I think this is very dependent on how your software is written. Back-end servers can be written in ways that are very testable. I've mostly seen UI automation tests as the ones that convey a false sense of correctness.

Alan Barr

I agree, backends can be written in convoluted ways as well.

jtenner

I think that not understanding specifications is the leading cause of testing failures. I have plenty of those problems in nearly all of my test suites, and it was because I was too lazy to start with a specification.

I am not proud of this.

I became aware of this when I started taking ADHD medication, and I saw the real world consequences each time I decided to shirk my responsibility. In this wake-up process, I decided to redesign, from the ground up, a canvas framework that I had already implemented so that I could put my money where my mouth is. Soon, I'll be able to report back and be proud of my work. That is not this day, sadly.

Given that my specification happens to be the CanvasRenderingContext2D prototype specification, it's a lot easier for me to define nearly all of the unit and behavior tests up front. Of course, this is not the case for others.

In the case that business software must be designed through an exploratory process, it is necessary that a standard operating procedure be derived from a prototype. There is no good alternative unless you buy someone else's software.

Once the prototype is done, a set of specifications can be written by observing the good and bad things about such a solution.

That's when the real testing begins. You're building an Iron Man of your idea, so put it in the ring with everything you have.

Matt Del Signore

This is why you should try to write hermetic tests. Unit tests should only test one piece of functionality.
Tests are also supposed to act as documentation of functionality. If the functionality changes, they should break too; that way you know they need to change.
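A tiny sketch of the idea (Python; the cart example is invented): a hermetic test builds all the state it needs and checks a single behavior, so it only breaks when that behavior changes.

```python
# No shared database, network, or global state: everything lives in the test.
class Cart:
    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)

def test_total_sums_item_prices():
    cart = Cart()
    cart.add("pen", 2.5)
    cart.add("pad", 4.0)
    assert cart.total() == 6.5
```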

Jesse Phillips

Ideally, your tests should be considered immutable: if functionality changes, then you no longer need that test and a new one should be created.

Now, if you are doing this, maybe you should not, because that would be a change to the API contract; it should get appropriate deprecation, and the tests should continue to pass.

That said, being in QA, I have not dived into writing tests at the unit level; I do isolated integration testing. I don't have a lot of say on the architecture, and since I think certain testability architectures can be harmful to readability, I don't think that is always the correct level to test at.

Max Ong Zong Bao

hmm...when your documentation and specs are aligned not which fail constantly in your production environment.

Jesse Phillips

I am having a hard time identifying what claim you're making.

Max Ong Zong Bao

Ahh... sorry, I was in a rush when I wrote it. What I'm saying is that testing does harm.

When your test cases are not following the project specification & documentation in the production environment.