Tracy Gilmore

The developer's refactoring safety net

I have written before about the value I place on a project having a well-developed collection of unit tests. I have also stated that I believe it is the responsibility of every software engineer to prove their code works as intended by providing unit tests with at least 80% coverage, and to do so while writing the unit, not later "when time permits", because it never does.

I am aware that not everyone shares my opinion, and I frequently hear arguments like:

  • “Unit tests do not deliver value for the client” or “that’s not what the customer is paying for”
  • “I am a developer, paid to write code, not a tester”
  • “Unit tests are unnecessary because we have end-to-end tests that run every night”

In my opinion, such arguments not only mark the developer out as a hacker rather than a Software Engineer, they also undermine the profession. With society's ever-increasing reliance on software, our profession should aim to keep the customer happy by delivering high-quality products, not quick and dirty code.

“Unit tests do not deliver value for the client”

With the drive towards automated testing, DevOps and the frequent shipment of updates, we appear somehow to have forgotten the need to demonstrate that our product (code) satisfies a minimal acceptable level of Quality Assurance (QA), even before it enters the build pipeline or is subjected to the scrutiny of testers. Never mind potentially shipping faulty code; you do not want such code even being demonstrated to the customer.

When faced with a client focussed on getting the biggest bang for their buck, I am sure they still want a quality product. They do not want to hear how end-users (who are highly likely to be their customer base) failed to use the application because of a bug that should have been detected by the test team or, worse, should never have escaped the developer.

The longer you spend in this profession, the more you come to realise you are fallible, and we all have bad days. That is when poor code gets written or the completion of a function gets forgotten. However, there is no excuse (even under time pressure) for releasing substandard code.

To paraphrase: it is better to seek forgiveness (for delaying the release of a feature) than to ask permission (to write, later, the unit tests that would have discovered the bug). Even when a project employs an agile delivery methodology, the later a bug is discovered, the more costly it is likely to be to fix, if for no other reason than that testing already performed is wasted and will need to be redone. A bug that reaches production code can have an operational impact on the business that runs into millions.

“I am a developer, paid to write code, not a tester”

I have to agree that Software Engineering and Test Engineering are distinct disciplines requiring very different mindsets. Unfortunately, not all teams are large enough to have the luxury of different people in these roles, and very often developers have to “wear two hats” and switch mindset accordingly. However, as stated in my previous post, unit tests are less about testing and more about:

  1. a developer proving their code (unit) performs as they had intended.
  2. providing other developers with confidence that any changes they make are not breaking the unit in unexpected ways (or are breaking it intentionally).

Unit tests also serve as a manual for developers integrating your unit with theirs, informing them of how it behaves and the interfaces it exposes. Unit tests are a developer tool, which is the key reason they should be written alongside the development of the unit and why they need to achieve a good level of test coverage.
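
To make this concrete, here is a minimal sketch of what I mean, using Jest syntax and a hypothetical `formatPrice` unit (neither is taken from the project described below). The test names and assertions tell an integrating developer what the function accepts, what it returns and how it fails:

```typescript
// price-formatter.ts -- a hypothetical unit, purely for illustration.
export function formatPrice(pence: number): string {
  if (!Number.isInteger(pence) || pence < 0) {
    throw new RangeError('pence must be a non-negative integer');
  }
  return `£${(pence / 100).toFixed(2)}`;
}

// price-formatter.test.ts -- the "manual": behaviour and interface in one place.
import { formatPrice } from './price-formatter';

describe('formatPrice', () => {
  it('formats whole pence as pounds to two decimal places', () => {
    expect(formatPrice(1999)).toBe('£19.99');
  });

  it('rejects negative or fractional input', () => {
    expect(() => formatPrice(-1)).toThrow(RangeError);
    expect(() => formatPrice(10.5)).toThrow(RangeError);
  });
});
```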

It might just be me, but I am firmly of the opinion that our clients do not pay us to just cut code. They pay us to solve business problems by delivering quality solutions in software form.

“Unit tests are unnecessary because we have end-to-end tests”

They might not be full end-to-end tests but, by definition, integration tests involve far more “moving parts” (at least two) than unit tests, which should be as autonomous as possible. Integration tests can involve considerable preparation and resources, and are likely to take many minutes to complete, far longer than running a batch of unit tests.

Integration (and e2e) tests very often focus on exercising happy-path scenarios, only occasionally testing the alternative (error-state) paths an end-user might encounter. They seldom exercise every possible path through every function of the units engaged in the task, and nor should they; that would be unjustifiably costly in terms of preparation and execution time.
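
For illustration, the sketch below (again Jest, with a hypothetical `parseQuantity` function) shows how cheaply the error-state paths can be pinned down at unit level; covering each one through a full integration or e2e run would be far more expensive:

```typescript
// parse-quantity.test.ts -- hypothetical example, tests colocated with the unit.
export function parseQuantity(input: string): number {
  const value = Number(input.trim());
  if (!Number.isInteger(value) || value < 1 || value > 999) {
    throw new Error(`Invalid quantity: "${input}"`);
  }
  return value;
}

describe('parseQuantity', () => {
  // The happy path: the one scenario an e2e run would also cover.
  it('parses a valid quantity', () => {
    expect(parseQuantity(' 3 ')).toBe(3);
  });

  // The alternative (error-state) paths: six of them, all exercised in
  // milliseconds here, each needing its own costly scenario if run e2e.
  it.each(['', 'abc', '0', '-2', '2.5', '1000'])('rejects %p', (bad) => {
    expect(() => parseQuantity(bad)).toThrow();
  });
});
```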

Relying on integration tests to detect development bugs is therefore both unreliable and too late in the development process. Using integration tests in place of unit tests also denies developers access to the valuable resource described above.

My scenario

Presented with a sizable operational code base, but with no access to any of the original developers, I commenced by reviewing the source code. I also defined a set of coding and testing conventions for this section of the project.

It rapidly became apparent that the product had been developed by several engineers whose primary skill set differed from that needed for the code base. They were also failing to take advantage of:

  • tools to help them maintain a consistent development style.
  • unit tests with sufficient coverage; in many places there were no unit tests at all.
  • code reviews against a defined standard, or set of conventions, conducted by someone experienced in the skill set the code base required.

The recommendations from reviewing the entire code base included:

  • Prepare and maintain coding/testing conventions for the project (which were in place by then).
  • Engage styling and linting tools such as Prettier and ESLint/TSLint.
  • Establish a collection of unit tests for every source code file, with a client-approved minimum coverage level (recommended to be at least 80%)[1]; see the configuration sketch after this list.
  • Adopt tools such as TypeScript and SCSS, although by this stage the code base was quite mature, so the return on investment was questionable; the effort required to retrofit these tools was disproportionate to the potential gain (even long term).
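
Assuming a Jest-based toolchain, a minimum coverage level can be made part of the build rather than left as a convention. The sketch below uses Jest's `coverageThreshold` option; the 80% figures reflect the recommendation above, not a Jest default:

```typescript
// jest.config.ts -- set the thresholds to whatever level the client approves.
import type { Config } from 'jest';

const config: Config = {
  collectCoverage: true,
  coverageThreshold: {
    global: {
      branches: 80,
      functions: 80,
      lines: 80,
      statements: 80,
    },
  },
};

export default config;
```

With this in place, the test run (and hence the pipeline) fails whenever coverage drops below the agreed floor.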

Actions taken

The first thing to do was to apply a common coding style across all the files using Prettier. This would have a significant impact on file syntax and size but should have none on the functionality: the changes would be to the styling of the source code files and not a refactoring of the code itself.

Next, I commenced addressing the lint issues, which split into two groups: bugs and (a phrase I quite dislike, but I suppose that is the intent) “code smells”. Assuming the bugs to be the more serious (and they were fewer in number), those are what I tackled first.

The bugs were actually quite obvious and relatively easy to fix, issues such as removing commented-out code. All the way through, the unit tests were executed in the hope they would identify any breaking changes and, once all were done, the code base was built and successfully integration tested.

It was in addressing the “code smells” that it all started to unravel. The refactoring was far more involved and widespread. The unit tests did not flag any breaking changes but, given the significance of the changes, we initiated an integration test cycle before checking the changes into the code base.

Bang! For some reason, yet to be uncovered, the integration tests failed, which not only highlighted an unexpected breaking change but also a hole in the safety net the unit tests should have provided.
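
The root cause was still to be established, but one common way such a hole opens up (offered here purely as a hypothetical illustration, not as what actually happened) is a unit test that stubs a collaborator with a stale contract:

```typescript
// Hypothetical: a refactor renames the service method getUser to fetchUser,
// but the consumer's unit test still stubs the old shape, so it stays green.

// The stub the unit test supplies (frozen at the pre-refactor contract):
const userService = {
  getUser: jest.fn().mockResolvedValue({ id: 1, name: 'Ada' }),
};

// The consumer, typed against the stale contract it still expects:
async function greet(service: {
  getUser(id: number): Promise<{ name: string }>;
}): Promise<string> {
  const user = await service.getUser(1);
  return `Hello, ${user.name}`;
}

// Passes against the stub, even though the real service no longer has
// getUser at all; a break only an integration test will reveal.
it('greets the user (green against the stale stub)', async () => {
  await expect(greet(userService)).resolves.toBe('Hello, Ada');
});
```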

Lessons learned

  1. Write complete unit tests at the same time as writing the unit.
  2. Make sure you have a complete set of unit tests before commencing refactoring.
  3. Maintain the unit tests whenever the unit changes: a bug fixed, an interface extended or a feature revised (a sketch of this follows).
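
As a sketch of lesson 3, assuming Jest and a hypothetical `clamp` function, here is a bug fix landing together with the regression test that would have caught it:

```typescript
// clamp.ts -- hypothetical; suppose the reported bug was that values below
// the minimum passed straight through because only the upper bound was checked.
export function clamp(value: number, min: number, max: number): number {
  return Math.min(Math.max(value, min), max); // the fix: enforce both bounds
}

// clamp.test.ts -- the regression test lands with the fix, so the bug
// cannot silently return the next time this unit is touched.
import { clamp } from './clamp';

it('clamps values below the minimum (regression for the reported bug)', () => {
  expect(clamp(-5, 0, 10)).toBe(0);
});

it('still clamps values above the maximum', () => {
  expect(clamp(99, 0, 10)).toBe(10);
});
```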

[1] It is questionable whether 80% coverage is an industry standard, but it can be argued that the effort required to produce unit tests is subject to the Pareto Principle (also known as the 80/20 rule). This suggests that the first 80% of coverage should be achievable with a quarter of the effort that would be needed to achieve the remaining 20%.
