Pavol Rajzak

Why writing tests for legacy code is a good thing

You are working as a software developer on an application that was built when hula-hooping was still cool and unit testing was considered a practice for unskilled noobs. You have been assigned to change the behavior of a simple function that prints "Hello world" at the push of a button.

You start digging into the code and discover awkward constructs and unconventional syntax. You try to get to the point of the hakunamatata() method, but you are clueless. What to do with this piece of code? It is burned deep into the core, without any documentation.

Working with legacy code is something that must be mastered over time. I've encountered lots of situations similar to the scenario above. It is hard to say what the best way to approach it is, but here is my #1 tip:

Unit test the s#&% out of it.

But why?

1. You are forced to reason about the code itself

You learn the ways of the ancient haxors who originally developed the system. You understand the patterns and conventions they used. More often than not, this is surprisingly easy to pick up. Also, while writing the tests, you will be the "user" of their code: you will use the public API of the application and verify that it works as intended.
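For illustration, here is a minimal sketch of what being the "user" looks like: a characterization test in JUnit. The LegacyGreeter class and its hakunamatata() method are made-up stand-ins for whatever legacy API you are poking at.

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class HakunamatataCharacterizationTest {

    // Hypothetical stand-in for the legacy class from the story above.
    static class LegacyGreeter {
        String hakunamatata() {
            return "Hello world";
        }
    }

    @Test
    public void pinsDownCurrentBehaviour() {
        // Use the public API exactly the way a real caller would.
        LegacyGreeter greeter = new LegacyGreeter();

        // A characterization test records what the code DOES today,
        // not what we think it should do.
        assertEquals("Hello world", greeter.hakunamatata());
    }
}
```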

2. You can create backlog for technical debt

With unit tests, you can find a lot of really nasty things. Since unit testing should be done together with development, most of the untested code will also be written in an untestable way. You will discover exceptions that don't make sense, conditions that can never happen, unsatisfied or illegal method arguments, functions the size of a class, and so on. It is very good to write these findings down and resist the urge to refactor them right away. First, you will need to gather some coverage.

3. You will have proof that the legacy code works...

...or, in some scenarios, that it doesn't. And if you are like me, you will strive to cover almost every possible case and condition. This is the hardest part, because you will have to come up with a way of writing test code that does not resemble the production code (i.e. does not suck). You will probably have several smaller unit tests covering one large method, or you will have to use some advanced mocking frameworks to bend the production code. In the end you will have a test suite that gives you a good starting point for refactoring the production code.
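To make the "bending" a bit more concrete, here is a small sketch using Mockito. The ReportService and MailGateway names are invented for illustration, and real legacy code is usually far less injection-friendly than this, but the idea is the same: mock the collaborators so the big method can be exercised in isolation, one branch at a time.

```java
import org.junit.Test;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;
import static org.mockito.Mockito.*;

public class ReportServiceTest {

    // Hypothetical collaborator buried inside the legacy method.
    interface MailGateway {
        boolean send(String recipient, String body);
    }

    // Hypothetical legacy class: one large method doing validation, formatting and sending.
    static class ReportService {
        private final MailGateway gateway;

        ReportService(MailGateway gateway) {
            this.gateway = gateway;
        }

        boolean sendDailyReport(String recipient) {
            if (recipient == null || recipient.trim().isEmpty()) {
                return false;
            }
            String body = "Daily report for " + recipient; // imagine a few hundred more lines here
            return gateway.send(recipient, body);
        }
    }

    @Test
    public void sendsReportThroughTheGateway() {
        // Mock the external dependency so the method can be tested without real mail traffic.
        MailGateway gateway = mock(MailGateway.class);
        when(gateway.send(anyString(), anyString())).thenReturn(true);

        ReportService service = new ReportService(gateway);

        assertTrue(service.sendDailyReport("boss@example.com"));
        verify(gateway).send(eq("boss@example.com"), anyString());
    }

    @Test
    public void rejectsBlankRecipient() {
        // Several small tests per large method: each one pins down a single branch.
        ReportService service = new ReportService(mock(MailGateway.class));
        assertFalse(service.sendDailyReport("   "));
    }
}
```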

4. You will have a safeguard for changing business logic

"This condition can be simplified!" But are you sure? With good test coverage you are (-ish; things should still be handled with care). Now you can get rid of hakunamatata() and create a nice souvenir T-shirt (true story).
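As a toy illustration of that safeguard (hypothetical code, not from any real project), a test can pin the old condition and the simplified one against each other:

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class ConditionSimplificationTest {

    // Hypothetical legacy condition nobody dared to touch.
    static boolean legacyCheck(String value) {
        return !(value == null) && !(value.length() == 0) && !(value.trim().length() == 0);
    }

    // The simplification you would like to make.
    static boolean simplifiedCheck(String value) {
        return value != null && !value.trim().isEmpty();
    }

    @Test
    public void bothChecksAgreeOnTrickyInputs() {
        String[] inputs = { null, "", "   ", "x", " x " };
        for (String input : inputs) {
            // The test suite is the safeguard: if the new condition ever disagrees,
            // the "simplification" was not a pure simplification.
            assertEquals(legacyCheck(input), simplifiedCheck(input));
        }
    }
}
```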

But how?

You are pushed by deadlines and don't have time for this nonsense? Well, that's too bad. If there are no resources for this kind of work, there's nothing you can do. But every skilled project manager (or any other stakeholder) should know that rolling technical debt forward is a bad thing that backfires. Always.

The good thing is that if you are using static code analysis tools, you can report progress over time. You will get numbers that are presentable to your stakeholders and, at the same time, have a good feeling about your work (testing & refactoring is like eating a bar of chocolate!).

That being said, unit testing legacy code requires tremendous effort. It can be split up and done iteratively, but it is still a hell of a lot of work. Therefore, you should start with it as early as possible. Start with the most understandable parts of the code, the most critical, or the most used ones. Just start.

Top comments (10)

Lars Richter

Nice post. I'm a big fan of unit testing. And I really like working with legacy code. "Working Effectively with Legacy Code" by Michael Feathers is one of the best references on this topic.

I'm looking forward to your other posts.

tomkoptel

Hi Pavol. Thanks for the post. You mentioned static analysis tools. Can you advise a good setup to measure the progress of technical debt resolution?

Thanks!

Pavol Rajzak

First of all, if you have a large codebase which hasn't been measured for technical debt, I would recommend setting up SonarQube first to have a good baseline for later measurements. It doesn't make sense to set up quality gates that would fail a build, since that can only be done once the codebase has been cleaned up.

So, set up SonarQube, create a first analysis, and then try to make it a habit within the development team to check for technical debt periodically (e.g. after every sprint, or even more frequently). SonarQube is quite good at identifying the most critical flaws, so it's always good to start with those. Sometimes it gives false positives, but these can be easily resolved (within SQ or in code, for example with an annotation in Java).
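As a rough illustration, a minimal sonar-project.properties for the scanner could look something like this. The project key, paths and server URL are placeholders, and the exact properties (especially the coverage report one) depend on your SonarQube and scanner versions:

```properties
# Hypothetical project settings - adjust key, paths and server URL to your setup.
sonar.projectKey=my-legacy-app
sonar.projectName=My Legacy App
sonar.host.url=http://localhost:9000
sonar.sources=src/main/java
sonar.tests=src/test/java
sonar.java.binaries=target/classes
# If you feed in JaCoCo coverage, point SonarQube at the XML report (property name varies by version):
sonar.coverage.jacoco.xmlReportPaths=target/site/jacoco/jacoco.xml
```

Run sonar-scanner from the project root after each build, and the dashboard should then show coverage and technical debt trends over time.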

tomkoptel

Thanks for the suggestion :)

Mark Schweiger

It's the only way I've seen to refactor legacy code with confidence and without breaking anything.
Are you planning on writing a series about this subject?

Pavol Rajzak • Edited

So far I've written two follow-up articles on this topic:

Feel free to leave comments or questions! :)

Pavol Rajzak

Actually, I can, since this is my current assignment. I will prepare some posts! :)

Christian Vasquez

Awesome post! Right after "Unit test the s#&% out of it." I knew it was going to be a good one 😁. I'll be looking forward to more of your posts.

Pavol Rajzak

Thanks! That motivates me even more to write more posts :)

Gergő Móricz

Amazing article!