zvone187

Creating integration tests for a backend legacy codebase

The problem with a legacy codebase is that it's often not well documented, you're just not familiar with it, and it can be time-consuming to make sense of the existing code.

For these reasons, working with a legacy codebase can be an energy-draining task, especially when it comes to implementing an automated test suite.

In this blog post, I'll show you how you can cover a legacy codebase with tests and reach 80%+ code coverage in just 30 minutes, and how to hand off the creation of the test suite to your QA colleagues so they can get the backend to 100% coverage in a couple of hours just by manually testing the app from the frontend.

Refactoring legacy codebase

The Challenges of Legacy Codebases

A common challenge when working with a legacy codebase is understanding how the existing code works. In many cases, the original developers may no longer be available to help (or they might not even be able to help 🤦), and documentation may be...well, non-existent, or outdated at best. As a result, you need to spend days, if not weeks, analyzing the code to determine its purpose, functionality, and potential issues.

To create an automated test suite, you first need to understand which variables in the code you want to assert on in the test suite. But that's the easy part. After that, you need to understand how all of those variables behave and how to assert them.

Writing tests is a time-consuming task, and that multiplies many times over if you're not familiar with the code. You may need to write hundreds or even thousands of tests to achieve any meaningful coverage.

Phew...just writing about this makes me sweat. 😥

What is Pythagora

Pythagora is an open source tool that creates automated integration tests by analyzing server activity, without you having to write a single line of code.

Basically, it's a record/replay tool, similar to VCR for Ruby, but on steroids and powered by AI (ahem, GPT, ahem).

Creating tests with Pythagora

To get Pythagora, you can just install it with npm:

npm i pythagora

After that, you just run the Pythagora capture command and pass it the command you’re using to run your app. For example, if you’re running your server with npm run start, the capture command would be:

npx pythagora --init-command "npm run start"

Ok, great. When you run this command, your app should start with Pythagora wrapped around it, listening for incoming API requests and capturing server activity.

Now, to capture tests, you just need to make API requests. You can use Postman, cURL, or you can simply open the app in a browser (or on a mobile device) and click around so that it makes API requests. As the requests come in, you should see logs showing which requests Pythagora captured.
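
For example, assuming your server listens on port 3000 and has a users endpoint (both are just placeholders for illustration), a couple of cURL requests like these are enough for Pythagora to start capturing tests:

# hypothetical routes – replace with endpoints that actually exist in your app
curl http://localhost:3000/api/users
curl -X POST http://localhost:3000/api/users \
  -H "Content-Type: application/json" \
  -d '{"name": "Test User", "email": "test@example.com"}'

Every request that reaches the server while it's running in capture mode should show up in the logs as a captured test.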

You can keep doing this for as long as you like and, once you're done, just stop the server and you'll see how many tests Pythagora captured.

Now, when you want to run the tests, just add --mode test to the Pythagora command and watch your tests run.

npx pythagora --init-command "npm run start" --mode test

Tests should run quite quickly and you’ll see the results in seconds.

Pythagora tests running

Getting from 0 to 80% code coverage in 30 minutes

Now that you know how to create automated tests for your legacy codebase, it’s time to get to work and cover the codebase with as many tests as you can.

To do that, the only thing you need to do is continue making those API requests and let Pythagora save them as tests.

If you want to see how much of your code is covered by tests, just add the --full-code-coverage-report flag to the test command and it will generate a full code coverage report with NYC in the pythagora_tests folder.

npx pythagora --init-command "npm run start" --mode test --full-code-coverage-report

The code coverage report shows the coverage for all files in your repo and, if you open a file, which lines are not covered. After reviewing the report, just run the capture command again and make the requests that trigger the uncovered lines.
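
For example, if the report shows that an error-handling branch is never hit, you can start another capture session and send a request that is meant to fail. The route and the made-up ID below are placeholders for illustration:

npx pythagora --init-command "npm run start"
# from another terminal, hit the uncovered error path
curl http://localhost:3000/api/users/does-not-exist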

Here is a video of me covering a codebase with 150 tests and 80% code coverage in just 30 minutes:

Handing off the automated tests to the QA team

If you're one of the lucky ones with a QA team, Pythagora lets you hand the work over to them so they can take their time, think about edge cases, and cover the entire codebase with tests.

I'd recommend setting up a QA environment where your app runs with the Pythagora capture command, so that the QA team can access it through a browser or a mobile device and make as many requests as they can. This way, within a couple of hours of them manually testing the app, you should have an entire suite of automated tests ready to go.
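
Setting up that environment is nothing more than the two commands from above, run on whichever machine the QA team points their browsers or devices at (assuming it already has Node and a checkout of your repo):

# on the QA server
npm i pythagora
npx pythagora --init-command "npm run start"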

GPT south park

Generating negative tests with GPT

Negative tests play a crucial role in validating the stability and reliability of a system by intentionally sending incorrect data or triggering error conditions to ensure the system can handle such scenarios gracefully. These tests are important because they help identify potential vulnerabilities, edge cases, and unexpected behaviors that could lead to application failures, data corruption, or even security breaches.
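
To make that concrete, a typical negative test boils down to a request like the one below, which deliberately sends an invalid payload and expects the API to reject it cleanly instead of crashing (the endpoint and field are made up for illustration):

curl -X POST http://localhost:3000/api/users \
  -H "Content-Type: application/json" \
  -d '{"email": "not-an-email"}'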

However, we developers just don't like spending time testing what we've built, especially thinking about all the different edge cases all day long. Our QA colleagues do a much better job here – if you have them in your company.

If you don't, Pythagora has GPT integrated: it takes all the tests you've captured and creates a set of negative tests from them. So, once you're satisfied with the test suite, just run the Pythagora command for generating negative tests and watch the magic happen.

npx pythagora --init-command "npm run start" --mode test --generate-negative-tests

And just like that, it will supplement your test suite with a whole bunch of negative tests. NOTE: this feature is still in private beta (even though Pythagora itself is open source), but let me know if you'd like to try it out and I'll give you access.

What happens under the hood?

If you've really read this far, I bet you're wondering how Pythagora works under the hood.

It works by recording the requests and responses to your application's endpoints, along with the server activity (database queries, requests to 3rd-party APIs, etc.) that happens during the execution of each request.

You can see all the data it captured in the pythagora_tests folder in the root of your project.
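
If you're curious, you can simply browse that folder after a capture session (the exact file layout may differ between Pythagora versions):

ls pythagora_tests/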

When you want to run the captured tests, Pythagora restores the state of the database into a temporary database, pythagoraDb, so you can run the tests from any environment, regardless of which database it's connected to. As for 3rd-party APIs, Pythagora checks whether the requests to those APIs match the ones from the capture and mocks the responses, so your tests are unaffected by external sources of data.

Also, Pythagora doesn't just restore the state of the database, it also checks how the database changes during the execution of the API request. For example, if a request updates the database, Pythagora checks that the database was updated correctly.

If you'd like to dig even deeper, you can check out this video (16 mins) with a technical deep dive into how Pythagora works, what the tests look like, etc.

Conclusion

Alright, to sum up – in this post, we saw how Pythagora helps with building an automated test suite for a legacy codebase – a task you can finish in just one afternoon. Here are the main takeaways:

  • You can use Pythagora as a developer and create an initial test suite with 80% code coverage in just 30 minutes
  • From the test suite you created, Pythagora can use GPT to create a set of negative tests
  • You can set up Pythagora on a dev environment and give the QA team control over creating a full test suite in just a couple of hours

I hope you enjoyed reading this post. If you have any questions or feedback, please let us know at hi@pythagora.io and, to get updates about the beta features, feel free to add your email here.

Finally, if you read this far and would like to support us, please consider starring the Pythagora GitHub repository here – it would mean the world to us.
