This week, as setup for a CI/CD pipeline, I added unit and integration tests to my Python CLI using Pytest and used pytest-cov to generate a coverage report. As always, the merged commit for the changes to the repo can be found here.
What is CI/CD?
CI stands for continuous integration and CD stands for continuous delivery. CI focuses on the foundations: building new features and testing them rigorously every time code is merged, whereas CD focuses on what happens after code is added and tested, getting it packaged and released. If you've ever worked in a DevOps environment, chances are you worked closely with CI/CD pipelines.
The Process
So, I knew CI/CD was going to be an inevitable beast to tame. Having done it before, I know unit and integration testing is one of, if not the most important steps toward a solid foundation for continuous deployment. I'd never done it in Python, so this was a new experience, but given the scale of my app, I figured it wouldn't be as bad as the first time I worked with it.
After looking into various libraries for testing, I decided to use Pytest because it seemed very easy to use and was somewhat similar to other testing libraries I've used previously.
I feel like I lucked out in choosing Typer to create my app, because it ships with its own testing utilities that pair nicely with Pytest, something I only discovered after I attempted to write my first test.
It turns out that the way I was creating my Typer app made it difficult to actually pass any CLI arguments programmatically. Funnily enough, a single line of code was preventing me from progressing, more specifically this one:
```python
app = typer.Typer(add_completion=False)
```
I can't remember why I even added `add_completion=False`, but it was breaking my tests, so I removed it and got to work.
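For context, here's a rough sketch of the shape of a Typer app with a `-v` flag. This is just an illustration with a made-up `main` signature, not Tiller's actual code:

```python
import typer

app = typer.Typer()  # the add_completion=False override removed


@app.command()
def main(version: bool = typer.Option(False, "-v", "--version")):
    # Print the version string that the test below asserts on, then exit.
    if version:
        typer.echo("Tiller Version: 0.1.0")
        raise typer.Exit()
```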
The main idea behind unit and integration testing is adding as many small tests as possible to ensure every corner of your app is tested and 100% works as intended. So for my first test, I tested the `-v` flag.
```python
# test_main.py
from typer.testing import CliRunner

# `app` is the Typer app imported from the CLI's main module
runner = CliRunner()


def test_main_version_arg():
    result = runner.invoke(app, ["-v"])
    assert "Tiller Version: 0.1.0" in result.output
```
As shown in the first test, I wanted to have clear function names for tests (which will come in handy later). This lets me easily organize tests for the different functionalities of the app. The `runner` variable is a `CliRunner`, which calls the CLI programmatically, something that is extremely useful for what I need. Calling `runner.invoke()` calls the app (my main function) and passes the `-v` flag to invoke the version flag. The second line of the function uses the bread and butter of unit testing in Python: the `assert` keyword. `assert` essentially acts as an if statement: if the condition after it is true, the function worked as intended, but if it's false, it raises an `AssertionError`, and a custom exit message can be set. This is essentially how all the tests will be developed.
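As a quick illustration of that custom message (a generic example, not one of the actual Tiller tests):

```python
def test_version_string_example():
    output = "Tiller Version: 0.1.0"
    # If the condition is False, pytest raises an AssertionError and
    # includes the message after the comma in the failure report.
    assert "0.1.0" in output, "expected the output to contain version 0.1.0"
```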
The next problem arose when I tried to create a test for my output flag.
```python
import os
import shutil


## Test Output Flag
def test_main_output_arg():
    with runner.isolated_filesystem():
        examples_dir = os.path.join(
            os.path.dirname(os.path.realpath(__file__)), "examples"
        )
        shutil.copytree(examples_dir, "examples")
        result = runner.invoke(app, ["-o", "output", "examples"])
        assert os.path.isdir("output") is True
        assert len(os.listdir("output")) > 0
```
The above code is the final version of the test, but I ran into a couple of issues while creating it. First off, when using the `runner.isolated_filesystem()` function, I had issues asserting parts of the output, specifically whether the files were actually being copied over. After trying to use `shutil.copytree()`, I realized it was just copying over a blank examples folder, so using good ol' `os` I was able to string together the absolute path of the directory containing the test script with the examples folder, giving me a way to copy the exact contents of the examples folder into our virtual isolated environment. `isolated_filesystem()` was another lifesaver for testing, as it let me create a temporary place to run my tests that wouldn't affect the actual folder of my app; it's actually part of Click, the backend that powers Typer.
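To make that concrete, here's a minimal sketch (not from the Tiller test suite) of what `isolated_filesystem()` does:

```python
import os

from typer.testing import CliRunner

runner = CliRunner()


def test_isolated_filesystem_is_temporary():
    original_cwd = os.getcwd()
    with runner.isolated_filesystem():
        # Inside the block we are in a freshly created temporary directory,
        # so anything written here never touches the real project folder.
        assert os.getcwd() != original_cwd
        with open("scratch.txt", "w") as f:
            f.write("only exists inside the isolated filesystem")
        assert os.path.isfile("scratch.txt")
    # Outside the block the original working directory is restored
    # and the temporary directory has been cleaned up.
    assert os.getcwd() == original_cwd
```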
Pytest was great to work with and extremely simple to use: all I had to do was create a file with a `test_` prefix and then call `pytest` in my CMD, which ran every test and threw errors whenever one of them broke.
I wrote around 18 tests, made sure they all passed as expected, then installed `pytest-cov` and was happy to see that I had 90% coverage, only missing a few lines that weren't functional code. After that, I updated CONTRIBUTING.md to give users info on how to test and generate coverage reports, added both pytest and pytest-cov to my requirements.txt file, and merged my branches.
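For reference, the workflow boils down to commands along these lines (the `--cov=tiller` target is an assumption about the package name; point it at whatever the package is actually called):

```bash
# run every test_* file that pytest discovers
pytest

# run the tests and print a coverage report, including missed lines
# (assumes the package is importable as "tiller")
pytest --cov=tiller --cov-report=term-missing
```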
A learning experience
In all honesty, I'm blown away by how smoothly everything worked together. I've used Jest before for testing a microservice JS app, and writing those tests was a pain. Sometimes they would just break for no reason, or some small config line was wrong, but with Pytest the experience was amazing. I loved how simple it was to set up, and I loved how the library I was using came with its own testing utilities.
However, I suspect the complexity that came with Jest was mainly because of how late into development I added it. The app was already a beast; in comparison, my CLI is still simple, which made me realize an important lesson: the earlier you add a continuous deployment pipeline, the better. In part 2, I'll look at hooking this testing setup into GitHub Actions. Thanks for reading, and see ya next time.