Testing is an essential part of software development that often takes a back seat to feature building. But as I recently learned, good tests do more than just catch bugs—they make your code resilient, modular, and a lot easier to maintain. In this post, I’ll walk through my recent experience setting up and running tests for my Code Complexity Analyzer project, including which tools I used, how I tackled mocking responses from a Large Language Model (LLM) API, and the lessons I picked up along the way.
Choosing a Testing Framework and Tools
For this project, I chose the following tools:
- pytest: A powerful testing framework for Python, pytest is flexible and easy to use, with a large community and plenty of plugins. Its clear syntax and support for complex testing scenarios made it a natural choice for this project.
- pytest-watch: To speed up development, pytest-watch automatically re-runs tests when files change. This tool has proven invaluable for TDD (Test-Driven Development) workflows.
Each tool serves a distinct role in the testing process: pytest is the main framework, while pytest-watch provides an efficient feedback loop by rerunning tests as I edit. Using the two in combination created a seamless and effective testing environment.
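In day-to-day use the difference is a single command; pytest-watch installs a ptw entry point that wraps pytest, so running them looks roughly like this:

pytest   # run the whole suite once
ptw      # watch the project and rerun the suite on every file change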
Setting Up the Tools
Here’s how I set up these tools in my project:
- Install and configure: I added pytest and pytest-watch to the project’s requirements.txt file, ensuring that any developer working on the project could install them easily.
- Using the Makefile: I added targets in the Makefile for running all tests, running a single test, and automatically rerunning tests on file changes. Here’s a snippet of the Makefile:
TEST_RUNNER = pytest

# Run all tests
test:
	$(TEST_RUNNER)

# Run a specific test file or function
test-one:
	@if [ -z "$(TEST)" ]; then \
		echo "Please specify a test file or function with TEST=<file>::<function>"; \
	else \
		pytest $(TEST); \
	fi

# Automatically rerun tests on file changes
watch:
	@echo "Watching for file changes... Press Ctrl+C to stop."
	@while inotifywait -r -e modify,create,delete ./; do \
		clear; \
		pytest; \
	done
With these commands, I could easily run all tests, target specific tests, and set up an automatic test rerun on file changes. This setup kept my workflow fast and the codebase reliable as I iterated on the project.
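In practice, the three targets boil down to these invocations (the test path in the middle one is an illustrative example, not a real file in the repo):

make test                                                # run every test
make test-one TEST=tests/test_cli.py::test_empty_input   # run one file or test
make watch                                               # rerun the suite on every file change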
Mocking LLM Responses
One of the unique challenges of this project was dealing with responses from an external LLM API. Testing directly against the live API would be slow, brittle, and dependent on network stability and API credits. Instead, I used mocking.
To simulate API responses, I used unittest.mock together with pytest to mock the HTTP responses from the LLM API. This way, I could control the response data and ensure that the tests would run consistently.
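To make that concrete, here is a self-contained sketch of the pattern; the analyze_code() helper and the placeholder endpoint stand in for my real wrapper around the LLM API rather than the project’s actual code:

# test_llm_mock.py -- illustrative sketch; analyze_code() is a simplified stand-in
# for my real wrapper around the LLM API, and the URL is a placeholder.
from unittest.mock import Mock, patch

import pytest
import requests

LLM_API_URL = "https://api.example.com/v1/chat/completions"  # placeholder endpoint


def analyze_code(source: str) -> str:
    """Send source code to the LLM and return its complexity report."""
    response = requests.post(LLM_API_URL, json={"prompt": source}, timeout=30)
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]


def test_successful_response():
    fake_json = {"choices": [{"message": {"content": "Cyclomatic complexity: 1"}}]}
    mock_response = Mock(status_code=200)
    mock_response.json.return_value = fake_json
    mock_response.raise_for_status.return_value = None  # no error on success

    # Swap the real HTTP call for the canned response
    with patch("requests.post", return_value=mock_response) as mock_post:
        report = analyze_code("def add(a, b): return a + b")

    mock_post.assert_called_once()
    assert "complexity" in report.lower()


def test_error_response():
    mock_response = Mock(status_code=401)
    mock_response.raise_for_status.side_effect = requests.HTTPError("401 Unauthorized")

    # A bad API key should surface as an HTTP error rather than a confusing crash later
    with patch("requests.post", return_value=mock_response):
        with pytest.raises(requests.HTTPError):
            analyze_code("def add(a, b): return a + b")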
This approach allowed me to simulate both successful and error responses from the API, testing how the code handles different scenarios without relying on real API calls.
Learning from the Testing Process
Writing test cases made me think critically about my code’s structure and exposed areas for improvement. For example, I initially designed the code with a CLI focus, but breaking down functionality into smaller, testable functions improved test coverage and made the code more modular and maintainable.
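As a simplified illustration of that refactor (the function names here are hypothetical, not the analyzer’s real API), the CLI entry point now only wires together small, pure helpers that pytest can exercise directly:

import argparse
from pathlib import Path


def read_source(path: str) -> str:
    """Load the file to analyze; easy to test with pytest's tmp_path fixture."""
    return Path(path).read_text()


def build_prompt(source: str) -> str:
    """Pure function: turn source code into the prompt sent to the LLM."""
    return f"Rate the complexity of the following code:\n\n{source}"


def main() -> None:
    """Thin CLI wrapper -- argument parsing and printing only."""
    parser = argparse.ArgumentParser(description="Code Complexity Analyzer (sketch)")
    parser.add_argument("path", help="file to analyze")
    args = parser.parse_args()
    print(build_prompt(read_source(args.path)))  # the real tool sends this to the LLM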
Key Takeaways and "Aha!" Moments
Testing Forces Modular Design: Splitting code into smaller, focused functions improved both the readability and testability of my code. I had to rework some of my initial code to create functions that were easier to test, and this made the project more flexible and reliable.
Edge Cases Matter: Testing exposed edge cases I hadn’t considered initially, such as handling empty strings or invalid API keys. Mocking responses let me simulate these cases effectively, which highlighted how the code would handle real-world scenarios and prevented potential bugs from going unnoticed.
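For example, a parametrized test along these lines (the validate_source() guard is illustrative, not the project’s exact function) catches the empty-input case once and for all:

import pytest


def validate_source(source: str) -> str:
    """Illustrative input guard: reject empty or whitespace-only submissions."""
    if not source.strip():
        raise ValueError("No source code provided")
    return source


@pytest.mark.parametrize("bad_input", ["", "   ", "\n\t"])
def test_empty_source_is_rejected(bad_input):
    with pytest.raises(ValueError, match="No source code provided"):
        validate_source(bad_input)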
The Power of Automation: Using pytest-watch to automatically rerun tests on file changes streamlined my workflow. Tools like these are game-changers for productivity, and I plan to use them in all future projects.
Bugs and Surprises
During testing, I discovered a few minor bugs, particularly around error handling. For instance, without a valid API key or with a misconfigured endpoint, the code initially didn’t handle the error gracefully. Adding tests forced me to write better error messages and handle unexpected input more robustly.
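One of those tests looked roughly like this; load_api_key() is a simplified stand-in for the project’s configuration code, and the environment variable name is illustrative:

import os
from unittest.mock import patch

import pytest


def load_api_key() -> str:
    """Illustrative config check: fail fast with a readable message if the key is missing."""
    key = os.environ.get("LLM_API_KEY")
    if not key:
        raise RuntimeError("LLM_API_KEY is not set; see the README for setup instructions")
    return key


def test_missing_api_key_gives_clear_message():
    # Run with an empty environment so the key is guaranteed to be absent
    with patch.dict(os.environ, {}, clear=True):
        with pytest.raises(RuntimeError, match="LLM_API_KEY is not set"):
            load_api_key()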
Reflections and Future Testing Plans
This project was a great introduction to automated testing for me. I’ve tested code before but usually informally, by manually running it and observing the output. This experience taught me the value of structured, automated testing. Testing ensures that each change improves the code without introducing new bugs, and it’s particularly important for team environments where multiple developers work on the same codebase.
In the future, I’ll be much more intentional about writing tests early in the development process. Building testability into the design from the start and using tools like pytest and pytest-watch will save time and improve code quality in the long run.
Final Thoughts
Automated testing might seem daunting at first, but the time saved and the peace of mind gained are invaluable. I hope this post helps other developers see the benefits of testing frameworks, and that my setup process offers a clear starting point for those looking to implement similar tools in their projects.