Writing tests for Python code changed my entire relationship with my own work. Before I learned to test properly, I shipped bugs that I didn’t know existed until a user screamed. Then I started treating testing as a craft. I read a lot, broke things on purpose, and eventually discovered eight techniques that turned testing from a chore into a superpower.
The first technique is using fixtures to set up and tear down test environments. In pytest, a fixture is a function decorated with @pytest.fixture that returns something your test needs, like a database connection or a temporary file. The magic is that pytest handles creation and cleanup automatically. I used to write setup and teardown methods in every test class, which was repetitive and easy to mess up. Now I write a fixture once and reuse it across many tests.
import pytest
import tempfile
from pathlib import Path

@pytest.fixture
def temp_dir():
    with tempfile.TemporaryDirectory() as tmp:
        yield Path(tmp)

def test_file_exists(temp_dir):
    file = temp_dir / "note.txt"
    file.touch()
    assert file.exists()
Notice how the fixture yields a path. After the test finishes, pytest runs the cleanup code after the yield, deleting the temporary directory. This means each test starts fresh. I don’t have to remember to delete things. Fixtures can also be scoped to a module or session. For a database that takes time to create, I set scope="module" so it’s created once for all tests in the file.
@pytest.fixture(scope="module")
def db_conn():
    conn = create_database()  # placeholder for your own setup function
    yield conn
    conn.close()
I also use parametrized fixtures. For example, if I have a function that reads CSV, JSON, or XML files, I can write one fixture that generates each format and then tests that handle each one. This reduces duplication and keeps tests clean.
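A parametrized fixture might look like this. It is a minimal sketch: `load_records` is a hypothetical loader, and I only generate CSV and JSON here, but the pattern extends to any number of formats. pytest runs every test that uses the fixture once per parameter.

```python
import csv
import json
import pytest

def load_records(path):
    # Hypothetical loader under test: dispatch on file extension.
    if path.suffix == ".json":
        return json.loads(path.read_text())
    if path.suffix == ".csv":
        with path.open(newline="") as f:
            return list(csv.DictReader(f))
    raise ValueError(f"unsupported format: {path.suffix}")

@pytest.fixture(params=["csv", "json"])
def records_file(request, tmp_path):
    # One fixture, two formats: each test using it runs once per param.
    rows = [{"name": "Alice"}, {"name": "Bob"}]
    if request.param == "json":
        path = tmp_path / "data.json"
        path.write_text(json.dumps(rows))
    else:
        path = tmp_path / "data.csv"
        with path.open("w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=["name"])
            writer.writeheader()
            writer.writerows(rows)
    return path

def test_load_records(records_file):
    names = [r["name"] for r in load_records(records_file)]
    assert names == ["Alice", "Bob"]
```

One test function, two test cases in the report, zero duplicated setup.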
The second technique is mocking with unittest.mock. Mocking lets me replace real objects with fake ones that record how they were called and return predefined values. This is essential for isolating code that talks to external services. When I write code that fetches user data from an API, I don’t want my tests to hit the real API. Instead, I mock the requests.get call.
import requests
from unittest.mock import Mock, patch

def fetch_user(user_id):
    response = requests.get(f"https://api.example.com/users/{user_id}")
    return response.json()

def test_fetch_user():
    mock_response = Mock()
    mock_response.json.return_value = {"name": "Alice"}
    with patch("requests.get", return_value=mock_response) as mock_get:
        result = fetch_user(1)
    assert result["name"] == "Alice"
    mock_get.assert_called_once_with("https://api.example.com/users/1")
Here, patch replaces the real requests.get with a function that returns mock_response. The mock’s json method returns a dictionary. After the test, requests.get is restored. I also check that the mock was called with the correct URL using mock_get.assert_called_once_with(...). This helps me verify that my code hasn’t changed the URL it fetches.
Mocking isn’t just for functions. I mock properties of objects using PropertyMock. I mock context managers by setting the __enter__ and __exit__ returns. It took me a while to understand how to mock async functions, but once I learned, testing async code became easy.
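Mocking a property trips people up because `PropertyMock` must be attached to the class, not the instance. Here is a small sketch; `Sensor` and `is_overheating` are hypothetical stand-ins for code whose property touches something you can't run in tests.

```python
from unittest.mock import PropertyMock, patch

class Sensor:
    # Hypothetical class: imagine `reading` talks to real hardware.
    @property
    def reading(self):
        raise RuntimeError("no hardware attached")

def is_overheating(sensor, limit=90):
    return sensor.reading > limit

def test_overheating():
    # PropertyMock is attached to the *class*; attribute access calls it.
    with patch.object(Sensor, "reading", new_callable=PropertyMock) as mock_reading:
        mock_reading.return_value = 95
        assert is_overheating(Sensor()) is True
        mock_reading.assert_called_once()
```

The same `patch.object` shape works for swapping out any attribute at the class level and restoring it afterwards.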
The third technique is parametrized tests. Without parametrize, you end up writing a separate test for each input. That’s tedious and it’s easy to miss edge cases. pytest’s @pytest.mark.parametrize lets me run the same test logic with multiple inputs.
import pytest

def safe_divide(a, b):
    if b == 0:
        raise ValueError("Division by zero")
    return a / b

@pytest.mark.parametrize("a,b,expected", [
    (10, 2, 5.0),
    (7, 3, 7/3),
    (-1, 1, -1.0),
    (0, 5, 0.0),
])
def test_divide(a, b, expected):
    assert safe_divide(a, b) == expected

@pytest.mark.parametrize("a,b", [
    (5, 0),
    (0, 0),
])
def test_divide_by_zero(a, b):
    with pytest.raises(ValueError):
        safe_divide(a, b)
I make a habit of including edge cases: zero, negative numbers, large numbers. Parametrized tests make this trivial. They also serve as documentation: anyone reading the test can see the expected behavior for different inputs at a glance.
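Since parametrized cases double as documentation, readable test IDs help too. `pytest.param` lets you name each case, so failures report "empty-string" instead of an opaque tuple. A small sketch, with a hypothetical `slugify` helper:

```python
import pytest

def slugify(text):
    # Hypothetical helper: lowercase, collapse whitespace to hyphens.
    return "-".join(text.lower().split())

@pytest.mark.parametrize("text,expected", [
    pytest.param("Hello World", "hello-world", id="two-words"),
    pytest.param("  spaced  ", "spaced", id="surrounding-whitespace"),
    pytest.param("", "", id="empty-string"),
])
def test_slugify(text, expected):
    assert slugify(text) == expected
```

In the test report these show up as `test_slugify[two-words]`, `test_slugify[empty-string]`, and so on, which makes a failing edge case obvious at a glance.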
The fourth technique is property-based testing with the hypothesis library. Instead of writing explicit examples, I tell Hypothesis the shape of the data I want (list of integers, dictionary of text to booleans, etc.), and it generates hundreds of random examples for me. It finds failures I never thought to test.
from hypothesis import given, strategies as st

@given(st.lists(st.integers()))
def test_reverse_twice_is_original(lst):
    assert lst[::-1][::-1] == lst

@given(st.floats(allow_nan=False))
def test_abs_nonnegative(x):
    assert abs(x) >= 0
Hypothesis is especially good at finding floating point edge cases, empty lists, and strings with unicode. It will try to shrink the failing input to the smallest example that breaks your test. I once had a function that parsed dates, and Hypothesis found a date string that my parser couldn’t handle because of timezone offsets. I would never have written that specific test.
When I do property-based testing, I think about invariants: what should always be true for my function? For a sort function, the output should be sorted and contain the same elements as the input. For a checksum, the result should be consistent for the same input. This approach forces me to understand my code at a deeper level.
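The sort invariants above translate directly into a Hypothesis test. This sketch uses the built-in `sorted` as a stand-in for whatever sort you are actually testing:

```python
from collections import Counter
from hypothesis import given, strategies as st

def my_sort(items):
    # Stand-in for the sort implementation under test.
    return sorted(items)

@given(st.lists(st.integers()))
def test_sort_invariants(lst):
    result = my_sort(lst)
    # Invariant 1: the output is ordered.
    assert all(a <= b for a, b in zip(result, result[1:]))
    # Invariant 2: the output is a permutation of the input.
    assert Counter(result) == Counter(lst)
```

Note that neither assertion mentions a specific example. Hypothesis supplies hundreds of inputs, and the invariants must hold for all of them.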
The fifth technique is patching environment variables and time. Code that depends on os.environ or the current time is notoriously hard to test. pytest’s monkeypatch fixture lets me change environment variables in a test. For time, I use freezegun to freeze the clock.
from datetime import datetime
from freezegun import freeze_time

def is_business_hours():
    return 9 <= datetime.now().hour < 17

@freeze_time("2024-03-15 14:30:00")
def test_midday():
    assert is_business_hours() is True

@freeze_time("2024-03-15 20:00:00")
def test_evening():
    assert is_business_hours() is False
Without freezegun, testing a function that checks business hours would require me to run the test at a specific time of day, which is impractical. Now I can test any hour I want.
Similarly, monkeypatch is great for configuration. I once had a module that read a database URL from an environment variable. Using monkeypatch.setenv, I set it to a test SQLite database in my tests without affecting the production environment.
import os

def test_uses_env_var(monkeypatch):
    monkeypatch.setenv("DATABASE_URL", "sqlite:///:memory:")
    # Any code reading the variable now sees the test URL; monkeypatch
    # restores the original environment automatically after the test.
    assert os.environ["DATABASE_URL"] == "sqlite:///:memory:"
The sixth technique is testing async code. Python’s async/await is beautiful, but testing requires special tools. pytest-asyncio adds an asyncio marker that tells pytest to run the test in an event loop. For mocking async functions, I use AsyncMock.
import pytest
from unittest.mock import AsyncMock, MagicMock, patch
import aiohttp

async def fetch_data(url):
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as response:
            return await response.json()

@pytest.mark.asyncio
async def test_fetch_data():
    mock_response = AsyncMock()
    mock_response.json = AsyncMock(return_value={"id": 1})
    # session.get(...) returns an async context manager, not a coroutine,
    # so the session is a MagicMock; its __aenter__/__aexit__ magic
    # methods are AsyncMocks automatically (Python 3.8+).
    mock_session = MagicMock()
    mock_session.__aenter__.return_value = mock_session
    mock_session.get.return_value.__aenter__.return_value = mock_response
    with patch("aiohttp.ClientSession", return_value=mock_session):
        result = await fetch_data("http://example.com/api")
    assert result == {"id": 1}
The key is setting __aenter__ and __aexit__ properly. AsyncMock handles the awaits just like Mock handles regular calls. Once I learned the pattern, I could test any async code confidently.
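When the code under test takes its async dependency as a parameter, you often don't need patch at all: an AsyncMock can be awaited directly. A minimal sketch (`get_user` and `fetch` are hypothetical):

```python
import asyncio
from unittest.mock import AsyncMock

async def get_user(fetch, user_id):
    # `fetch` stands in for any injected awaitable dependency.
    return await fetch(user_id)

def test_get_user():
    mock_fetch = AsyncMock(return_value={"name": "Alice"})
    result = asyncio.run(get_user(mock_fetch, 1))
    assert result == {"name": "Alice"}
    mock_fetch.assert_awaited_once_with(1)
```

`assert_awaited_once_with` is the async counterpart of `assert_called_once_with`, so call verification works exactly as it does with a plain Mock.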
The seventh technique is integration testing with testcontainers. Unit tests mock everything, but sometimes you need to test against a real database or message queue. testcontainers spins up a Docker container for the duration of the test. For example, to test a PostgreSQL query, I can use PostgresContainer.
from testcontainers.postgres import PostgresContainer
import psycopg2

def test_postgres_insert():
    with PostgresContainer("postgres:14") as postgres:
        conn = psycopg2.connect(
            host=postgres.get_container_host_ip(),
            port=postgres.get_exposed_port(5432),
            # in testcontainers 4.x these attributes are named
            # postgres.username / postgres.password / postgres.dbname
            user=postgres.POSTGRES_USER,
            password=postgres.POSTGRES_PASSWORD,
            dbname=postgres.POSTGRES_DB,
        )
        cur = conn.cursor()
        cur.execute("CREATE TABLE items (id SERIAL PRIMARY KEY, name TEXT)")
        cur.execute("INSERT INTO items (name) VALUES (%s)", ("test",))
        cur.execute("SELECT name FROM items")
        assert cur.fetchone()[0] == "test"
        cur.close()
        conn.close()
The container starts when the with block begins and stops when it ends. This is much better than mocking a database, because it picks up real SQL quirks. I also use testcontainers for Redis, RabbitMQ, and even Selenium.
The eighth technique is coverage analysis. Running tests is only half the battle. You need to know which lines of code your tests actually execute. pytest-cov wraps coverage.py and generates reports. I set a coverage threshold so that if coverage drops below a certain percentage, the build fails. This gives me a safety net.
# In pyproject.toml:
# [tool.pytest.ini_options]
# addopts = "--cov=my_module --cov-report=html --cov-fail-under=80"
I also use branch coverage (--cov-branch) to ensure every if and else is tested. A simple line coverage might show 100% but miss an untaken branch. Coverage reports also highlight code I forgot to test, which reminds me to write additional tests.
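To see why branch coverage matters, consider an `if` with no `else` (a small illustrative sketch): a single test can execute every line while never exercising the path where the condition is false.

```python
def clamp(n, limit=100):
    if n > limit:
        n = limit
    return n

def test_clamp_over_limit():
    # This one test executes every line of clamp (100% line coverage),
    # yet the "condition false" path is never taken. Running pytest with
    # --cov-branch reports that missing branch.
    assert clamp(150) == 100
```

Adding a second case such as `assert clamp(5) == 5` closes the gap, and the branch report is what tells you it was open in the first place.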
Now, I want to share how I combine these techniques. For a recent web scraping project, I used parametrize for different URL patterns, mock for the HTTP requests, property-based testing for parsing HTML edge cases, and freezegun to simulate time-based caching. Integration tests against a local Selenium container verified the full flow. The coverage report told me I missed error handling for network timeouts, so I added that test.
One personal trick: I always write a test that fails first. I think about a bug I’ve seen before and write a test that exposes it. Then I fix the code and watch the test pass. This is test-driven development in practice, and it’s incredibly satisfying.
Another tip: keep your tests fast. Unit tests should run in milliseconds. If they take seconds, you won’t run them often. Use fixtures carefully – don’t create a database connection for every test if you only need it for a few. Mock heavy operations. Integration tests can be slower, but isolate them in a separate directory and run them only on CI.
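One common pattern for isolating the slow tests is a custom marker that is deselected by default. This is a sketch; it assumes you register the `slow` marker in your pytest config, for example `markers = ["slow: long-running integration tests"]` and `addopts = '-m "not slow"'` under `[tool.pytest.ini_options]`.

```python
import pytest

@pytest.mark.slow
def test_full_pipeline():
    # Deselected by the default "-m 'not slow'" addopts; CI selects it
    # explicitly with `pytest -m slow`.
    assert True

def test_fast_unit():
    # Always runs, in milliseconds.
    assert 1 + 1 == 2
```

With this in place, `pytest` on your laptop stays fast, and `pytest -m slow` on CI still exercises the expensive paths.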
I also believe tests should be readable. Name your test functions clearly: test_calculate_tax_returns_zero_for_zero_income. Use descriptive variable names. Write comments only when the logic is non-obvious. The test itself is documentation of what the code should do.
Finally, don’t be afraid to refactor tests along with code. If your tests are tightly coupled to implementation details, they will break when you change the code. Mock at boundaries (like external APIs) but not inside your own classes. Use fixtures to set up state, not to mock internal behavior. This way, tests stay robust as your code evolves.
I’ve been using these eight techniques for years, and they’ve saved me countless hours of debugging. They turned me from someone who feared changing code into someone who welcomes improvements. Testing isn’t a burden – it’s a guide. By mastering fixtures, mocking, parametrization, property-based testing, environment control, async testing, integration containers, and coverage analysis, you can write Python that works correctly, behaves predictably, and is a joy to maintain.
Start with one technique. Write a simple fixture. Then add a mock to your next test. Soon you’ll wonder how you ever lived without them.