Every Python project eventually needs the same test infrastructure. You write conftest.py from scratch, reach for the same Hypothesis patterns, copy async fixture code from a Stack Overflow answer that's three major versions out of date.
I've done this enough times that I started keeping the files. This week I packaged them up.
The Python Testing Toolkit is available now on Gumroad: four production-ready Python files — conftest_production.py, hypothesis_strategies.py, async_test_patterns.py, and parametrize_factories.py. $49, one-time download, MIT license.
Here's exactly what's in each one.
## conftest_production.py — the conftest you'd write if you had eight hours
Most conftest.py tutorials show you database fixtures or HTTP mocking. Almost none show both in a form that composes cleanly with async tests, FastAPI dependency overrides, and environment variable isolation.
This file handles all of it:
```python
# Transaction rollback per test — no cleanup needed
def test_creates_user(db_session):
    user = User(name="Alice", email="alice@example.com")
    db_session.add(user)
    db_session.flush()

    result = db_session.get(User, user.id)
    assert result.name == "Alice"
    # rolls back automatically — nothing persists between tests
```
```python
# FastAPI dependency overrides without boilerplate
def test_endpoint_with_mock_service(client_with_override):
    mock_service = MagicMock(return_value={"id": 1, "name": "Alice"})
    with client_with_override({get_user_service: lambda: mock_service}) as c:
        response = c.get("/users/1")
        assert response.status_code == 200
```
```python
# Environment variables that reset between tests
def test_feature_flag(env_override):
    with env_override(ENABLE_BETA="true", DATABASE_URL="sqlite:///:memory:"):
        result = function_that_reads_env()
        assert result.used_beta_path
    # ENABLE_BETA is unset again here — no state leaks
```
The database fixture defaults to SQLite with transaction rollback per test. Swap to Postgres by setting a `DATABASE_URL` environment variable — the fixture handles both without modification. HTTP mocking covers both httpx (via respx) and requests (via responses), because real projects frequently use both.
Drop this into your project root as conftest.py. pytest finds everything automatically.
## hypothesis_strategies.py — stop using text() for emails
Hypothesis ships with `text()`, `integers()`, `floats()`, `dates()`. Those are the right building blocks for testing algorithms. They're useless for testing application code that expects email addresses, usernames, monetary amounts, or URL paths.

When Hypothesis generates `"\x00\x7f\ud800"` as a test email, your validator fails for the wrong reason. You end up fixing the test to exclude those cases instead of fixing the code, and the test becomes meaningless.
This file has 30+ strategies for the types that appear in every domain:
```python
from hypothesis import given
from hypothesis_strategies import emails, usernames

@given(email=emails(), username=usernames())
def test_user_registration(email, username):
    user = User.create(email=email, username=username)
    assert user.email == email.lower()
    assert len(user.username) >= 3
```
```python
from decimal import Decimal
from hypothesis_strategies import monetary_amounts

@given(amount=monetary_amounts(min_value=Decimal("0.01"), max_value=Decimal("10000.00")))
def test_payment_processing(amount):
    result = process_payment(amount)
    assert result.status == "success"
    assert result.charged == amount
```
```python
# Generate complete valid payloads in one line
from hypothesis_strategies import user_registration_data

@given(registration=user_registration_data())
def test_registration_endpoint(client, registration):
    response = client.post("/register", json=registration)

    # Should be 201 (success) or 422 (validation error), never 500
    assert response.status_code in (201, 422)
```
The user_registration_data() strategy composes emails(), usernames(), and optionally phone_numbers_e164() into a valid registration dict. order_data() generates line items with Decimal amounts that sum correctly. Each strategy accepts parameters for tightening bounds when you need to test specific edge cases.
The Hypothesis article earlier in this series covers the basics. This file is the production-grade implementation.
## async_test_patterns.py — the patterns that don't appear in the pytest-asyncio README
Async testing has three problems that show up together:

- Your `httpx.AsyncClient` doesn't know about FastAPI's lifespan (startup/shutdown events)
- Background tasks run after the response returns, so your assertions fire before the side effects settle
- `AsyncMock` syntax is verbose and inconsistent across Python versions
```python
# Lifespan-aware async client — startup and shutdown events fire correctly
async def test_create_post(async_http_client):
    response = await async_http_client.post("/posts", json={
        "title": "Hello",
        "body": "World",
    })
    assert response.status_code == 201
```
```python
# Background tasks: drain before asserting side effects
async def test_welcome_email_sent(async_http_client, drain_background_tasks):
    response = await async_http_client.post("/users", json={"email": "new@example.com"})
    assert response.status_code == 201

    await drain_background_tasks()  # flush FastAPI's background task queue
    assert email_inbox.has_message_for("new@example.com")
```
```python
# AsyncMock helpers — concise patterns for common cases
from unittest.mock import patch

import httpx

from async_test_patterns import async_mock_returning, async_mock_raising

async def test_service_timeout(async_http_client):
    with patch("myapp.external.fetch_data", async_mock_raising(httpx.TimeoutException(""))):
        response = await async_http_client.get("/data")
        assert response.status_code == 503
```
The file also includes async_db_session (async SQLAlchemy with SAVEPOINT rollback per test), assert_all_resolve() (run a list of coroutines in parallel and assert they all complete within a timeout), and a guide to event loop scope with the actual trade-offs documented.
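The core of a helper like `assert_all_resolve()` fits in a few lines. This is a sketch of the idea under my own assumptions, not the shipped implementation (which also reports *which* coroutine hung):

```python
import asyncio


async def assert_all_resolve(coros, timeout=5.0):
    """Run coroutines concurrently; fail if any raises or the whole
    batch takes longer than `timeout` seconds. Returns results in order."""
    return await asyncio.wait_for(asyncio.gather(*coros), timeout=timeout)
```

`asyncio.gather` preserves input order, so you can assert on positional results, and `wait_for` converts a hung coroutine into a `TimeoutError` instead of a test that never finishes.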
The most common async test failure — "event loop is closed" — is fixed by adding two lines to `pyproject.toml`:
```toml
[tool.pytest.ini_options]
asyncio_mode = "auto"
asyncio_default_fixture_loop_scope = "session"
```
This file includes that config and explains why.
## parametrize_factories.py — readable parametrize that doesn't collapse under its own weight
`@pytest.mark.parametrize` with more than four cases becomes a list of tuples that nobody can read six months later. The generated test IDs are `test_function[param0-param1-param2]`, which tells you nothing about which case failed.
```python
import pytest
from parametrize_factories import Case, cases

@pytest.mark.parametrize("case", cases(
    Case("valid email", value="user@example.com", expected=True),
    Case("no at sign", value="notanemail", expected=False),
    Case("empty string", value="", expected=False),
    Case("missing domain", value="user@", expected=False),
))
def test_email_validation(case):
    assert is_valid_email(case.value) == case.expected
```
Output:

```
PASSED test_email_validation[valid email]
PASSED test_email_validation[no at sign]
FAILED test_email_validation[empty string]
```
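The mechanism behind those readable IDs is small. Here's a sketch of what `Case` and `cases()` might look like — my reconstruction for illustration; the shipped version likely supports arbitrary fields and per-case marks:

```python
from dataclasses import dataclass
from typing import Any

import pytest


@dataclass
class Case:
    """One named parametrize case."""
    id: str
    value: Any = None
    expected: Any = None


def cases(*items):
    """Wrap each Case in pytest.param so its id becomes the test ID."""
    return [pytest.param(case, id=case.id) for case in items]
```

`pytest.param(..., id=...)` is the standard pytest mechanism for naming a case; the factory just makes sure you never forget to use it.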
```python
# All combinations of dimensions — 9 test cases from 3×3 values
from parametrize_factories import matrix

@pytest.mark.parametrize("combo", matrix(
    method=["GET", "POST", "DELETE"],
    role=["admin", "viewer", "anonymous"],
))
def test_access_control(combo, client):
    response = make_request(method=combo["method"], role=combo["role"])
    assert response.status_code != 500
```
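`matrix()` is essentially `itertools.product` with named dimensions. A minimal sketch of the assumed behavior, not the shipped code:

```python
from itertools import product


def matrix(**dimensions):
    """Cartesian product of named dimensions, one dict per combination."""
    keys = list(dimensions)
    return [dict(zip(keys, values)) for values in product(*dimensions.values())]
```

Because each combination is a dict keyed by dimension name, the test body reads `combo["method"]` instead of unpacking a positional tuple.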
```python
# Skip cases where the environment isn't configured
import os

import pytest
from parametrize_factories import skip_if_missing
from sqlalchemy import create_engine

@pytest.mark.parametrize("db_url", [
    skip_if_missing(os.getenv("POSTGRES_URL"), "POSTGRES_URL not set"),
    skip_if_missing(os.getenv("MYSQL_URL"), "MYSQL_URL not set"),
    "sqlite:///:memory:",
])
def test_database_connection(db_url):
    engine = create_engine(db_url)
    ...
```
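`skip_if_missing()` boils down to wrapping the value in a skip-marked `pytest.param`. A plausible sketch (assumed behavior, not the shipped code):

```python
import pytest


def skip_if_missing(value, reason):
    """Pass the value through, or emit a skipped param when it's unset."""
    if value:
        return value
    return pytest.param(value, marks=pytest.mark.skip(reason=reason))
```

The skipped case still shows up in the test report with its reason, which is more honest than silently dropping the parametrization when an environment variable is absent.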
No pytest-cases dependency — plain pytest throughout.
## What it doesn't include
**Not tutorial content.** No explanations of what `@pytest.fixture` does or how scopes work. If you're new to pytest, *pytest fixtures that actually scale* covers that.

**Not Django-specific.** The HTTP mocking and parametrize factories work anywhere. The database and async fixtures are FastAPI-oriented. The Hypothesis strategies are framework-independent.

**Not a subscription, SaaS integration, or CI dashboard.** Four Python files. Download once, use them in any project, forever.
## Getting it
Python Testing Toolkit on Gumroad →
$49 one-time. MIT license. Includes all four files and the README with drop-in instructions, pyproject.toml configuration, and CI integration notes.
If you've been following the Testing Without the Subscription Tax series, this is the practical companion to the articles. The async testing article drops in September — the async_test_patterns.py file from this toolkit is the implementation behind the patterns in that article.
Questions? Drop them in the comments or reach me at @peytongreen_dev.
Part of the Testing Without the Subscription Tax series.