Part 2 of building a retail inventory API from scratch.
In Part 1, I explained why I archived my first API and started over. I ended that post with a confession: the new version still had a flat structure and no tests. Not portfolio material yet.
This is what happened next.
The Flat Structure Problem
The v2 codebase worked, but everything lived at the root level. products.py, variants.py, analytics.py, database.py — all siblings, all imports pointing everywhere. When I added a new router, I had to touch three files. When something broke, I had to search across the whole project.
The fix: move everything into an app/ package with proper separation.
```
app/
├── __init__.py
├── config.py
├── database.py
├── controllers/
├── models/
├── routers/
├── middleware/
└── utils/
    └── errors/
```
Each folder has a clear responsibility. controllers/ is business logic. routers/ is just route definitions. models/ is data shapes. middleware/ handles logging and exception handling. utils/errors/ defines custom exceptions.
This sounds obvious. It wasn't to me three months ago.
The __init__.py Pattern
Every folder has an __init__.py that re-exports everything inside it. The goal: any file in the project should be able to import from a single clean path instead of hunting through nested modules.
```python
# app/controllers/__init__.py
from .products import (
    create_product_controller as create_product_controller,
    delete_product_controller as delete_product_controller,
    get_product_controller as get_product_controller,
    get_products_controller as get_products_controller,
    update_product_controller as update_product_controller,
)
```
The as X syntax is not redundant — it tells the linter (ruff) that these re-exports are intentional, not unused imports. Without it, CI fails.
Now a router just does:
```python
from ..controllers import create_product_controller, get_product_controller
```
Clean. One source of truth per namespace.
Dev Tooling: uv + Taskipy + Ruff
Three tools that changed how I work:
uv — package manager. Replaces pip. Dramatically faster, lock file included, virtual env handled automatically. No more pip install -r requirements.txt and hoping for the best.
Taskipy — task runner. Instead of remembering long commands, I define shortcuts in pyproject.toml:
```toml
[tool.taskipy.tasks]
dev = "uvicorn main:app --reload"
build = "docker compose up --build"
lint = "ruff check ."
fix = "ruff check . --fix"
test = "pytest -v"
dev_test = "ruff check . && pytest -v"
```
task dev_test runs lint then tests in one shot. task fix auto-corrects most ruff violations. Small thing, big quality of life improvement.
Ruff — linter and formatter. Replaces flake8, isort, and black in one tool. Fast, opinionated, catches real problems. Made me write better imports and stop accumulating dead code.
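For reference, a minimal ruff setup can live in the same pyproject.toml as the taskipy tasks. The rule selection below is an illustration, not this project's actual config:

```toml
[tool.ruff]
line-length = 88

[tool.ruff.lint]
# E/W: pycodestyle, F: pyflakes, I: import sorting (replaces isort)
select = ["E", "F", "I", "W"]
```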
The Bug Safari
Restructuring is not just moving files. Every import path breaks. Here's what I found:
Reserved logging keys. I had this in a controller:
```python
logger.info("Product created", extra={"name": product.name})
```
This caused a 500 error. name is a reserved key in Python's LogRecord — you can't use it in extra. Renamed to product_name, fixed. The error only appeared at runtime, not at import time.
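The collision is easy to reproduce in isolation. A minimal sketch (logger name and product name invented):

```python
import logging

logger = logging.getLogger("inventory")
logger.setLevel(logging.INFO)

# LogRecord reserves attributes such as "name", "msg", "args", and
# "module"; passing one of them via extra raises KeyError at call time.
try:
    logger.info("Product created", extra={"name": "Blue T-Shirt"})
except KeyError:
    pass  # "Attempt to overwrite 'name' in LogRecord"

# A prefixed key avoids the collision.
logger.info("Product created", extra={"product_name": "Blue T-Shirt"})
```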
Route ordering conflict. GET /products/total_value kept returning 422. The analytics router was registered after the products router in main.py, so FastAPI matched total_value as the product_id path parameter; since it isn't a valid ID, validation failed with 422. Moved the analytics router first. Fixed.
Missing product_id in variant creation. ProductVariant.model_validate(variant) failed validation because product_id was required but not in the incoming request body — it comes from the URL path. The fix:
```python
db_variant = ProductVariant.model_validate(
    variant, update={"product_id": product_id}
)
```
The update parameter is SQLModel's addition on top of Pydantic v2's model_validate: it merges extra fields into the model during validation.
Migrating to Supabase
The original setup used a local PostgreSQL container via Docker Compose. Fine for development, annoying for deployment. Supabase gives you a managed PostgreSQL instance with a connection pooler.
The config change was minimal — just different environment variables:
```shell
# Local Docker
DB_HOST=localhost
DB_PORT=5432

# Supabase (Transaction pooler)
DB_HOST=aws-0-eu-central-1.pooler.supabase.com
DB_PORT=6543
```
One gotcha: Supabase has two pooler types. Session pooler keeps a persistent connection per client. Transaction pooler reuses connections across requests — better for serverless and low-traffic APIs on free tier. I use the Transaction pooler.
The connection string is assembled in database.py:
```python
engine = create_engine(
    f"postgresql://{config.db_username}:{config.db_password}"
    f"@{config.db_host}:{config.db_port}/{config.db_name}",
    connect_args={"sslmode": "prefer"},
)
```
If your password contains special characters (@, #, !), they'll break the URL. Use urllib.parse.quote_plus(password) to encode it.
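For example (credentials invented):

```python
from urllib.parse import quote_plus

password = "p@ss#word!"
encoded = quote_plus(password)  # "p%40ss%23word%21"

# Safe to interpolate into the connection URL now.
url = f"postgresql://inventory_user:{encoded}@db.example.com:6543/postgres"
```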
Writing Tests
This was the part I'd been avoiding. Not because I didn't want tests — because I didn't know how to write them without hitting the real database.
The solution: SQLite in-memory + FastAPI's dependency override pattern.
```python
# tests/conftest.py
from contextlib import asynccontextmanager

from fastapi.testclient import TestClient
import pytest
from sqlmodel import Session, SQLModel, create_engine
from sqlmodel.pool import StaticPool

from app.database import get_session
from main import app

engine = create_engine(
    "sqlite://", connect_args={"check_same_thread": False}, poolclass=StaticPool
)


@asynccontextmanager
async def lifespan_override(app):
    yield


@pytest.fixture(name="session")
def session_fixture():
    SQLModel.metadata.drop_all(engine)
    SQLModel.metadata.create_all(engine)
    with Session(engine) as session:
        yield session
    SQLModel.metadata.drop_all(engine)


@pytest.fixture(name="client")
def client_fixture(session):
    app.dependency_overrides[get_session] = lambda: session
    app.router.lifespan_context = lifespan_override
    with TestClient(app) as client:
        yield client
    app.dependency_overrides.clear()
```
Three things are happening here:
- SQLite in-memory replaces PostgreSQL. No cloud DB needed, no network, instant setup.
- The get_session dependency is overridden — every request gets the test session, not a real DB connection.
- The lifespan is overridden with a no-op — this prevents the app from trying to connect to Supabase on test startup.
Each test gets a fresh database. No shared state, no cleanup between tests.
What the Tests Found
This is the part worth highlighting.
While writing tests for the variant endpoints, I noticed the PATCH and DELETE routes didn't validate product existence before operating on variants:
```python
# Before: missing product check
def update_product_variant_router(product_id, variant_id, ...):
    variant = update_product_variant_controller(variant_id, ...)
    if not variant:
        raise ProductVariantNotFoundException(variant_id)
    return variant
```
You could PATCH /products/9999/variants/1 and it would happily update the variant — even if product 9999 didn't exist. Same for DELETE. Inconsistent behavior, not caught until tests tried to assert PRODUCT_NOT_FOUND.
The fix was two lines per route:
```python
product = get_product_controller(product_id, session)
if not product:
    raise ProductNotFoundException(product_id)
```
This is exactly what tests are for. Not just confirming the happy path — stress-testing assumptions.
Final result: 29 tests, 97% coverage.
```
TOTAL    412    11    97%
```
The 3% uncovered is intentional: database error branches that only trigger on real DB failures, and the Supabase connection code skipped by the lifespan override.
Where It Is Now
The API is live on Render, connected to Supabase, monitored by UptimeRobot.
Live docs: retail-inventory-api-yati.onrender.com/docs
What's still missing: integration tests against the live deployment, an analytics expansion, and a repository pattern refactor I've been putting off.
What's next: a chatbot API. This retail API becomes the data layer. The chatbot queries it.
Transitioning from retail operations to AI engineering. Follow the journey:
GitHub | LinkedIn