# Python Logging Guide

## Why Structured Logging?
Plain-text logs are hard to search, filter, and alert on at scale. Structured
logging (JSON) makes every log line a queryable record — essential for
production observability.
| Feature | Plain Text | Structured (JSON) |
|---|---|---|
| Human-readable | Yes | With tooling |
| Machine-parseable | Regex needed | Native |
| Searchable | grep | Field queries |
| Alerting | Pattern match | Exact field match |
| Context (request_id) | Manual | Automatic |
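
A structured record takes only a few lines of stdlib code to produce. This is a minimal sketch, not the toolkit's actual formatter (which presumably also adds timestamps and context fields):

```python
import json
import logging


class JsonFormatter(logging.Formatter):
    """Render each record as one JSON object per line."""

    def format(self, record: logging.LogRecord) -> str:
        payload = {
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        }
        return json.dumps(payload)


def format_record(message: str) -> str:
    """Build a record by hand and run it through the formatter."""
    record = logging.LogRecord(
        name="demo", level=logging.INFO, pathname=__file__,
        lineno=1, msg=message, args=(), exc_info=None,
    )
    return JsonFormatter().format(record)
```

Because every line is valid JSON, a log shipper can index each field directly instead of regex-parsing free text.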
## Configuration Strategy

Use environment-specific YAML configs:

```text
configs/
├── logging_dev.yaml   # Colorized console, DEBUG level
├── logging_prod.yaml  # JSON to stdout + file, INFO level
└── logging_test.yaml  # WARNING only, minimal output
```
Load with one call:

```python
from src.setup import configure_logging

configure_logging("prod")
```
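
`configure_logging` presumably parses the chosen YAML file and hands it to the stdlib's `logging.config.dictConfig`. Here is a stdlib-only sketch of what the prod config might amount to, with the YAML replaced by an inline dict so it runs without PyYAML (the exact formatters and handlers are assumptions):

```python
import logging
import logging.config

# Roughly what logging_prod.yaml might deserialize to
PROD_CONFIG = {
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "plain": {"format": "%(asctime)s %(levelname)s %(name)s %(message)s"},
    },
    "handlers": {
        "stdout": {
            "class": "logging.StreamHandler",
            "formatter": "plain",
            "stream": "ext://sys.stdout",
        },
    },
    "root": {"level": "INFO", "handlers": ["stdout"]},
}

logging.config.dictConfig(PROD_CONFIG)
logger = logging.getLogger("app")
```

Keeping the schema in `dictConfig` form means the same code path handles dev, prod, and test configs; only the data file changes.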
## Request Context Propagation

Every log line should identify which request it belongs to. This toolkit
uses `contextvars`, which works with asyncio, threads, and sync code:

```python
from src.context import set_context, generate_request_id

# In middleware (automatic):
set_context(request_id=generate_request_id())

# In your code:
set_context(user_id="usr-42", tenant="acme")

# Every log line now includes these fields automatically
```
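
Under the hood, this pattern needs only a `ContextVar` holding a dict plus a logging filter that copies the dict onto each record. A rough sketch (the toolkit's real `set_context` may differ):

```python
import contextvars
import logging

# One ContextVar holding the request-scoped fields; each set_context call
# stores a fresh dict, so concurrent tasks never see each other's fields.
_context: contextvars.ContextVar[dict] = contextvars.ContextVar(
    "log_context", default={}
)


def set_context(**fields) -> None:
    """Merge fields into the current context."""
    _context.set({**_context.get(), **fields})


class ContextFilter(logging.Filter):
    """Copy context fields onto every record so formatters can emit them."""

    def filter(self, record: logging.LogRecord) -> bool:
        for key, value in _context.get().items():
            setattr(record, key, value)
        return True  # never drops records, only annotates them
```

Attach `ContextFilter` to a handler and any `%(request_id)s`-style format string (or JSON formatter) can read the fields straight off the record.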
## Middleware Integration

### FastAPI / Starlette (ASGI)

```python
from src.middleware import ASGILoggingMiddleware

app.add_middleware(ASGILoggingMiddleware)
```

### Flask / Django (WSGI)

```python
from src.middleware import WSGILoggingMiddleware

app.wsgi_app = WSGILoggingMiddleware(app.wsgi_app)
```
Both middleware:

- Generate or extract `request_id` from headers
- Set context variables
- Log request start/finish with duration
- Clean up context after the request
## Filtering Noise

### Suppress third-party loggers

```python
from src.filters import SuppressLoggerFilter

handler.addFilter(SuppressLoggerFilter(["urllib3", "botocore", "asyncio"]))
```
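
Such a filter is a few lines of stdlib code. A hypothetical equivalent (`SuppressLoggerSketch` is an illustrative name, not the toolkit's class):

```python
import logging


class SuppressLoggerSketch(logging.Filter):
    """Drop records from the named loggers and their children."""

    def __init__(self, names):
        super().__init__()
        self.names = tuple(names)

    def filter(self, record: logging.LogRecord) -> bool:
        # Match "urllib3" and "urllib3.connectionpool", but not "urllib3x"
        return not any(
            record.name == name or record.name.startswith(name + ".")
            for name in self.names
        )
```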
### Rate-limit repeated messages

```python
from src.filters import RateLimitFilter

handler.addFilter(RateLimitFilter(period_seconds=60))
```
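
Assuming `RateLimitFilter` lets each distinct message through at most once per period (an assumption; the toolkit's exact semantics may differ, e.g. it might allow a burst before suppressing), a sketch looks like:

```python
import logging
import time


class RateLimitSketch(logging.Filter):
    """Allow each distinct message once per period; drop repeats in between."""

    def __init__(self, period_seconds: float = 60.0):
        super().__init__()
        self.period = period_seconds
        self._last_seen: dict[str, float] = {}

    def filter(self, record: logging.LogRecord) -> bool:
        now = time.monotonic()
        key = record.getMessage()
        last = self._last_seen.get(key)
        if last is not None and now - last < self.period:
            return False  # seen recently: suppress
        self._last_seen[key] = now
        return True
```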
## Performance Tips

- **Use `QueueHandler` in production.** `create_async_handler()` offloads
  I/O to a background thread so logging never blocks your request.
- **Set appropriate levels.** Don't log DEBUG in production; every log
  line has a cost (CPU, I/O, storage).
- **Use lazy formatting.** `logger.info("User %s", user_id)` is faster
  than `logger.info(f"User {user_id}")` because formatting is skipped
  if the level is disabled.
- **Rotate logs.** Use `RotatingFileHandler` or `TimedRotatingFileHandler`
  to prevent disk exhaustion.
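
The first tip maps directly onto the stdlib's `QueueHandler`/`QueueListener` pair; `create_async_handler()` presumably does something along these lines (the function name and return shape here are a sketch, not the toolkit's signature):

```python
import logging
import queue
from logging.handlers import QueueHandler, QueueListener


def create_async_handler_sketch(target: logging.Handler):
    """Route records through a queue; a background thread does the real I/O."""
    q: queue.Queue = queue.Queue(-1)  # unbounded, so logging never blocks
    listener = QueueListener(q, target, respect_handler_level=True)
    listener.start()  # background thread begins draining the queue
    # Attach the returned QueueHandler to your loggers; call listener.stop()
    # at shutdown to flush remaining records.
    return QueueHandler(q), listener
```

Enqueueing a record is cheap; the expensive formatting and I/O happen on the listener's thread, off the request path.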
## Common Patterns

### Add context to exceptions

```python
try:
    process_order(order_id)
except Exception:
    logger.exception("Failed to process order", extra={"order_id": order_id})
```

### Structured extra fields

```python
logger.info(
    "Payment processed",
    extra={"amount": 49.99, "currency": "USD", "order_id": "ORD-123"},
)
```

### Conditional logging

```python
if logger.isEnabledFor(logging.DEBUG):
    # Expensive serialization only when DEBUG is active
    logger.debug("Full payload: %s", json.dumps(payload, indent=2))
```
## Anti-Patterns

- `print()` instead of logging: no levels, no formatting, no routing.
- Logging sensitive data: never log passwords, tokens, or PII.
- Catching and logging without re-raising: swallows the error.
- String concatenation in log calls: use `%s` placeholders instead.
- One logger for everything: use `logging.getLogger(__name__)` per module.
This is 1 of 14 resources in the Python Developer Pro toolkit. Get the complete [Python Logging & Config] with all files, templates, and documentation for $19.
Or grab the entire Python Developer Pro bundle (14 products) for $159 — save 30%.