Day 23: The Pulse of the System — JSON Logs, StreamHandlers & Async Observability
7 min read
Series: Logic & Legacy
Day 23 / 30
Level: Senior Architecture
⏳ Context: We have established the network boundaries, navigated recursion limits, and explicitly tested the logic. But when a server goes down at 3:00 AM, your test suite won't tell you what happened in production. You need a flight data recorder. Today, we leave print() behind and architect true observability.
"I don't need a debugger; I have print statements."
These are the famous last words of a junior engineer before a massive production outage. print() is a local development crutch. It lacks severity levels, it has no context (timestamp, thread ID, exact file and line), it cannot be dynamically routed to external dashboards (like Datadog or the ELK stack), and most dangerously, it blocks the calling thread while the write to disk or stdout completes.
Senior Architects do not write "logs". They architect Event Streams. They understand that a log is an immutable record of an event that occurred at a specific point in time, and it must be structured and routed with absolute precision.
Table of Contents 🕉️
- The Hierarchy of Severity (Logging Levels)
- The Triad: Loggers, Handlers & Formatters
- Structured JSON Logging (Machine-Readable)
- The Concurrency Trap & aiologger
- Tracing Exceptions (exc_info)
1. The Hierarchy of Severity
A logging system must filter noise. Python assigns mathematical integer weights to events so you can filter them dynamically. In development, you might want to see everything. In production, you only want to see things that are broken.
- DEBUG (10) — Granular diagnostic info. "Variable x is 42."
- INFO (20) — General operational events. "Server started on port 8080."
- WARNING (30) — Unexpected event, but the system recovers. "API rate limit near."
- ERROR (40) — A specific function failed. "Database connection lost."
- CRITICAL (50) — Total system failure. "Out of memory. Shutting down."
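Because these levels are plain integer constants on the logging module, a logger's threshold check is just an integer comparison. A minimal sketch (the logger name "demo" is my own):

```python
import logging

# Severity levels are ordinary module-level integer constants.
assert logging.DEBUG == 10
assert logging.CRITICAL == 50

logger = logging.getLogger("demo")
logger.setLevel(logging.WARNING)  # production setting: drop DEBUG and INFO

# isEnabledFor() asks: "would a record at this level pass the threshold?"
print(logger.isEnabledFor(logging.INFO))   # False: 20 < 30
print(logger.isEnabledFor(logging.ERROR))  # True:  40 >= 30
```

This is also why isEnabledFor() is cheap enough to guard expensive debug-only string formatting.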
2. The Triad: Loggers, Handlers & Formatters
Most beginners use logging.info("Hello"). This attaches to the Root Logger with a generic configuration, making it impossible to control logging per module in a large architecture. To engineer an event stream, you must explicitly assemble the Logging Triad:
1. The Logger: the interface your code talks to (logging.getLogger(__name__)). It filters messages based on severity.
2. The Formatter: the template. It decides what the message looks like (timestamp, module, line number).
3. The Handler: the destination. According to the Twelve-Factor App methodology, web apps should never write to .txt files. They should write unbuffered data to sys.stdout via a StreamHandler. The infrastructure (Docker/Kubernetes) captures that stream and routes it to storage.
The 12-Factor StreamHandler Architecture
import logging
import sys
# 1. Use __name__ so the logger is named after the current Python file
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO) # Block DEBUG logs in production
# Prevent logs from propagating to the default root logger (stops double-printing)
logger.propagate = False
# 2. Create the Handler (Write directly to stdout)
handler = logging.StreamHandler(sys.stdout)
# 3. Create the Formatter (Time | Level | File:Line | Message)
formatter = logging.Formatter(
    '%(asctime)s | %(levelname)-8s | [%(filename)s:%(lineno)d] | %(message)s',
    datefmt='%Y-%m-%d %H:%M:%S'
)
# Assemble the Triad
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.info("Authentication service initialized.")
3. Structured JSON Logging (Machine-Readable)
The plaintext formatter above is great for humans reading a terminal. But in an enterprise, humans don't read logs; machines do.
When you have 50 microservices generating millions of logs per minute, you pipe them into an ELK stack (Elasticsearch, Logstash, Kibana) or Datadog. Plaintext logs require complex Regex to parse. Senior Architects bypass this by logging directly in JSON.
Implementing python-json-logger
# pip install python-json-logger
import logging
from pythonjsonlogger import jsonlogger
import sys
logger = logging.getLogger("json_service")
handler = logging.StreamHandler(sys.stdout)
# The Formatter dynamically converts these keys into a JSON payload
formatter = jsonlogger.JsonFormatter(
    '%(asctime)s %(levelname)s %(name)s %(process)d %(message)s'
)
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.INFO)
# You can pass extra contextual dictionary data
logger.error("Payment failed", extra={"user_id": 99, "gateway": "stripe"})
Sample output (one JSON object per line):
{"asctime": "2026-04-09 10:15:30", "levelname": "ERROR", "name": "json_service", "process": 10423, "message": "Payment failed", "user_id": 99, "gateway": "stripe"}
4. The Concurrency Trap & aiologger
In Day 22, we learned about the raw speed of asynchronous environments. But here is the deadly trap: Python's standard logging module is synchronous and blocking.
If you use standard logging inside a high-performance FastAPI or aiohttp application, every log line is blocking I/O. While the interpreter waits for the write to stdout to complete, your entire Python event loop freezes: 10,000 concurrent users are stuck waiting just because your server logged an event.
Architects use aiologger. It drops the log message into an async queue and immediately moves on, offloading the physical writing process to a background task.
Non-Blocking Observability
import asyncio
from aiologger import Logger

# Instantiate the async logger with default stdout/stderr handlers
async_logger = Logger.with_default_handlers(name="AsyncWorker")

async def process_data():
    # Notice the 'await' keyword. This hands control back to the event
    # loop while the text physically writes to the stream.
    await async_logger.info("Processing started...")
    # Simulate async network call
    await asyncio.sleep(1)
    await async_logger.warning("Data source latency detected.")
    # Flush and close the logger before the event loop shuts down,
    # otherwise queued messages may be lost
    await async_logger.shutdown()

asyncio.run(process_data())
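If pulling in a third-party dependency is not an option, the standard library has offered a similar decoupling since Python 3.2: logging.handlers.QueueHandler enqueues records almost instantly, while a QueueListener drains the queue and performs the slow stream I/O on a background thread. A minimal sketch (the logger name "worker" is my own):

```python
import logging
import logging.handlers
import queue
import sys

log_queue = queue.Queue(-1)  # unbounded queue between the app and the writer

# The app-facing handler only enqueues records: near-instant, non-blocking.
queue_handler = logging.handlers.QueueHandler(log_queue)
logger = logging.getLogger("worker")
logger.setLevel(logging.INFO)
logger.addHandler(queue_handler)

# The listener owns the slow I/O handler and runs it on a background thread.
stream_handler = logging.StreamHandler(sys.stdout)
listener = logging.handlers.QueueListener(log_queue, stream_handler)
listener.start()

logger.info("Enqueued instantly; written by the listener thread.")

listener.stop()  # flush remaining records and join the background thread
```

This keeps the standard logging API while moving the physical write off the hot path, which is the same architectural idea aiologger applies to the event loop.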
5. Tracing Exceptions (exc_info)
When a try/except block catches a fatal error, calling logger.error("Failed") on its own is almost useless. You lose the traceback, the exact file and line number where the code crashed. By passing exc_info=True, the logger automatically injects the full stack trace into the log payload.
Capturing the Stack Trace
try:
    result = 10 / 0
except ZeroDivisionError:
    # exc_info=True appends the entire traceback to the log record
    logger.error("Mathematical anomaly detected!", exc_info=True)
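A handy shorthand for the same thing: logger.exception() behaves exactly like logger.error(..., exc_info=True), and it must be called from inside an except block so there is an active traceback to capture. A small self-contained sketch (the logger name "tracer" is my own):

```python
import logging

logging.basicConfig(level=logging.ERROR)
logger = logging.getLogger("tracer")

try:
    result = 10 / 0
except ZeroDivisionError:
    # Equivalent to logger.error(..., exc_info=True): the active
    # traceback is appended to the emitted record automatically.
    logger.exception("Mathematical anomaly detected!")
```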
🔮 The Upcoming Backend Series (gRPC & Framework Internals)
In this Logic & Legacy series, we are establishing core architecture. In our upcoming Backend Engineering Series, we will leave HTTP/REST behind and tear open High-Performance Frameworks. We will teach you:
- gRPC & Protobufs: Why microservices use binary RPCs instead of JSON over HTTP.
- aiodns: How asynchronous DNS resolution prevents IP-lookup thread blocking.
- C HTTP Parsers: How http-parser processes raw bytes at C speed.
- frozenset: Why aiohttp uses immutable sets for HTTP methods (GET, POST) for rapid O(1) hash lookups.
- multidict & yarl: Handling duplicate headers and mathematically correct URL parsing.
🛠️ Day 23 Project: The Observability Matrix
Build a production-grade logging architecture.
- Install python-json-logger. Architect a standard Triad (Logger, StreamHandler, JsonFormatter).
- Write a try/except block that intentionally raises an IndexError and log it using exc_info=True.
- Install aiologger. Create an async script that concurrently triggers 10 tasks, proving the logs appear asynchronously without blocking.
🔥 PRO UPGRADE (The Concurrency Tracking Challenge)
In an async app handling 10,000 users, logs overlap. If you see an ERROR, how do you know which user caused it? Your challenge: Research Python's built-in contextvars module. Use it to inject a unique request_id (UUID) at the start of an async function. Configure your logger or formatter so that every subsequent log generated by that specific async flow automatically includes the request_id, without you having to manually pass it to every function.
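One possible shape of a solution, sketched under my own naming assumptions (request_id_var, RequestIdFilter, and the "traced" logger are all invented for illustration): a contextvars.ContextVar holds the ID per async flow, and a custom logging.Filter copies it onto every record so the Formatter can print it.

```python
import asyncio
import contextvars
import logging
import sys
import uuid

# Each async task runs in its own context copy, so set() here never leaks
# between concurrent requests.
request_id_var = contextvars.ContextVar("request_id", default="-")

class RequestIdFilter(logging.Filter):
    """Inject the current request_id into every LogRecord."""
    def filter(self, record):
        record.request_id = request_id_var.get()
        return True

logger = logging.getLogger("traced")
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(logging.Formatter("%(request_id)s | %(message)s"))
handler.addFilter(RequestIdFilter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)

async def handle_request(user):
    request_id_var.set(str(uuid.uuid4()))  # set once at the entry point
    logger.info("start %s", user)
    await asyncio.sleep(0)                 # other tasks interleave here
    logger.info("end %s", user)            # still tagged with the same ID

async def main():
    await asyncio.gather(*(handle_request(f"user-{i}") for i in range(3)))

asyncio.run(main())
```

Even though the three tasks interleave, each pair of start/end lines carries the same UUID, and no function signature had to be changed to pass it along.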
📚 Observability Resources
- The Twelve-Factor App (Logs) — The industry standard for treating logs as event streams.
- Python Logging HOWTO — The official standard library configuration guide.
- aiologger Repository — Documentation for the async logging engine.
- Context Variables (contextvars) — To solve the Pro Upgrade challenge.
The Pulse: Monitored
You now have structured, non-blocking visibility into the dark corners of your architecture. Hit Follow to catch Day 24, where we tackle the exact mechanisms for catching the errors those logs expose: Exceptions & Fault Tolerance.
[← Previous: Day 22: The Network Boundary](https://logicandlegacy.blogspot.com/2026/03/day-22-sockets.html)
[Next →: Day 24: Exceptions & Fault Tolerance](#)
Originally published at https://logicandlegacy.blogspot.com