Microservices Patterns Guide
A reference for the architectural patterns used in this kit and when to
apply them in production.
1. API Gateway
What: A single entry point that routes requests to downstream services
and handles cross-cutting concerns (auth, rate limiting, logging).
When to use: Always. Even with two services, a gateway simplifies
client integration and centralises security enforcement.
Client → API Gateway → User Service
                     → Order Service
Trade-offs:
- Pro: Single TLS termination point, centralised auth
- Con: Single point of failure (mitigate with replicas and health checks)
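To make the cross-cutting role concrete, here is a minimal sketch of a gateway handler that enforces auth centrally before forwarding. The `handle` function, `VALID_KEYS`, and the `x-api-key` header are illustrative assumptions, not the kit's actual API:

```python
# Hedged sketch: every request passes one auth check at the gateway,
# so downstream services never see unauthenticated traffic.
VALID_KEYS = {"secret-key-1"}  # illustrative; use a real key store in production

def handle(path: str, headers: dict) -> tuple[int, str]:
    """Return (status, body). Auth is rejected here, centrally."""
    if headers.get("x-api-key") not in VALID_KEYS:
        return 401, "unauthorised"
    # A real gateway would now proxy to the matching downstream service.
    return 200, f"forwarded {path}"
```

The same choke point is where rate limiting and request logging would hook in.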
2. Database per Service
What: Each microservice owns its data store. Services never access
another service's database directly.
When to use: Default choice. Sharing a database couples services at
the schema level, defeating the purpose of microservices.
Implementation:
# user-service owns its own SQLite / Postgres
DATABASE_URL=sqlite:///./users.db
# order-service has its own store
DATABASE_URL=sqlite:///./orders.db
To query data owned by another service, use the service client (HTTP)
or listen for domain events.
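As a sketch of that rule, the order service below enriches an order with user data by calling the user service rather than opening users.db. `fetch_user` is a stand-in for a real ServiceClient call; the names and payload shape are assumptions:

```python
def fetch_user(user_id: str) -> dict:
    """Stand-in for an HTTP call, e.g. service_client.get(f"/users/{user_id}").
    Stubbed here so the sketch is self-contained."""
    return {"id": user_id, "email": f"{user_id}@example.com"}

def enrich_order(order: dict) -> dict:
    """Attach the owning user's email without ever touching the user DB."""
    user = fetch_user(order["user_id"])
    return {**order, "user_email": user["email"]}
```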
3. Domain Events (Event-Driven Communication)
What: Services communicate by publishing and subscribing to events
rather than making synchronous calls.
When to use: When a state change in one service should trigger
behaviour in another without tight coupling.
Order Service ──publishes──▶ "order.created"
                                   │
User Service ◀───subscribes────────┘
   (sends confirmation email)
Transport options (swap the EventBus implementation):
| Transport | Latency | Durability | Complexity |
|----------------|---------|------------|------------|
| In-process | ~0 ms | None | Trivial |
| Redis Streams | ~1 ms | Optional | Low |
| RabbitMQ | ~2 ms | Yes | Medium |
| Apache Kafka | ~5 ms | Yes | High |
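The in-process row of the table can be sketched in a few lines. This is an illustrative implementation of the pattern, not necessarily the kit's EventBus interface:

```python
from collections import defaultdict
from typing import Callable

class EventBus:
    """In-process transport: ~0 ms latency, no durability (see table)."""

    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, payload: dict) -> None:
        # Deliver to every subscriber of this topic, in registration order.
        for handler in self._subscribers[topic]:
            handler(payload)
```

Swapping in Redis Streams or Kafka means reimplementing `publish`/`subscribe` against that broker while callers stay unchanged.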
4. Circuit Breaker
What: Wraps inter-service HTTP calls. After N consecutive failures,
the circuit "opens" and immediately rejects requests for a recovery
period, preventing cascading failures.
States:
┌──────────┐ N failures ┌──────────┐
│ CLOSED │ ─────────────────▶ │ OPEN │
│ (normal) │ │ (reject) │
└──────────┘ └────┬─────┘
▲ │ recovery timeout
│ 1 success │
└──────────────────── ┌─────────▼──────┐
│ HALF-OPEN │
│ (probe 1 req) │
└────────────────┘
Configuration (in ServiceClient):
client = ServiceClient(
    base_url="http://user-service:8001",
    circuit_failure_threshold=5,    # open after 5 failures
    circuit_recovery_timeout=30.0,  # try again after 30s
)
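The state machine above can be sketched as a standalone class. This is an illustrative minimal breaker, not the kit's ServiceClient internals:

```python
import time

class CircuitBreaker:
    """CLOSED → OPEN after N failures; OPEN → HALF-OPEN after the timeout;
    one successful probe closes the circuit again."""

    def __init__(self, failure_threshold: int = 5, recovery_timeout: float = 30.0):
        self.failure_threshold = failure_threshold
        self.recovery_timeout = recovery_timeout
        self.failures = 0
        self.opened_at = None  # monotonic timestamp when the circuit opened

    @property
    def state(self) -> str:
        if self.opened_at is None:
            return "CLOSED"
        if time.monotonic() - self.opened_at >= self.recovery_timeout:
            return "HALF-OPEN"  # allow a single probe request through
        return "OPEN"

    def call(self, fn):
        if self.state == "OPEN":
            raise RuntimeError("circuit open: request rejected")
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        else:
            self.failures = 0
            self.opened_at = None  # success (or probe) closes the circuit
            return result
```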
5. Correlation IDs (Distributed Tracing)
What: A unique ID assigned to each external request, propagated
through all inter-service calls via the x-correlation-id header.
Why: In production, a single user click can trigger requests across
5+ services. Without a shared correlation ID, piecing together the
trail from their interleaved logs is all but impossible.
[a1b2c3d4] POST /api/orders ← gateway
[a1b2c3d4] GET /users/usr_abc123 ← order-service → user-service
[a1b2c3d4] PUBLISH order.created ← order-service
Implementation: The CorrelationMiddleware in shared/tracing.py
handles ID generation and propagation automatically.
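The core of that middleware fits in a few lines: reuse the caller's ID if one arrived, otherwise mint a fresh one. This sketch assumes the header name from above; the function name and 8-character ID format are illustrative, not CorrelationMiddleware's actual code:

```python
import uuid

HEADER = "x-correlation-id"

def correlation_id(incoming_headers: dict) -> str:
    """Propagate the caller's correlation ID, or generate a new one
    at the edge (the gateway) for a fresh external request."""
    return incoming_headers.get(HEADER) or uuid.uuid4().hex[:8]
```

Every outbound inter-service call then copies this value into its own request headers.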
6. Health Checks
What: Each service exposes a /health endpoint. The gateway
aggregates them into a composite health report.
Kubernetes-style probes:
| Probe | Purpose | Endpoint |
|-----------|-----------------------------|-----------|
| Liveness | "Is the process alive?" | /health |
| Readiness | "Can it serve traffic?" | /ready |
| Startup | "Has it finished booting?" | /health |
Production config (in docker-compose.prod.yml):
healthcheck:
  test: ["CMD", "curl", "-f", "http://localhost:8001/health"]
  interval: 30s
  timeout: 5s
  retries: 3
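The gateway's aggregation step can be sketched as a pure function over per-service check results. The report shape below is an assumption for illustration; the kit's composite report may differ:

```python
def composite_health(checks: dict[str, bool]) -> dict:
    """Fold per-service /health results into one gateway-level report:
    'healthy' only if every dependency is up."""
    return {
        "status": "healthy" if all(checks.values()) else "degraded",
        "services": {name: ("up" if ok else "down") for name, ok in checks.items()},
    }
```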
7. Retry with Exponential Back-Off
What: On transient failures (network blips, 503s), retry the
request after an exponentially increasing delay.
Attempt 1 → fail → wait 0.5s
Attempt 2 → fail → wait 1.0s
Attempt 3 → fail → give up (or open circuit breaker)
Why: Most transient errors resolve within seconds. Retrying
immediately causes thundering herd problems; back-off spreads the load.
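The schedule above (0.5s, 1.0s, then give up) can be sketched as a small helper. This is an illustrative implementation of the pattern, not the kit's retry code:

```python
import time

def retry(fn, attempts: int = 3, base_delay: float = 0.5):
    """Call fn, retrying on any exception with delays that double
    each attempt: base_delay, 2*base_delay, ... Re-raises on the last try."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # give up (or let a circuit breaker open)
            time.sleep(base_delay * 2 ** attempt)
```

In production, add jitter to the delay so retries from many clients do not synchronise into a new thundering herd.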
8. Strangler Fig Migration
What: When migrating from a monolith, route traffic through the
gateway. Gradually replace monolith endpoints with microservices.
Gateway
├── /api/users → NEW User Service
├── /api/orders → NEW Order Service
└── /api/legacy → OLD Monolith (shrinking)
Steps:
- Deploy the gateway in front of the monolith.
- Extract one bounded context at a time.
- Route new traffic to the microservice; keep the monolith as fallback.
- Remove the monolith route once fully migrated.
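The routing rule behind these steps can be sketched as: anything not yet extracted falls through to the monolith. The `MIGRATED` map and URLs are illustrative assumptions:

```python
# Strangler-fig routing sketch: extracted prefixes go to new services;
# everything else still reaches the (shrinking) monolith.
MIGRATED = {
    "/api/users": "http://user-service:8001",
    "/api/orders": "http://order-service:8002",
}
MONOLITH = "http://legacy-monolith:8000"

def route(path: str) -> str:
    for prefix, service in MIGRATED.items():
        if path.startswith(prefix):
            return service  # bounded context already extracted
    return MONOLITH         # legacy fallback, removed once empty
```

Migration progress is then just adding entries to `MIGRATED` until the fallback is never hit.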
Quick-Reference: When to Split a Service
| Signal | Action |
|---|---|
| Two teams own the same codebase | Split |
| Module has independent deploy cycle | Split |
| Feature requires a different DB | Split |
| Two modules share the same table | Keep together |
| Feature is < 200 lines of code | Keep together |
By Datanest Digital — Microservices Patterns Guide
This is 1 of 14 resources in the Python Developer Pro toolkit. Get the complete [Python Microservices Kit] with all files, templates, and documentation for $49.
Or grab the entire Python Developer Pro bundle (14 products) for $159 — save 30%.