Cell-Based Architecture: How Slack, DoorDash, and Amazon Contain Failures Before They Become Disasters
In the early days of distributed systems, the goal was straightforward: break the monolith. Take a giant codebase, slice it along domain lines, and let independent services scale, deploy, and fail in isolation. Microservices became the dominant paradigm — and for a while, it worked brilliantly.
Then systems grew.
Dozens of services became hundreds. Databases multiplied. Shared caches became load-bearing infrastructure. Engineers added circuit breakers, retry logic, and timeouts — all good tools, all treating symptoms. And slowly, almost imperceptibly, they had recreated the very thing they were trying to escape — just with network hops between the pieces.
The Networked Monolith was born.
Today, companies like Slack, DoorDash, and Amazon can't afford for a bad deployment in one corner of their system to ripple across the entire platform. The answer they've converged on is Cell-Based Architecture — sometimes called the Bulkhead Pattern applied at infrastructure scale.
This post breaks down exactly how it works, why it matters, and how to implement it — with real examples from companies that have done it in production.
1. The Problem: From Monolith to "Networked Monolith"
A classic three-tier microservices setup looks clean on a whiteboard: a pool of web servers, a pool of application servers, a shared database cluster with a read replica, and a global Redis cache. Traffic round-robins across all resources. Horizontal scaling is easy — just add more pods.
The problem isn't scalability. It's interconnectivity.
Every service depends on the same shared infrastructure. And in a pooled architecture, a small problem in one corner can — and often does — spread to everyone.
Binary Failures vs. Gray Failures
Here's a counterintuitive truth: binary failures are actually easy to handle. A server crashes, a database goes offline — the load balancer marks the node unhealthy and stops sending traffic. Automation handles it.
Gray Failures are the real killers. They're not quite broken enough to trigger automated remediation, but they're degraded enough to cause real harm. Think of it like a highway where one lane is moving at 10 mph instead of 70 — cars don't stop flowing, but they back up in ways that eventually gridlock the entire system.
Gray Failure Cascade — Classic Microservices
Redis Node (AZ-1a): intermittent 200ms latency spikes
│
▼
App Server Pool: thread pool slowly fills with blocked connections
│
▼
API Gateway: requests queue up, p99 latency climbs from 80ms → 4s
│
▼
All Users: 100% of traffic degraded because of one slow cache node
The cascade above is not hypothetical. It's a pattern that plays out at scale across the industry with alarming regularity.
Common Gray Failure types:
- Packet loss in a single Availability Zone
- A Redis shard slowing under memory pressure
- A retry storm from a bad deployment config
- Connection pool exhaustion from a slow downstream
- CPU throttling from a noisy neighbor VM
- DNS resolution latency spikes
What makes this devastating in a pooled architecture:
- 10% slow requests → 100% thread pool exhaustion
- One bad AZ affects requests pinned to other AZs (through shared dependencies)
- Retries amplify load on already-sick services
- Health checks miss partial degradation entirely
- Cascading timeouts cause global slowdown
- Blast radius = the entire platform
⚠️ The Core Problem: In a pooled microservice architecture, the blast radius of any single failure — however small or localized — is effectively global. There is no containment built into the design itself.
The standard mitigations — circuit breakers, bulkhead thread pools, timeouts, retries with jitter — all help at the margins. But they treat symptoms. None of them address the root cause: all requests share all infrastructure.
To truly contain failures, you need isolation at the infrastructure level, not just the code level.
2. What Cell-Based Architecture Actually Is
The guiding analogy comes from marine engineering. Ships are built with watertight bulkheads — internal walls that divide the hull into isolated compartments. If the hull is breached, only the affected compartment floods. The rest of the ship stays intact.
The Titanic famously failed because its bulkheads weren't tall enough. Water spilled over the top and flooded adjacent compartments one by one. The ship was designed for partial failure tolerance but the design assumption was violated — and when it was, the failure cascaded exactly like a poorly-designed distributed system.
"A Cell is a self-contained, independently deployable slice of your entire production system — not just a service boundary, but a complete vertical slice of your stack."
Anatomy of a Cell
A Cell isn't a single service or a single database. It's a complete replica of your application stack, scoped to serve a bounded subset of your users or tenants.
Think of it as running a mini version of your entire platform — independently — for a specific group of customers. If that mini-platform has a problem, only those customers are affected. Everyone else continues without interruption.
┌──────────────────────── Cell-04 ──────────────────────────┐
│ │
│ ┌─────────────────┐ ┌──────────────────────────┐ │
│ │ API Containers │ │ Background Workers │ │
│ │ (3x replicas) │ │ (Kafka consumer group) │ │
│ └────────┬────────┘ └──────────┬───────────────┘ │
│ │ │ │
│ ┌────────▼───────────────────────────▼───────────────┐ │
│ │ Cell Internal Network │ │
│ └──────────────────────┬──────────────────────────────┘ │
│ │ │
│ ┌────────────────┐ │ ┌───────────────────────┐ │
│ │ PostgreSQL Shard│ │ │ Redis Cache Cluster │ │
│ │ (Primary + │ │ │ (Cell-local, 3 nodes) │ │
│ │ 2 Replicas) │◄──┘ └───────────────────────┘ │
│ └────────────────┘ │
│ │
│ Serves: Org IDs 4001–5000 Region: us-east-1b │
└────────────────────────────────────────────────────────────┘
Every Cell owns its own:
- Compute — its own API and worker containers, completely isolated from other cells
- Storage — its own database shards (not a connection to a shared cluster)
- Cache — its own Redis or Memcached cluster
- Queues — its own Kafka partitions or SQS queues
- Observability — its own metrics namespace, so dashboards can show per-cell health independently
This last point is subtle but important. When every cell has its own metrics namespace, a degraded cell visually stands out in your dashboard. You're not fishing through global averages trying to find which 1% of requests are broken. You can see exactly which cell is sick and act immediately.
The Golden Rule of Cells
⚡ Cells do not talk to each other. A request that enters Cell-04 lives and dies in Cell-04. No cross-cell service calls. No shared databases. No shared caches.
This constraint is non-negotiable. The moment you allow cross-cell communication, you've reintroduced the dependency chains you were trying to eliminate. One cell's slowness bleeds into another, and you're back to a networked monolith — just with extra steps.
If a request legitimately needs data owned by another cell's tenant, you solve that problem at the routing layer (by making sure requests are sent to the correct cell) or via async replication — not by letting cells call each other synchronously.
The Global Control Plane
Cells aren't completely isolated from the universe. There's a thin Global Control Plane that sits outside all cells and handles cross-cutting concerns: authentication token validation, cell assignment mappings, global feature flags, and billing state.
The key discipline here is to make this control plane read-mostly and cacheable at the cell edge. Cells should cache the things they need from the control plane locally. If the control plane has a brief outage, individual cells should be able to continue serving traffic using their cached state. A control plane outage should never cascade into a cell outage.
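To make that concrete, here's a minimal sketch of cell-edge caching — assuming a hypothetical fetch function into the control plane — that serves stale data rather than failing when the control plane is unreachable:
// Sketch: cell-edge cache for control plane reads (hypothetical fetch function).
// Serves fresh data when possible and falls back to the last known value if the
// control plane is unreachable, so its outages don't cascade into the cell.
type CacheEntry<T> = { value: T; fetchedAt: number };

class CellEdgeCache<T> {
  private cache = new Map<string, CacheEntry<T>>();

  constructor(
    private fetchFromControlPlane: (key: string) => Promise<T>,
    private ttlMs = 60_000, // refresh entries older than 60 seconds
  ) {}

  async get(key: string): Promise<T> {
    const entry = this.cache.get(key);
    if (entry && Date.now() - entry.fetchedAt < this.ttlMs) return entry.value;
    try {
      const value = await this.fetchFromControlPlane(key);
      this.cache.set(key, { value, fetchedAt: Date.now() });
      return value;
    } catch (err) {
      if (entry) return entry.value; // control plane down: keep serving stale state
      throw err;                     // nothing cached yet — surface the error
    }
  }
}
A cell might wrap its feature-flag and cell-mapping reads in a cache like this, tuning the TTL per data type.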
3. The Routing Layer: The Heart of the System
If cells are the body of this architecture, the Router — also called the Cell Gateway — is the brain. Everything depends on it correctly and deterministically mapping every incoming request to exactly one cell.
Getting this layer right is where most of the design complexity lives.
Step 1 — Choose Your Partition Key
This is the most consequential design decision you'll make in a cell-based system. The partition key determines how users are sliced across cells — and the wrong choice can make the architecture impossible to operate.
A good partition key has three properties:
- It co-locates related data — the vast majority of data a request needs should belong to the same partition key value
- It produces reasonably uniform distribution — no single value should represent a disproportionate amount of traffic
- It's present on every request — you need to be able to extract it at the router without additional lookups
| Partition Key | Best For | Example |
|---|---|---|
| Organization ID | B2B SaaS, multi-tenant platforms | Slack, Notion, Linear |
| User ID | Consumer apps with isolated user data | Social platforms, gaming |
| Geographic Region | Global products with compliance needs | EU-only data residency |
| Delivery / Order ID | Transaction-scoped workflows | DoorDash, ride-hailing |
| Store / Merchant ID | Marketplace platforms | e-commerce aggregators |
⚠️ Watch Out For: Avoid partition keys that are too coarse (e.g., `country` — one cell would need to handle all of the US) or too fine-grained (e.g., `session_id` — you'd need billions of cells). The sweet spot is an entity that owns a meaningful, bounded set of data.
Step 2 — The Cell Mapping Service
Once you have a partition key, the Router needs to know which cell owns which key value. There are two fundamental approaches, each with meaningful trade-offs.
Lookup Table: A small, fast database (typically Redis or DynamoDB) maps org_id → cell_id. This is flexible — you can migrate tenants between cells by simply updating a row. The downside is an extra network hop per request, so it must be aggressively cached at the router level.
Deterministic Hashing: cell_id = hash(org_id) % num_cells. Zero lookups, zero latency overhead, fully stateless. The downside: you can't move a tenant between cells without rehashing everything, and adding cells changes the hash distribution (use consistent hashing to mitigate redistribution).
Most production systems use a hybrid approach: deterministic hashing for the common path, with a lookup table for exceptions — such as large enterprise tenants pinned to specific cells for compliance, capacity, or contractual reasons. This gives you the speed of hashing and the flexibility of a lookup table where you actually need it.
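As a rough illustration of that hybrid (not any particular company's implementation — the pinned-tenant table and cell count are assumptions), the lookup reduces to a few lines:
// Sketch: hybrid cell mapping — pinned overrides first, deterministic hash otherwise.
import { createHash } from "node:crypto";

const NUM_CELLS = 16;

// Stand-in for the exception lookup table (Redis/DynamoDB in production).
const pinnedTenants = new Map<string, string>([
  ["acme-corp-7291", "cell-03"], // enterprise tenant pinned for compliance
]);

function cellForTenant(orgId: string): string {
  const pinned = pinnedTenants.get(orgId);
  if (pinned) return pinned;

  // Simple modulo hashing for brevity; consistent hashing is preferable
  // when the number of cells is expected to change.
  const digest = createHash("sha256").update(orgId).digest();
  const index = digest.readUInt32BE(0) % NUM_CELLS;
  return `cell-${String(index).padStart(2, "0")}`;
}

console.log(cellForTenant("acme-corp-7291")); // "cell-03" (pinned)
console.log(cellForTenant("smallco-101"));    // e.g. "cell-07" (hashed)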
Step 3 — Request Routing in Practice
Here's what the full lifecycle of a request looks like through the cell router:
Request Lifecycle Through the Cell Router
Client Request: POST /api/v1/messages
Headers: Authorization: Bearer <JWT>
X-Org-ID: acme-corp-7291
│
▼
┌─────────────────── Cell Gateway ───────────────────────────┐
│ │
│ 1. Extract partition key: org_id = "acme-corp-7291" │
│ │
│ 2. Lookup cell assignment: │
│ Redis.get("cell:acme-corp-7291") → "cell-12" │
│ (cache hit, <1ms) │
│ │
│ 3. Health check: cell-12.healthy = true ✓ │
│ │
│ 4. Forward request to cell-12's ingress LB │
│ │
└────────────────────────────────────────────────────────────┘
│
▼
Cell-12 handles the request end-to-end.
Zero cross-cell communication.
Notice step 3 — the health check. The router maintains awareness of cell health and can make routing decisions in real time. This is what enables the next capability: failover.
Cell Failover and Traffic Draining
If a cell becomes unhealthy, the router has a critical decision to make. This is where naive implementations and production-grade ones diverge significantly.
Hard Failover: Re-map all tenants from the sick cell to a healthy one immediately. It's simple to implement, but it risks overwhelming the target cell and requires very careful capacity planning — every cell must be able to absorb a neighbor's full traffic at all times.
Traffic Draining + AZ Evacuation: The more sophisticated approach, used by Slack in production. Instead of a hard cut, the control plane actively redirects new connections away from the degraded cell while allowing in-flight requests to complete naturally. Tenants are gradually re-mapped. This is gentler on the receiving cells and avoids the thundering herd problem that hard failover creates.
// Simplified control plane: drain a cell
async function drainCell(cellId: string) {
// 1. Mark cell as draining — router stops sending new requests
await controlPlane.setCellState(cellId, 'draining');
// 2. Identify all tenants currently mapped to this cell
const tenants = await cellMappings.getTenantsForCell(cellId);
// 3. Reassign each tenant to the next healthiest cell
for (const tenant of tenants) {
const targetCell = await findHealthiestCell({ exclude: cellId });
await cellMappings.reassign(tenant.id, targetCell);
}
// 4. Wait for in-flight requests to drain (TTL-based)
await waitForDrain(cellId, { timeoutMs: 30_000 });
console.log(`Cell ${cellId} drained. All tenants migrated.`);
}
The 30-second drain window is long enough for most in-flight requests to complete, and short enough that users experience minimal disruption. Slack's engineering team has reported full AZ evacuations completing in under 5 minutes using this pattern.
4. Blast Radius: The Math That Sells It
Assume you have 100 cells, each serving roughly 1% of your traffic. What does a failure look like now?
| Metric | Value |
|---|---|
| Max users affected by a bad deployment (1-cell canary) | 1% |
| Slack's time to evacuate a failing Availability Zone | < 5 minutes |
| Other users affected when one tenant sends a "poison pill" | 0 |
Failure scenarios: before vs. after cells
| Failure Type | Before Cells | After Cells |
|---|---|---|
| Bad deployment | 100% of users degraded | 1% affected (1-cell canary) |
| Database corruption | All users lose data access | Only tenants in that cell |
| AZ outage | All users on that AZ degraded | Drain traffic in < 5 minutes |
| Noisy tenant | Degrades all other tenants | Only degrades their own cell |
| Poison pill request | Cascades to all services | Crashes one cell, others untouched |
This is the fundamental value proposition. Before cells, a single engineer pushing a bad config at 2am could take down the entire platform. After cells, the same event takes down 1% of users — irritating, but recoverable, debuggable, and not a company-wide incident.
"This architecture turns catastrophic, all-hands-on-deck outages into manageable, minor incidents that a single on-call engineer can handle."
Another underrated benefit: safe deployments become dramatically easier. In a traditional architecture, you canary a new release by sending 5–10% of global traffic to the new version. Any bug affects a random 10% of all customers. In a cell-based system, you deploy to one cell. All affected users are known, bounded, and can even be notified proactively. The blast radius is explicit.
5. Real-World Case Studies
Slack — Surviving AWS Availability Zone Failures
Slack repeatedly experienced a specific failure mode: degradation in a single AWS AZ would trigger retry storms across their global infrastructure. Because all microservices shared global connection pools, a partial AZ failure became a full-platform degradation within minutes.
The mechanism was insidious. A slow AZ meant slow responses from services co-located there. Clients retried. Those retries hit the same services (because of how connection pools were balanced). The retry load overwhelmed the already-degraded AZ further, which caused more timeouts elsewhere. The death spiral had no natural limiting factor.
Slack redesigned their infrastructure around what they call "Silos" — cells aligned with AWS Availability Zones. Each Silo contains a complete copy of their messaging infrastructure. Critically, they built an active control plane that continuously monitors error rates, latency, and queue depths per Silo. When a Silo starts trending bad, the system can automatically reroute workspaces to healthy Silos — a process they call AZ Evacuation.
The key engineering insight is that workspace-level granularity is the ideal partition key for Slack's use case. A Slack workspace is naturally self-contained: messages, channels, users, and files all belong to one workspace. There is almost no legitimate reason for a request scoped to workspace-A to need data from workspace-B in real time.
Outcome: Full AZ evacuation now completes in under 5 minutes with no perceived downtime for end users. Retry storms no longer cascade globally — they're contained within a Silo and starved of new traffic before they can amplify. What used to be an all-hands incident is now a quiet automated remediation.
DoorDash — Data Locality and Cross-AZ Cost Reduction
DoorDash adopted cell-based thinking for a reason that often gets overlooked in architecture discussions: money.
In their original architecture, executing a single delivery could mean Service A in us-east-1a calling Service B in us-east-1b, which read from a database in us-east-1c. AWS charges for every gigabyte of data crossing AZ boundaries. At DoorDash's scale — millions of deliveries per day — these cross-AZ transfer fees weren't a footnote; they were a meaningful line item.
Their solution was to treat a Cell as an AZ-aligned execution unit. When a delivery is created, it gets assigned to a cell. All downstream services involved in that delivery — real-time tracking, driver assignment, payment processing hooks, notification dispatch — are routed to the same cell. This means the majority of reads and writes stay inside a single AZ, eliminating cross-AZ transfer fees for the bulk of their traffic.
The second benefit was operational clarity. DoorDash uses cell boundaries as a natural unit for capacity planning. Each cell has defined resource limits. When a new city launches or an existing market grows, they provision new cells rather than increasing shared global resource pools. This makes cost-per-market predictable — something that's genuinely hard to achieve in a shared-pool architecture.
Outcome: Significant reduction in cross-AZ data transfer costs, measurable latency improvements from data locality (fewer network hops means fewer milliseconds), and a more predictable per-market cost model that maps cleanly to business reporting.
Amazon / AWS — Shuffle Sharding: Cells Without Hard Partitions
Amazon has published detailed writing on a variant called Shuffle Sharding, which applies the blast-radius-reduction idea to stateless workloads where hard tenant-to-cell pinning doesn't make sense.
Instead of assigning each tenant to exactly one cell, shuffle sharding assigns each tenant to a random subset of workers from across the fleet — but critically, different customers get different subsets.
Imagine you have 8 workers and assign each customer to 2 of them. The probability that any two customers share the exact same pair of workers is only 1 in 28. So even if a poison pill request takes down both of one customer's workers, the chance that any other given customer loses both of theirs is just as small — and a customer who overlaps on only one worker still has a healthy one to fall back on.
8 workers: [W1, W2, W3, W4, W5, W6, W7, W8]
Customer A assigned to: [W2, W5]
Customer B assigned to: [W1, W7]
Customer C assigned to: [W3, W6]
If Customer A sends a poison pill that crashes W2 and W5:
→ Customer B: unaffected (uses W1, W7)
→ Customer C: unaffected (uses W3, W6)
→ Probability that a random customer loses both of its workers: 1/28 ≈ 3.6%
This is the approach Amazon uses in Route 53's DNS resolution infrastructure, where the stateless nature of DNS makes hard cell pinning impractical.
Outcome: Shuffle sharding delivers meaningful blast radius reduction without requiring a strict partition key or tenant-to-cell mapping. It's a particularly powerful tool for stateless workloads, edge services, and anywhere you can't commit to a hard partition strategy.
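A toy sketch (purely illustrative — the seeded shuffle and shard size are assumptions, not Amazon's implementation) shows how shard assignment and the 1-in-28 figure fall out:
// Sketch: shuffle-shard assignment over 8 workers, 2 workers per customer.
const WORKERS = ["W1", "W2", "W3", "W4", "W5", "W6", "W7", "W8"];
const SHARD_SIZE = 2;

// Deterministically pick a customer's shard with a simple seeded draw.
function shardFor(customerId: string): string[] {
  let seed = 0;
  for (const ch of customerId) seed = (seed * 31 + ch.charCodeAt(0)) >>> 0;
  const pool = [...WORKERS];
  const shard: string[] = [];
  for (let i = 0; i < SHARD_SIZE; i++) {
    seed = (seed * 1103515245 + 12345) >>> 0;
    shard.push(pool.splice(seed % pool.length, 1)[0]);
  }
  return shard.sort();
}

// There are C(8, 2) = 28 possible pairs, so the chance that two customers draw
// the exact same pair — and therefore fail together — is 1/28 ≈ 3.6%.
const totalPairs = (WORKERS.length * (WORKERS.length - 1)) / 2;
console.log(totalPairs);             // 28
console.log(shardFor("customer-A")); // e.g. [ 'W2', 'W5' ]
console.log(shardFor("customer-B")); // a different pair with high probability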
6. Implementation Playbook
You don't need to boil the ocean to get value from this pattern. Most teams implement cells incrementally, starting with two or three cells for a critical service before expanding fleet-wide.
Phase 1: Identify Your Partition Key
Look at your data model. Find the entity that naturally owns everything else — the organizational unit around which all other data pivots. For B2B SaaS this is almost always org_id or tenant_id. Draw a dependency graph and verify that the vast majority of queries can be answered within a single tenant's scope.
If you can't find a clean partition key, stop here. Cell-based architecture requires one. Trying to implement cells without a clean partition key produces a system that's complex without being resilient.
Phase 2: Build the Router
Stand up a gateway service. At minimum it needs to:
- Extract the partition key from the request (JWT claim, path param, header)
- Look up the cell assignment from a fast, cached data store
- Proxy the request to the correct cell's ingress
- Handle cell unavailability with a configurable fallback policy
The router must be extremely reliable — it's the single path all traffic flows through. This is one of the few components worth over-engineering from the start. Use a battle-tested reverse proxy as its foundation (Envoy, NGINX, or similar), and add cell-aware routing logic on top rather than building the entire proxy layer from scratch.
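As a sketch of that cell-aware layer (the helper names — cellFor, isHealthy, ingressUrl, fallbackCell — are assumptions, not a specific proxy's API), the routing decision itself is small:
// Sketch: the cell-aware routing decision the proxy layer calls into.
import type { IncomingMessage } from "node:http";

interface CellDirectory {
  cellFor(orgId: string): Promise<string>; // cached mapping lookup
  isHealthy(cellId: string): boolean;      // fed by per-cell health probes
  ingressUrl(cellId: string): string;      // e.g. the cell's internal load balancer
  fallbackCell(exclude: string): string;   // configurable fallback policy
}

async function routeRequest(req: IncomingMessage, cells: CellDirectory): Promise<string> {
  // 1. Extract the partition key (a header here; could be a JWT claim or path param).
  const orgId = req.headers["x-org-id"];
  if (typeof orgId !== "string") throw new Error("missing partition key");

  // 2. Resolve the owning cell from the cached mapping.
  let cellId = await cells.cellFor(orgId);

  // 3. Route around an unhealthy cell per the fallback policy.
  if (!cells.isHealthy(cellId)) cellId = cells.fallbackCell(cellId);

  // 4. Hand the target back to the proxy for forwarding.
  return cells.ingressUrl(cellId);
}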
Phase 3: Provision Cell Infrastructure as Code
Every cell must be identical by definition. Use Terraform or Pulumi modules that accept a cell_id variable and stamp out the entire stack. Any manual drift between cells is a ticking time bomb — if Cell-7 has a subtly different Redis configuration than Cell-12, you'll discover the difference at the worst possible moment.
# Terraform: provision a new cell
module "cell_12" {
source = "./modules/cell"
cell_id = "cell-12"
region = "us-east-1"
az = "us-east-1b"
# Capacity — same for every cell
api_instance_count = 3
worker_instance_count = 2
db_instance_class = "db.r6g.large"
redis_node_count = 3
# Org IDs served by this cell
tenant_range_start = 40001
tenant_range_end = 50000
}
Adding a new cell should be a one-line change that triggers an automated provisioning pipeline. If it's a multi-day manual process, you've already failed — you won't spin up new cells when you need to.
Phase 4: Per-Cell Observability
You must be able to see the health of each cell independently. This is not optional — it's the mechanism by which the whole system pays off.
Structure your metrics and logs with a cell_id tag on every data point. Build dashboards that show all cells side-by-side. A degraded cell should stand out visually like a blinking red square in a grid of green. Your alerting should fire on per-cell error rates, not just global averages — global averages are exactly the metric that hides localized failures.
A useful heuristic: if your on-call engineer has to query logs to determine which cell is sick, your observability is not good enough.
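One way to get there (a sketch using Node's prom-client; the metric names and CELL_ID environment variable are illustrative) is to bake the cell label into every metric at emission time:
// Sketch: every metric carries a cell_id label so per-cell dashboards and
// per-cell alerts fall out naturally.
import { Counter, Histogram } from "prom-client";

const CELL_ID = process.env.CELL_ID ?? "cell-unknown";

const requestsTotal = new Counter({
  name: "api_requests_total",
  help: "API requests, labeled by cell and status code",
  labelNames: ["cell_id", "status"],
});

const requestLatency = new Histogram({
  name: "api_request_duration_seconds",
  help: "Request latency per cell",
  labelNames: ["cell_id"],
  buckets: [0.05, 0.1, 0.25, 0.5, 1, 2, 5],
});

// Inside a request handler:
requestsTotal.inc({ cell_id: CELL_ID, status: "200" });
requestLatency.observe({ cell_id: CELL_ID }, 0.082);
Alerts then group by cell_id — error rate per cell, p99 per cell — rather than firing on the global averages that hide localized failures.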
Phase 5: Build the Tenant Migration Tooling
Before you need it in an emergency, build the tooling to move a tenant from one cell to another. The safest pattern is a dual-write with read shadowing:
Tenant Migration: dual-write pattern
Migration State: org-7291 moving from cell-04 → cell-12
Step 1: Start dual-write
All writes go to BOTH cell-04 and cell-12.
Reads still served from cell-04 (source of truth).
Step 2: Backfill historical data to cell-12.
Compare checksums until they match.
Step 3: Flip reads to cell-12.
Monitor error rates for 5 minutes.
Step 4: Stop writing to cell-04.
Update routing table: org-7291 → cell-12.
Step 5: GC old data from cell-04.
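Wrapped in orchestration code, those steps might look like the sketch below — the dependency interface (setWriteMode, backfillHistory, checksumsMatch, and friends) is hypothetical, not a real library:
// Sketch: orchestrating the dual-write migration above (hypothetical helpers).
interface MigrationDeps {
  setWriteMode(orgId: string, mode: { primary: string; shadow?: string }): Promise<void>;
  setReadCell(orgId: string, cellId: string): Promise<void>;
  backfillHistory(orgId: string, route: { from: string; to: string }): Promise<void>;
  checksumsMatch(orgId: string, from: string, to: string): Promise<boolean>;
  watchErrorRates(orgId: string, opts: { durationMs: number }): Promise<void>;
  updateRoutingTable(orgId: string, cellId: string): Promise<void>;
  scheduleDataGC(orgId: string, cellId: string): Promise<void>;
}

async function migrateTenant(deps: MigrationDeps, orgId: string, from: string, to: string): Promise<void> {
  // Step 1: dual-write — all writes go to both cells, reads stay on the source.
  await deps.setWriteMode(orgId, { primary: from, shadow: to });
  // Step 2: backfill history to the target and verify with checksums.
  await deps.backfillHistory(orgId, { from, to });
  if (!(await deps.checksumsMatch(orgId, from, to))) {
    throw new Error(`Checksum mismatch for ${orgId} — aborting migration`);
  }
  // Step 3: flip reads to the target cell and watch error rates for 5 minutes.
  await deps.setReadCell(orgId, to);
  await deps.watchErrorRates(orgId, { durationMs: 5 * 60_000 });
  // Step 4: stop writing to the source and update the routing table.
  await deps.setWriteMode(orgId, { primary: to });
  await deps.updateRoutingTable(orgId, to);
  // Step 5: garbage-collect the tenant's data from the source cell.
  await deps.scheduleDataGC(orgId, from);
}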
This tooling might feel like something you build "later." It isn't. You will need to move tenants when a cell becomes a hot spot, when a large enterprise customer requests data residency in a specific region, or when you're draining a degraded cell during an incident. Build it while you're calm, not while you're on fire.
7. The Cost: Trade-offs You Must Accept
Cell-based architecture is not free. The fault isolation it provides comes with real operational and engineering costs. Anyone selling this pattern without discussing the trade-offs is doing you a disservice.
You Now Manage N Production Environments
With 50 cells, you have 50 sets of databases to back up, 50 Redis clusters to monitor, and 50 Kafka consumer groups to watch. Schema migrations that used to be one operation are now 50 coordinated operations. Infrastructure-as-Code is no longer a best practice — it's mandatory for survival. If you don't have strong IaC discipline before adopting cells, you'll drown in operational toil.
The Global Data Problem
Not everything belongs in a cell. Authentication sessions, global feature flags, payment methods, and cross-tenant data don't map cleanly to any single cell — they have to live in the Global Control Plane.
The discipline is keeping the Global Plane as thin as possible. Every time you're tempted to add something to it, ask: "Can we replicate this to the cell edge and serve it locally?" The control plane is a shared dependency — the more it does, the more its failures affect the entire fleet.
Cross-Cell Queries Are Gone
In a traditional architecture, a query like SELECT * FROM messages WHERE created_at > NOW() - INTERVAL '1 hour' trivially joins across all users. In a cell-based system, that query needs to fan out to all 50 cells, aggregate results, and merge them — which is slow, expensive, and complex.
The standard answer is a separate analytics pipeline: cells stream their events asynchronously through Kafka into a data warehouse such as BigQuery or Snowflake. Analytics queries run against the warehouse, not the operational databases. This is actually a better pattern for analytics regardless of cell architecture — but cells make it mandatory rather than optional.
Capacity Overhead
Each cell must be provisioned with enough headroom to absorb its share of a failed neighbor's traffic during failover. With 10 cells, spreading a failed cell's load across the nine that remain means roughly 11% headroom per cell. This sounds expensive, but there's a counterintuitive effect: with more cells, the proportional overhead shrinks. With 100 cells, each cell needs only about 1% headroom to cover a single-cell failure. More granularity means less waste.
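The arithmetic is simple enough to sanity-check, assuming a failed cell's traffic spreads evenly over the survivors:
// Headroom each surviving cell needs when one cell fails and its traffic
// is spread evenly across the remaining N - 1 cells.
const headroomPerCell = (numCells: number): number => 1 / (numCells - 1);

console.log((headroomPerCell(10) * 100).toFixed(1));  // "11.1" → ~11% extra capacity
console.log((headroomPerCell(100) * 100).toFixed(1)); // "1.0"  → ~1% extra capacity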
| Challenge | Mitigation | Complexity |
|---|---|---|
| N × infrastructure to manage | Strict IaC, automated cell provisioning | High |
| Global data needs | Thin control plane + cell-edge caching | Medium |
| Cross-cell analytics | Separate async analytics pipeline | Medium |
| Tenant migration | Dual-write + shadow reads tooling | High |
| Hot spots (large tenants) | Dedicated cells for enterprise accounts | Low |
| N+1 capacity overhead | More cells = less proportional overhead | Low |
8. Is Cell-Based Architecture Right for You?
The pattern is powerful, but it's not universally appropriate. Applying it too early is premature optimization; applying it too late means surviving incidents that should have been contained.
Good Fit For:
- B2B SaaS with clear tenant boundaries
- Platforms where one AZ outage is unacceptable
- High-volume systems where blast radius is a critical concern
- Data residency or compliance requirements (e.g., EU-only data)
- Teams with strong IaC and Platform Engineering maturity
Poor Fit For:
- Early-stage startups without product-market fit
- Systems with deeply entangled cross-user queries at the core
- Small teams without dedicated platform engineering capacity
- Workloads that are inherently global (CDNs, search indices)
- Systems where no natural partition key exists in the data model
Migration Readiness Checklist
Before committing to the investment, honestly answer these questions:
- [ ] Can I identify a partition key that co-locates >90% of my related data?
- [ ] Does my team have IaC coverage of all production infrastructure?
- [ ] Do I have per-service, per-dependency observability (not just global averages)?
- [ ] Is there organizational appetite for a Platform Engineering investment of 3–6+ months?
- [ ] Have I identified which data is "global" and can't live in a cell?
- [ ] Have I sized the headroom needed for N+1 cell failover?
- [ ] Do I have tooling to migrate a tenant between cells without downtime?
If you answered "no" to more than two of these, the architecture will fight you rather than help you.
9. Conclusion
Cell-Based Architecture represents a maturation of how we think about distributed system reliability. The microservices movement gave us deployment independence and team autonomy. Cells give us something more fundamental: failure containment.
The philosophical shift is subtle but important. Traditional reliability engineering asks: "How do we prevent failures?" Cell-based architecture asks instead: "Given that failures are inevitable — and they are — how do we limit how far they spread?"
✅ The Core Insight: Cells don't reduce the probability of failure. They reduce the consequence of failure. And at scale, reducing consequences is far more tractable than eliminating causes.
For companies like Slack and DoorDash, the cost of building this infrastructure is a fraction of what a single platform-wide outage costs in revenue, customer trust, and engineer morale. For a smaller company, the calculus is different. But the pattern scales down gracefully: even starting with 3–5 cells for your most critical service can deliver meaningful blast radius reduction without requiring a full platform overhaul.
The place to start is always the same: find your partition key. Identify the natural seam in your data — the entity around which everything else pivots — and consider whether slicing along that line could save you from your next all-hands incident at 2am.
The bulkheads are there to build. The question is whether you'll build them before or after the hull is breached.
If you found this useful, the next logical reads are Slack's engineering blog on Silo architecture, Amazon's shuffle sharding whitepaper, and the original Netflix Bulkhead pattern writeup. Each covers a distinct flavor of the same core idea — and seeing all three together gives a much richer picture of the design space.