CodeWithDhanian · DEV Community

Consistency Patterns (Strong, Eventual, Weak) in System Design

Understanding Consistency in Distributed Systems

In distributed systems, consistency defines how and when updates to data become visible across multiple nodes or replicas. When a client performs a write operation on one node, the system must decide whether subsequent read operations on any other node will immediately reflect that change or tolerate some delay. This decision directly influences availability, latency, throughput, and overall system behavior under network partitions or failures.

Consistency patterns provide structured guarantees that help architects balance these competing requirements. The three primary patterns—Strong Consistency, Eventual Consistency, and Weak Consistency—form a spectrum from the strictest guarantees to the most relaxed. Each pattern addresses different real-world demands, from financial accuracy to massive-scale user-generated content.

Strong Consistency

Strong Consistency, also known as linearizability, guarantees that once a write operation completes successfully, every subsequent read operation—regardless of which replica or client issues it—will return the most recent write value or a newer one. There is no window for stale data. All operations appear to execute in a single, global, sequential order as if on a single atomic copy of the data.

This pattern enforces immediate visibility of updates. If Client A writes a value and receives confirmation, Client B reading immediately afterward will always see the updated value. The system achieves this through tight synchronization mechanisms such as quorum-based replication, consensus protocols like Paxos or Raft, or two-phase commit (2PC) for distributed transactions.

How Strong Consistency Works in Practice

A typical architecture uses a primary node (or leader) that coordinates writes. When a write arrives:

  1. The primary applies the change locally.
  2. It replicates the update synchronously to a quorum of replicas (for example, a majority of nodes in a five-node cluster requires acknowledgment from at least three).
  3. Only after the quorum confirms does the primary acknowledge success to the client.
  4. Any read request, whether directed to the primary or a replica, is served only after verifying it reflects the latest committed state.

This ensures linearizability but introduces latency because writes block until synchronization completes. Failures or network partitions can temporarily reduce availability until consensus is restored.
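The quorum arithmetic behind step 2 can be sketched as a quick check. The helper names below (majority_quorum, quorums_overlap) are illustrative, not from any particular system: with N replicas, a write quorum W and a read quorum R are guaranteed to intersect whenever R + W > N, which is what lets a quorum read observe the latest committed write.

```python
def majority_quorum(n: int) -> int:
    """Smallest number of replicas forming a majority of n nodes."""
    return n // 2 + 1

def quorums_overlap(n: int, w: int, r: int) -> bool:
    """A read quorum must include at least one replica that acknowledged
    the latest write whenever R + W > N (pigeonhole argument)."""
    return r + w > n

# A five-node cluster needs 3 acknowledgments to commit a write (step 2 above),
# and any 3-node read quorum necessarily intersects that write quorum.
n = 5
w = majority_quorum(n)            # 3
r = majority_quorum(n)            # 3
print(w, quorums_overlap(n, w, r))  # 3 True
```

Dropping either quorum below a majority (say W = 2, R = 2 on five nodes) breaks the overlap and permits stale reads, which is exactly the door that eventual consistency deliberately leaves open.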

Complete Implementation Structure and Code Example

Consider a simplified Python simulation of a strongly consistent key-value store that uses a threading lock as a stand-in for quorum coordination. It is a teaching model, not production code, but it loosely mirrors the serialized ordering that systems like Google Spanner or etcd achieve through consensus.

import threading
import time
from typing import Dict, Optional

class StrongConsistencyStore:
    def __init__(self, num_replicas: int = 3):
        self.data: Dict[str, str] = {}          # Shared data store
        self.versions: Dict[str, int] = {}      # Version tracking for linearizability
        self.lock = threading.Lock()            # Global lock simulating quorum coordination
        self.replicas = num_replicas
        self.quorum = (self.replicas // 2) + 1  # Majority quorum

    def write(self, key: str, value: str) -> bool:
        with self.lock:  # Simulate synchronous quorum acknowledgment
            current_version = self.versions.get(key, 0)
            self.data[key] = value
            self.versions[key] = current_version + 1
            print(f"Write committed: {key} = {value} (version {self.versions[key]})")
            # In real systems, this would wait for quorum replicas to acknowledge
            time.sleep(0.05)  # Simulate network round-trip latency
            return True

    def read(self, key: str) -> Optional[str]:
        with self.lock:  # All reads go through coordinated check
            if key in self.data:
                value = self.data[key]
                version = self.versions[key]
                print(f"Read returned: {key} = {value} (version {version})")
                return value
            return None

# Usage demonstration
if __name__ == "__main__":
    store = StrongConsistencyStore()
    store.write("account_balance", "1500.00")
    print("Client A writes balance")
    # Immediate read from any "replica" (simulated)
    print("Client B reads balance:", store.read("account_balance"))
    store.write("account_balance", "1400.00")
    print("Client C reads updated balance:", store.read("account_balance"))

This code enforces strong consistency by serializing all operations under a single lock, ensuring every read sees the latest write. In a real distributed deployment, the lock would be replaced by a consensus algorithm that requires quorum acknowledgments from live replicas. The version counter makes the ordering of committed writes explicit, which is the property a real system would verify before serving a read.

Use Cases for Strong Consistency

Strong Consistency is essential in domains where correctness cannot be compromised:

  • Financial systems and banking applications where account balances must reflect every transaction instantly to prevent double-spending.
  • Inventory management in e-commerce to ensure stock counts are accurate across all regional warehouses.
  • Reservation systems such as airline seat booking or hotel room allocation.

Trade-offs of Strong Consistency

While it provides perfect data accuracy, Strong Consistency sacrifices availability during partitions (per the CAP theorem) and increases latency due to synchronization overhead. Systems may return errors or block operations rather than serve potentially inconsistent data.

Eventual Consistency

Eventual Consistency relaxes the immediate guarantee. It promises that if no new writes occur to a data item, all replicas will eventually converge to the same latest value after some unspecified but finite time. Temporary divergence is allowed, and reads may return stale data during the propagation window.

This pattern relies on asynchronous replication. Writes are acknowledged quickly after being applied to a single node or a minimal set, then propagated in the background through mechanisms like gossip protocols, anti-entropy, read repair, or hinted handoff.

How Eventual Consistency Works in Practice

A write succeeds as soon as it reaches one or more nodes (often a single node for maximum availability). Background processes then push the update to other replicas. Conflict resolution strategies—such as last-write-wins (LWW) based on timestamps or vector clocks—resolve any concurrent updates when replicas reconcile.
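Last-write-wins can be sketched in a few lines. This is a standalone illustration (the lww_merge helper is hypothetical, not part of the store classes in this article): each replica tags its value with a write timestamp, and reconciliation keeps whichever version is newer.

```python
from typing import Tuple

Versioned = Tuple[str, float]  # (value, write timestamp)

def lww_merge(a: Versioned, b: Versioned) -> Versioned:
    """Last-write-wins: keep the version with the later timestamp.
    Ties (identical timestamps) need a deterministic tie-breaker in
    practice, e.g. node id; here we arbitrarily keep the first argument."""
    return b if b[1] > a[1] else a

replica_1 = ("draft v1", 1700000000.0)
replica_2 = ("draft v2", 1700000005.0)  # written 5 seconds later
print(lww_merge(replica_1, replica_2))  # ('draft v2', 1700000005.0)
```

LWW is simple but lossy: a concurrent update with an earlier clock reading is silently discarded, which is why systems needing causal precision reach for vector clocks instead.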

Clients may see different values briefly, but the system self-heals without manual intervention.

Complete Implementation Structure and Code Example

Below is a full Python implementation simulating an eventually consistent store using asynchronous queues and background reconciliation threads. This mirrors the architecture of Amazon DynamoDB (default mode) or Apache Cassandra.

import threading
import time
import queue
from typing import Dict, Optional

class EventualConsistencyStore:
    def __init__(self, num_replicas: int = 3):
        self.replicas: Dict[int, Dict[str, str]] = {i: {} for i in range(num_replicas)}
        self.versions: Dict[int, Dict[str, int]] = {i: {} for i in range(num_replicas)}
        self.write_queue = queue.Queue()
        self.replica_count = num_replicas
        self.lock = threading.Lock()
        # Start background propagation thread
        self.propagator = threading.Thread(target=self._propagate_updates, daemon=True)
        self.propagator.start()

    def write(self, key: str, value: str, replica_id: int = 0) -> bool:
        with self.lock:
            current_version = self.versions[replica_id].get(key, 0)
            self.replicas[replica_id][key] = value
            self.versions[replica_id][key] = current_version + 1
            self.write_queue.put((key, value, self.versions[replica_id][key]))
            print(f"Write acknowledged on replica {replica_id}: {key} = {value}")
            return True

    def read(self, key: str, replica_id: int = 0) -> Optional[str]:
        # Deliberately uncoordinated: return whatever this replica holds
        # locally, even if propagation has not reached it yet
        value = self.replicas[replica_id].get(key)
        version = self.versions[replica_id].get(key, 0)
        print(f"Read from replica {replica_id}: {key} = {value} (version {version})")
        return value

    def _propagate_updates(self):
        while True:
            try:
                key, value, version = self.write_queue.get(timeout=1)
                with self.lock:
                    for rid in range(self.replica_count):
                        current_version = self.versions[rid].get(key, 0)
                        if version > current_version:
                            self.replicas[rid][key] = value
                            self.versions[rid][key] = version
                time.sleep(0.2)  # Simulate network propagation delay
            except queue.Empty:
                continue

# Usage demonstration
if __name__ == "__main__":
    store = EventualConsistencyStore()
    store.write("user_post", "Hello world", replica_id=0)
    print("Client A reads immediately (may be stale):", store.read("user_post", replica_id=1))
    time.sleep(1)  # Allow propagation
    print("Client B reads after convergence:", store.read("user_post", replica_id=1))

The background thread ensures eventual convergence. In production, vector clocks or CRDTs would replace simple version numbers for more sophisticated conflict handling.
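To make the CRDT remark concrete, here is a minimal sketch of a grow-only counter (G-Counter), the simplest convergent replicated data type. It is a standalone example, not an extension of the store above: each replica increments only its own slot, and merging takes the element-wise maximum, so all replicas converge to the same total no matter the order in which updates arrive.

```python
from collections import defaultdict

class GCounter:
    """Grow-only counter CRDT. Merge is commutative, associative, and
    idempotent, so repeated or reordered gossip cannot corrupt the value."""
    def __init__(self):
        self.counts = defaultdict(int)  # replica id -> local increment count

    def increment(self, replica_id: str, amount: int = 1) -> None:
        self.counts[replica_id] += amount

    def merge(self, other: "GCounter") -> None:
        # Element-wise maximum: never loses an increment seen by either side
        for rid, c in other.counts.items():
            self.counts[rid] = max(self.counts[rid], c)

    def value(self) -> int:
        return sum(self.counts.values())

a, b = GCounter(), GCounter()
a.increment("A", 3)   # three likes recorded on replica A
b.increment("B", 2)   # two likes recorded on replica B
a.merge(b)
b.merge(a)
print(a.value(), b.value())  # 5 5
```

This is why CRDTs fit eventually consistent stores so well: convergence falls out of the data structure itself instead of requiring a conflict-resolution policy like last-write-wins.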

Use Cases for Eventual Consistency

Eventual Consistency excels in high-scale, high-availability scenarios:

  • Social media feeds and like counters where slight delays in visibility are acceptable.
  • Content delivery networks (CDN) and caching layers.
  • DNS systems and email propagation.

Weak Consistency

Weak Consistency provides the fewest guarantees. After a write, subsequent reads may or may not see the update, and there is no assurance that replicas will ever converge. Divergence can persist indefinitely unless explicitly resolved.

This model prioritizes raw performance and availability above all else. Updates are fire-and-forget, with no automatic synchronization or conflict resolution built into the core protocol.

How Weak Consistency Works in Practice

Writes are applied locally to whichever node receives them. Reads return whatever local state exists at that moment. Reconciliation, if any, happens only through application-level logic or manual intervention. There are no background propagators or quorum requirements.

Complete Implementation Structure and Code Example

Here is a complete Python simulation of a weakly consistent store with zero synchronization:

import threading
from typing import Dict, Optional

class WeakConsistencyStore:
    def __init__(self, num_replicas: int = 3):
        self.replicas: Dict[int, Dict[str, str]] = {i: {} for i in range(num_replicas)}
        self.lock = threading.Lock()  # Only for internal safety, not for consistency

    def write(self, key: str, value: str, replica_id: int = 0) -> bool:
        with self.lock:
            self.replicas[replica_id][key] = value
            print(f"Write applied only to replica {replica_id}: {key} = {value}")
            return True

    def read(self, key: str, replica_id: int = 0) -> Optional[str]:
        value = self.replicas[replica_id].get(key)
        print(f"Read from replica {replica_id}: {key} = {value} (no convergence guarantee)")
        return value

# Usage demonstration
if __name__ == "__main__":
    store = WeakConsistencyStore()
    store.write("game_score", "150", replica_id=0)
    print("Client reads from replica 1:", store.read("game_score", replica_id=1))  # None: the write never propagates
    store.write("game_score", "200", replica_id=2)
    print("Client reads from replica 0:", store.read("game_score", replica_id=0))  # Still 150

No propagation occurs. Each replica remains isolated unless the application adds custom logic.
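If an application did need convergence on top of a weakly consistent store, it would have to supply that logic itself. The following is a hypothetical sketch of such application-level reconciliation (the reconcile helper and its max-based policy are illustrative assumptions, not part of the store above): gather whatever each replica holds for a key, pick a winner under some policy, and push it back everywhere.

```python
from typing import Dict, Optional

def reconcile(replicas: Dict[int, Dict[str, str]], key: str,
              resolve=max) -> Optional[str]:
    """Application-driven repair: collect every replica's local value for
    `key`, choose a winner (here: lexicographic max, a stand-in for a real
    policy such as highest score or latest timestamp), and write it back
    to all replicas."""
    candidates = [r[key] for r in replicas.values() if key in r]
    if not candidates:
        return None
    winner = resolve(candidates)
    for r in replicas.values():
        r[key] = winner
    return winner

# Three divergent replicas, as the weak store above would leave them
replicas = {0: {"game_score": "150"}, 1: {}, 2: {"game_score": "200"}}
print(reconcile(replicas, "game_score"))  # 200
print(replicas[1])                        # {'game_score': '200'}
```

Note that this effectively reinvents read repair by hand, which is the usual fate of weakly consistent designs once correctness starts to matter.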

Use Cases for Weak Consistency

Weak Consistency suits applications where freshness is secondary to speed:

  • Multiplayer game leaderboards where occasional staleness does not affect gameplay.
  • Non-critical caching layers or analytics dashboards.
  • Highly partitioned sensor networks that tolerate data loss.

Comparing the Patterns

Strong Consistency delivers immediate correctness at the cost of higher latency and reduced availability during failures. Eventual Consistency trades immediate accuracy for superior scalability and availability, relying on time for convergence. Weak Consistency maximizes performance by removing all guarantees, making it suitable only when temporary or permanent divergence is tolerable.

Each pattern aligns with specific positions on the CAP spectrum: Strong Consistency typically favors CP systems, while Eventual and Weak Consistency enable AP designs that remain responsive even under partitions.

The choice depends entirely on the application's tolerance for stale data, required throughput, and acceptable failure modes.

System Design Handbook

Master the complete System Design interview process with real-world architectures, deep-dive explanations, and battle-tested patterns used at top tech companies.

Buy it now at: https://codewithdhanian.gumroad.com/l/ntmcf

Buy me coffee to support my content at: https://ko-fi.com/codewithdhanian
