DEV Community

Phuong Le

Posted on • Edited on • Originally published at blog.devtrovert.com

CAP Theorem: Why Perfect Distributed Systems Don't Exist

CAP Theorem (source: blog.devtrovert.com)

Author

I typically share insights on System Design & Go at Devtrovert. Feel free to connect with me on LinkedIn (Phuong Le) for the latest posts.


Imagine a web service that replicates its data across data centers in New York and Los Angeles. An East Coast user writes to the New York data center; moments later, a West Coast user reads the same data from the LA data center.

According to CAP:

  • C would mean the LA user sees the latest write from the NY user.
  • A would mean the LA data center returns something to the user immediately, even if inconsistent.
  • P would mean the web service keeps working even if the NY-LA link goes down.

We’re going to discuss the CAP Theorem, the theory behind these kinds of decisions. Don’t worry, we’ll keep it short and easy to understand since it’s mostly theory.
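To make the choice concrete, here’s a minimal Python sketch of that NY/LA scenario. The region names and helper functions are mine, not any real API: a write lands in NY, and until replication catches up, LA can either answer immediately with stale data or make the client wait.

```python
# Toy model of the NY/LA scenario above; real replication is far more involved.
replicas = {"NY": {}, "LA": {}}

def write(region, key, value):
    replicas[region][key] = value        # the write lands on one replica first

def replicate(src, dst):
    replicas[dst].update(replicas[src])  # replication catches up later

write("NY", "greeting", "hello")

# Before replication: LA answering immediately (availability) returns stale data.
stale = replicas["LA"].get("greeting")   # None - the NY write hasn't arrived yet

replicate("NY", "LA")
fresh = replicas["LA"].get("greeting")   # "hello" - consistent once caught up
print(stale, fresh)
```

Until `replicate` runs, the two data centers simply disagree; everything that follows is about what the system should do in that window.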

What does CAP stand for?

CAP Theorem is an essential guideline for how distributed storage systems, like databases, should work. Before we get into the details, let’s first explain what each letter in CAP stands for:

Consistency
In a consistent system, all nodes see the same data at the same time. To put it another way, when new data is added to one node, that update has to be quickly shared with all the other nodes in the system.

CAP Theorem — C: Consistency (source: blog.devtrovert.com)

This means that a write operation is only counted as complete when all nodes in the system are updated.
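As a rough sketch, a strongly consistent write looks like a synchronous broadcast: it only reports success once every node has applied it. The node names here are purely illustrative.

```python
nodes = {"A": {}, "B": {}, "C": {}}

def consistent_write(key, value):
    """Apply the write to every node; only then is it 'complete'."""
    for store in nodes.values():
        store[key] = value
    return True  # success reported only after all replicas are updated

consistent_write("x", 42)
values = {name: store["x"] for name, store in nodes.items()}
print(values)  # every node sees the same data at the same time
```

The cost of that all-nodes loop is exactly the latency and throughput hit discussed later in the article.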

Availability
The idea behind availability is simple: the system should always be ready to respond to requests. Every read or write request should get some sort of reply, keeping the system functional and responsive.

CAP Theorem — A: Availability (source: blog.devtrovert.com)

But it’s good to remember that the data you get back might not always be the latest. What matters is that every working node gives you a valid response, even if some nodes are experiencing issues.

Partition Tolerance
Partition Tolerance is what allows a system to keep running even when things go wrong, such as lost messages or nodes that shut down.

In simple terms, it’s the system’s way of managing disruptions in communication between nodes, something that’s pretty much bound to happen in any distributed system.

CAP Theorem — P: Partition Tolerance (source: blog.devtrovert.com)

That’s why partition tolerance is a “must-have” factor.
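A partition can be modeled as nothing more than a set of unreachable nodes; a partition-tolerant system keeps answering from whatever it can still reach. A sketch, with names and fallback policy invented for illustration:

```python
nodes = {"A": "v1", "B": "v1", "C": "v1"}
unreachable = {"A"}  # the link to A is down

def read(preferred):
    """Fall back to any reachable node instead of failing outright."""
    if preferred not in unreachable:
        return nodes[preferred]
    for name in nodes:
        if name not in unreachable:
            return nodes[name]  # degrade gracefully and keep running
    raise RuntimeError("total outage")

result = read("A")  # A is cut off, but the system still responds
print(result)
```

The interesting question, which the rest of the article tackles, is what to answer when the reachable nodes might not agree.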

CAP Theorem

A distributed system can meet only two of the three conditions at the same time, never all three. In practice, you pick between Consistency and Availability, because Partition Tolerance is generally a “must-have”.

“What’s the big deal with Partition Tolerance, anyway?”

First off, let’s break down what a ‘network partition’ is.

It’s what happens when communication between nodes breaks down. In that situation, it’s hard to tell whether a node has failed completely or is just unreachable.

Because these communication problems are bound to happen, it’s crucial to have a system that can keep running. Nobody wants their whole system to just stop working, right?

Why only 2?

In a real-world scenario, you can achieve partition tolerance with consistency, or partition tolerance with availability. But once a network partition actually occurs, you can’t have consistency and availability at the same time.

Example

Let’s say a network partition happens and node A isn’t responding, or is delayed. It’s now out of sync with nodes B, C, and D, and it needs time to catch up before it has the latest data again.

Choose Consistency

So if I prioritize consistency, I wouldn’t route requests to node A for now (the system should return errors or reject requests intended for node A).

We want all nodes to show the same data simultaneously, because that’s what consistency is all about. The trade-off here is availability: node A won’t respond to client requests during this time.

  • Latency: Ensuring consistency adds latency, since a write has to propagate to all replica nodes before it can be considered complete, which slows responses.

  • Reduced throughput: Due to waiting for write propagation and acknowledgement from all nodes, the overall system throughput gets reduced under higher consistency.

  • Lower availability: During a network failure between nodes, consistent systems have lower availability, as some requests are rejected or time out to prevent inconsistency.

  • Increased cost: Extra resources and infrastructure needed to ensure timely propagation and coordination between nodes adds cost.
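Matching the example above (node A partitioned and stale), a CP system’s read path can be sketched like this: requests to A fail fast rather than return out-of-date data. The names and error choice are illustrative.

```python
in_sync = {"B": 100, "C": 100, "D": 100}
stale_nodes = {"A"}  # A missed the latest writes during the partition

def cp_read(node):
    if node in stale_nodes:
        # Consistency first: better an error than an out-of-date answer.
        raise ConnectionError(f"node {node} is out of sync; request rejected")
    return in_sync[node]

try:
    cp_read("A")
    rejected = False
except ConnectionError:
    rejected = True  # availability is sacrificed to protect consistency

print(rejected, cp_read("B"))
```

Clients hitting A get an error they must handle, but no client ever sees two different balances.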

Choose Availability

On the other hand, if availability is more important, node A will keep handling client requests and show the most recent data it has.

The drawback is that this data might not match what’s on nodes B, C, and D, which breaks the rule of consistency:

  • Inconsistency: With high availability, reads may return stale or inconsistent data if an update hasn’t propagated to all nodes yet, which violates consistency.

  • Conflicting writes: Higher risk of conflicting/overwritten writes since writes are allowed asynchronously without coordination.

  • No error handling: Because the focus is on answering every request, errors and exceptions may be masked instead of handled properly.

  • Unbounded staleness: There’s typically no limit on how stale returned data can be in an available system, so some reads may return very old data.

  • Difficult conflict resolution: Resolving conflicting writes is more difficult without strong coordination.

  • Reduced data integrity: It becomes harder to enforce constraints and maintain integrity without consistent reads/writes, and data can become corrupted.
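The AP counterpart of the same toy setup: node A keeps answering with whatever value it holds, trading correctness for responsiveness (again, illustrative names only).

```python
nodes = {"A": 50, "B": 100, "C": 100, "D": 100}  # A missed the latest write

def ap_read(node):
    return nodes[node]  # always respond, even if the value is stale

stale_answer = ap_read("A")  # responsive but inconsistent
fresh_answer = ap_read("B")  # nodes disagree until A catches up
print(stale_answer, fresh_answer)
```

Every client gets an answer immediately; the price is that two clients reading at the same moment can see different values, which is exactly the conflict-resolution problem listed above.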

So, which one’s your match?

Different storage systems prioritize different aspects of the CAP Theorem, let me break it down:

  • CP (Consistency, Partition Tolerance): Databases like HBase and BigTable prioritize having up-to-date data across all nodes, they’re willing to give up some availability.

  • CA (Consistency, Availability): Largely theoretical, most real-world systems need to be able to handle network issues, which means they need partition tolerance. But single-node databases, like traditional RDBMS, can fit into this category.

  • AP (Availability, Partition Tolerance): Options like Cassandra and Couchbase go for this option, they aim to keep running smoothly even when consistency across nodes might lag.

“What’s the deal? CP or AP, which one should I actually choose?”

If you’re working with important data like financial records, you’d want all nodes to show the same information, right? Imagine one node saying your balance is $1000 while another says it’s $50. That wouldn’t be good, so, in cases like this, going with CP makes the most sense.

But if you care more about a smooth user experience and your application mostly reads data, then AP is a better fit. This is especially true when the data isn’t critical and it’s okay if it’s a little out of date.

Note from Author

In systems like MongoDB, DynamoDB or ScyllaDB, it’s not a one-size-fits-all situation; these systems are designed to be flexible. You can choose strong consistency for critical operations and more relaxed settings where fast responses matter more.

This way, you don’t have to stick with just one approach for your entire system, different parts can operate at different levels of consistency or availability, depending on the requirements.
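Tunable consistency can be sketched as a per-request replica count. The ONE/QUORUM idea mirrors Cassandra-style consistency levels, but this model is purely illustrative and assumes the latest write already reached a quorum (B and C).

```python
# Each replica holds (value, version); replica A lags behind.
replicas = {"A": ("old", 1), "B": ("new", 2), "C": ("new", 2)}

def read(level):
    """Ask `level` replicas and keep the highest-versioned answer."""
    answers = list(replicas.values())[:level]
    return max(answers, key=lambda vv: vv[1])[0]

fast = read(1)  # like ONE: quick, but may be stale (only lagging A answers)
safe = read(2)  # like QUORUM: 2 of 3 must overlap the quorum write -> fresh
print(fast, safe)
```

The same system serves both calls; the caller decides, per request, where to sit on the consistency/availability spectrum.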
