Leader-Follower Architecture
Imagine you are the owner of a social media company. You know that database read traffic is very high, with billions of users scrolling through posts and feeds. On the other hand, the number of users adding posts is comparatively low, so write traffic is much less frequent. Because your platform is used worldwide, minimising latency is essential. One way to achieve this is a leader-follower architecture: the leader database handles all write requests (e.g., users posting new content), while multiple follower databases replicate the leader and serve all read requests (e.g., users scrolling). The followers keep themselves up to date with the leader with minimal replication lag.
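To make this split concrete, here is a minimal sketch of how an application layer might route queries in such a setup. The leader and follower connection objects and their execute method are assumptions made for this example; real database drivers and proxies differ in the details.

import random

class LeaderFollowerRouter:
    # Routes queries in a leader-follower setup: all writes go to the
    # leader, reads are spread across the follower replicas.
    def __init__(self, leader, followers):
        self.leader = leader          # connection to the leader database
        self.followers = followers    # connections to the read replicas

    def write(self, query, params=()):
        # Writes (e.g. publishing a post) must go to the single leader.
        return self.leader.execute(query, params)

    def read(self, query, params=()):
        # Reads (e.g. loading a feed) can be served by any follower; in
        # practice you would prefer the geographically closest replica.
        replica = random.choice(self.followers)
        return replica.execute(query, params)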
If the leader crashes or becomes unreachable, write requests may need to be blocked temporarily. However, your design should allow users to continue sharing posts without long disruptions. To achieve this, a follower can take over the leader's role through a process called leader election.
Electing a Leader
Several algorithms facilitate leader election among multiple instances or processes. A good leader election algorithm should guarantee safety and liveness: there is at most one leader at any time (safety), and an election eventually completes even when some nodes fail (liveness). A popular algorithm for leader election is the Raft algorithm, but I'll leave a detailed discussion of Raft for another blog post. Many practical implementations avoid complex consensus algorithms to reduce external dependencies and instead use simpler methods, which I'll explain here.
Practical Scenarios
You would typically use a fault-tolerant key-value store to elect a leader among candidates (the processes competing to become the leader that will receive write requests for your social media platform). By "fault-tolerant", I mean the store itself remains available and consistent even when individual nodes fail, so we can treat it as a reliable building block for the election.
There is a key K with an initial value V_o. During an election, every candidate process tries to change its value from V_o to some new value V_n. If the value of K in the store still matches V_o, it can be changed to V_n; the process that succeeds in making this change becomes the leader and continues to update the value. If the leader fails, another process should get a chance to become the leader and take over responsibility for accepting write requests. To enable this, we set an expiry time, or Time To Live (TTL), on the key. After this period the value of K expires, allowing all active processes to try to change it again using a compare-and-swap mechanism (a minimal sketch follows the list below). Compare-and-swap involves two steps:
- Compare: check that the current value of K still equals the expected value V_o.
- Swap: if it does, atomically replace it with V_n.
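For illustration, here is a minimal sketch of one election round built on this idea. The store client and its compare_and_swap method (which atomically sets the key to a new value only if it still holds the expected one, and attaches a TTL) are assumptions for this example, not a specific product's API.

import time

LEADER_KEY = "leader"    # the key K from the description above
LEASE_TTL = 10           # seconds before the key expires

def try_become_leader(store, candidate_id):
    # Read the current value V_o of the key.
    current = store.get(LEADER_KEY)
    # Compare-and-swap: set K to V_n (our id) only if it still equals V_o.
    # At most one candidate can win this atomic operation per round.
    return store.compare_and_swap(LEADER_KEY,
                                  expected=current,
                                  new=candidate_id,
                                  ttl=LEASE_TTL)

def leader_loop(store, candidate_id):
    # The winner keeps refreshing the key before the TTL elapses; if it
    # crashes, the key expires and the remaining candidates try again.
    while store.compare_and_swap(LEADER_KEY, expected=candidate_id,
                                 new=candidate_id, ttl=LEASE_TTL):
        time.sleep(LEASE_TTL / 2)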
Thus, exactly one process among the candidates locks the key K and becomes the leader. Locking is essential because, without it, more than one process might update K at the same time. However, locking alone doesn't guarantee a successful election. Imagine the locking mechanism looks like this:
# `lease`, `store`, `update`, and `filename` are placeholders for a lock/lease
# client, the underlying store, the local update logic, and the key being updated.
if lease.acquire():                              # try to take the lock
    try:
        content = store.read(filename)           # read the current value
        new_content = update(content)            # compute the new value locally
        store.write(filename, new_content)       # write it back
    finally:
        lease.release()                          # always release the lock
The process locks the key K (or the file containing it), then reads and writes to it. A problem arises, however, if the process locks and reads the key but the OS then slows down or pauses it during processing: the lock can expire before the write completes, and another candidate may acquire it in the meantime. To address this, the process should check, just before writing, how much of the lock's lifetime remains according to its local clock (the time elapsed since it acquired the lock versus the lock duration). The write should proceed only if expiry is still sufficiently far away, so that only one candidate acts as leader and no race condition occurs.
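A sketch of that guard might look like the following. It assumes the process recorded lease.acquired_at from a local monotonic clock and knows lease.duration when it took the lock; both names are illustrative rather than part of any particular lease library.

import time

SAFETY_MARGIN = 2.0   # seconds of lease time that must remain before writing

def guarded_write(lease, store, filename, new_content):
    # Measure, with the local clock only, how long the lock has been held.
    elapsed = time.monotonic() - lease.acquired_at
    remaining = lease.duration - elapsed
    if remaining < SAFETY_MARGIN:
        # The lock may already have expired (or is about to), so another
        # candidate could hold it now; abort rather than risk two
        # "leaders" writing at once.
        raise TimeoutError("lease too close to expiry; aborting write")
    store.write(filename, new_content)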
This approach only works if clocks are synchronized, which is not always feasible. To handle this, we can assign a version number to each file, incrementing it each time the file is updated. The process holding the lock can read the file and its version number from the file store, perform some local computation, and update the file (and increment the version number) only if the version number remains unchanged. The process can perform this validation atomically using a compare-and-swap operation, supported by many file stores.
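As a rough sketch, the version-checked update could look like this. The read_with_version and compare_and_swap calls stand in for whatever atomic, versioned API the file or key-value store actually provides.

def update_with_version_check(store, filename, update):
    # Read the content together with its current version number.
    content, version = store.read_with_version(filename)

    # Perform the local computation while (we believe) we hold the lock.
    new_content = update(content)

    # Write only if nobody has bumped the version in the meantime; the
    # store increments the version atomically on success.
    if not store.compare_and_swap(filename, new_content,
                                  expected_version=version):
        # Another process updated the file first: we effectively lost the
        # lease, so step down or retry instead of overwriting its work.
        return False
    return True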
Conclusion
The leader-follower architecture, combined with an efficient leader election mechanism, ensures a robust design that minimizes downtime for your social media platform, supporting high scalability and availability for a global user base.
Here are links to my previous posts on distributed systems:
- Building Resilient Applications: Insights into Scalability and Distributed Systems
- Understanding Server Connections in Distributed Systems
- How are your connections with web secure and integral?
- Understanding System Models in Distributed system
- Ping & Heartbeat in distributed systems
- How Real-Time Editing Works: Understanding Event Ordering in Distributed Systems
Feel free to check them out and share your thoughts!