TashiGG

Tashi Consensus Engine (TCE) — High-Throughput, Low-Latency, Fair Ordering & the Doors it Opens in the Cloud

Distributed systems are becoming increasingly important as the need for scalable and highly available applications grows — especially with the rise of cloud computing. The CAP theorem forces a trade-off between consistency, availability, and partition tolerance: because network partitions are unavoidable, every distributed system ultimately has to choose between consistency and availability. This is a problem all distributed systems face — databases like MongoDB, caches like Redis, message queues like Kafka, and schedulers like Kubernetes — and Tashi could make all of them better.

CAP theorem tradeoffs between different products

It all comes down to one fundamental issue: reaching consensus among different computers. Many algorithms have been proposed to solve this problem, with Raft being one of the most popular (seriously, it’s pretty much everywhere). However, Raft has some limitations, especially in terms of performance and scalability, which can make it a less-than-ideal choice for certain use cases. Its leader-based design both reduces its availability and caps its throughput at the horsepower of a single server.

Enter the Tashi Consensus Engine (TCE), Tashi’s proprietary consensus algorithm that improves on Raft in a number of ways. TCE is fully asynchronous, leaderless (throughput grows linearly with the number of nodes, with minimal impact on latency), and designed for high throughput and fair ordering. That makes it an attractive alternative for pretty much every application outlined above, and it is the foundation of Tashi’s first market offering — Unity TNT, which brings together the best of P2P and dedicated game servers.

How TCE Works

TCE is a consensus algorithm that achieves high throughput and fair ordering by allowing multiple nodes to make progress in parallel, rather than relying on a single leader to coordinate all activity. This approach eliminates the performance bottlenecks that arise with leader-based algorithms like Raft, making TCE well-suited for high-volume workloads.

Additionally, TCE is fully asynchronous, meaning that it can continue to make progress even when network delays or failures occur. This robustness makes it a good choice for use cases like distributed message queues, where the ability to process messages even in the face of network partitions is critical.
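
To make the leaderless idea concrete, here is a minimal Go sketch of the general pattern, not TCE’s actual API (the Node and Event types and the quorum threshold are assumptions for illustration): every node accepts writes directly and gossips them to its peers, and an event counts as delivered once a quorum of peers acknowledges it, so one slow or unreachable node never stalls the whole round.

```go
// Minimal sketch of leaderless progress (illustrative only, not TCE's real API).
// Every node accepts writes and gossips them to its peers; an event counts as
// delivered once a quorum acknowledges it, so a straggler can't stall the round.
package main

import (
	"fmt"
	"math/rand"
	"sync"
	"time"
)

// Event is a client write originating at some node (assumed shape).
type Event struct {
	Origin  int
	Payload string
}

// Node holds a local log of every event it has received via gossip.
type Node struct {
	id  int
	mu  sync.Mutex
	log []Event
}

func (n *Node) Receive(e Event) {
	n.mu.Lock()
	defer n.mu.Unlock()
	n.log = append(n.log, e)
}

// broadcast sends an event to every node concurrently and returns as soon as
// quorum of them have acknowledged it, tolerating slow peers instead of
// waiting on them.
func broadcast(nodes []*Node, e Event, quorum int) {
	acks := make(chan int, len(nodes))
	for _, n := range nodes {
		go func(n *Node) {
			// Simulated, uneven network delay.
			time.Sleep(time.Duration(rand.Intn(50)) * time.Millisecond)
			n.Receive(e)
			acks <- n.id
		}(n)
	}
	for i := 0; i < quorum; i++ {
		<-acks
	}
}

func main() {
	nodes := []*Node{{id: 0}, {id: 1}, {id: 2}, {id: 3}, {id: 4}}
	quorum := 3 // a simple majority of 5 nodes (illustrative threshold)

	// Every node originates events in parallel; there is no leader for the
	// traffic to funnel through, so work scales with the number of nodes.
	var wg sync.WaitGroup
	for i, origin := range nodes {
		wg.Add(1)
		go func(i int, origin *Node) {
			defer wg.Done()
			broadcast(nodes, Event{Origin: origin.id, Payload: fmt.Sprintf("tx-%d", i)}, quorum)
		}(i, origin)
	}
	wg.Wait()
	fmt.Println("all events reached a quorum without a central coordinator")
}
```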

Improvements Over Raft

One of the biggest advantages of TCE over Raft is its scalability. TCE can handle more nodes and higher volumes of data than Raft, making it a better choice for use cases that require high levels of concurrency and parallelism. This scalability is achieved through the use of a leaderless design, which eliminates the need for a central coordinator, freeing up resources for processing data.

Another key advantage of TCE is its fair ordering. In Raft, the leader is responsible for ordering all requests, which can result in unfairness in the event of network delays, failures, or problematic network topology (nodes not being equidistant from the leader). TCE eliminates this problem by allowing multiple nodes to make progress in parallel, ensuring that all requests are ordered fairly, even in the face of failures.
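
For a flavor of how fair ordering can work without a leader, the sketch below uses a well-known technique from hashgraph-style algorithms: order each transaction by the median of the timestamps at which the nodes first saw it, so no single clock, outage, or network vantage point dictates the final order. This is an illustration only; TCE’s actual ordering rule is proprietary and may differ.

```go
// Sketch of one well-known leaderless fair-ordering rule (used by
// hashgraph-style algorithms): order each transaction by the median of the
// per-node receive timestamps. Illustrative only; TCE's rule may differ.
package main

import (
	"fmt"
	"sort"
	"time"
)

// Tx records when each node in the network first saw a transaction.
type Tx struct {
	ID         string
	ReceivedAt []time.Time
}

// medianTimestamp returns the middle receive time across nodes; a single
// delayed or clock-skewed node cannot move it very far on its own.
func medianTimestamp(t Tx) time.Time {
	ts := append([]time.Time(nil), t.ReceivedAt...)
	sort.Slice(ts, func(i, j int) bool { return ts[i].Before(ts[j]) })
	return ts[len(ts)/2]
}

// fairOrder sorts transactions by their median receive timestamps.
func fairOrder(txs []Tx) []Tx {
	sort.SliceStable(txs, func(i, j int) bool {
		return medianTimestamp(txs[i]).Before(medianTimestamp(txs[j]))
	})
	return txs
}

func main() {
	base := time.Now()
	at := func(ms ...int) []time.Time {
		out := make([]time.Time, len(ms))
		for i, m := range ms {
			out[i] = base.Add(time.Duration(m) * time.Millisecond)
		}
		return out
	}

	// The third node is far away: it sees "a" very late and "b" very early,
	// but the medians still reflect what most of the network observed,
	// so "a" is ordered before "b".
	txs := []Tx{
		{ID: "b", ReceivedAt: at(30, 35, 5)},
		{ID: "a", ReceivedAt: at(10, 12, 90)},
	}
	for _, tx := range fairOrder(txs) {
		fmt.Println(tx.ID, "median offset:", medianTimestamp(tx).Sub(base))
	}
}
```

The median is the interesting part: a single skewed or delayed node can only shift a transaction’s effective timestamp as far as the rest of the network allows, which is exactly the kind of unfairness a single leader cannot guard against.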

Finally, TCE delivers an increase in robustness. In any two-phase-commit system with a single leader, a failure of the leader means 100% of the uncommitted data is lost. In the leaderless TCE, only about 1/n of the in-flight data is at risk, where n is the number of nodes in the system.
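
A quick back-of-the-envelope illustration of that claim (the event counts are hypothetical, and the model deliberately assumes in-flight data is spread evenly across nodes):

```go
// Back-of-the-envelope numbers for the robustness claim above. The in-flight
// count is hypothetical, and the model assumes uncommitted data is spread
// evenly across nodes; n = 1 is the single-leader case.
package main

import "fmt"

func main() {
	const inFlight = 10_000 // uncommitted events across the cluster (assumed)

	for _, n := range []int{1, 3, 5, 10} {
		lostOnCrash := inFlight / n
		fmt.Printf("n=%2d nodes: one crash loses %5d of %d in-flight events (~%.0f%%)\n",
			n, lostOnCrash, inFlight, 100.0/float64(n))
	}
}
```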

Use Cases for TCE

TCE is already being explored and implemented for a variety of use cases — with the gracious help of the Scale tier of Microsoft’s Founders Hub, and our friends at the company — including distributed message queues like Kafka, schedulers like Kubernetes and Nomad, databases like MongoDB and CockroachDB, and caches like Memcached and Redis. TCE is also the backbone of Unity TNT (Tashi Network Transport), a multiplayer offering that leverages the high throughput and fair ordering of TCE to provide low-latency, high-performance network transport for multiplayer games and other applications.

TL;DR

The Tashi Consensus Engine (TCE) is a new consensus algorithm that offers a number of improvements over Raft, making it a better choice for use cases that demand high throughput, strict fairness, and strong robustness. Whether you're building a distributed message queue, scheduler, database, cache, or multiplayer application, TCE is worth exploring as a potential solution for your needs (if we don’t get there first). Shoot us a message if you’re interested in exploring a partnership.
