
Part 1: A Primer on The Scalability Test and Radix

Radix ・ 7 min read

This is the first part of a two-part series on how we built and deployed a test that pushed the entire transaction history of Bitcoin across the Radix ledger in less than 15 minutes.

What is this test?

These tests replay the entire 10 years of Bitcoin’s transaction history on the Radix ledger, with full transaction and signature validation, on a network of over 1,000 Nodes distributed evenly throughout the world.

For the first time since the creation of public, trustless networks, we have demonstrated a technology that can truly support even the world’s most demanding transactional applications.

What are these tests demonstrating?

That technology for the transfer and ownership of value, without a central authority, can exist at the same scale as the protocol that the internet is based on.

The mission of Radix is to give anyone, anywhere, friction-free access to the digital economy. To do this, it must be able to serve the world without crippling costs or technical bottlenecks.

This essentially means a stateful protocol that can work for over 7.5 billion people and 500 billion devices simultaneously. We built Radix to do exactly this, and to prove it we are running a series of throughput tests starting this week. These tests are built to simulate real-world conditions, with full transaction validation and without cutting any corners.

How does this compare to what has come before?
With the advent of the internet came the advent of digital commerce. Since then, the world has needed ever larger transactional throughput just to keep up with the needs of its increasingly connected citizens.

Early blockchain protocols broke this progression towards platforms that could function for an increasingly interconnected world. Radix provides a platform on which the next generation of digital-first companies can be built, and that can scale to every single person in the world.

What sort of use case requires this kind of throughput?

Few individual use cases require that level of throughput, but because the throughput of a public ledger is shared by every single application built on top of it, the cumulative throughput capacity is key.

The simplest single use case for something of this scale and scope would be the issuance and use (domestic + international, consumer + enterprise + government) of the money of a nation.

Such a system would remove the need for services such as Paypal, Visa, and Mastercard, as well as much of the back end systems that banks use today.

Although the use case of money is only the very simplest of financial applications that can be built on Radix, it also forms the foundation of both economies and financial products, all of which can also be built more easily once money itself is programmable.

To learn more about fiat token/digital cash issuance on Radix, please see our knowledge base.

What dataset are you using to simulate this?

For the first runs, we are testing the throughput of the Radix network using a verifiable data source that we have a lot of love and respect for: the Bitcoin ledger transaction history.

We picked the Bitcoin dataset because it is, like Radix, based on the UTXO transaction model, which we can convert to Radix transactional entities (Atoms). For the duration of the test, anyone can search for their accounts and confirm their transaction history matching the real BTC ledger.
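The conversion described above can be sketched roughly as follows. This is an illustrative model only, not the actual Radix API: the `Particle` and `Atom` classes and the `btc_tx_to_atom` helper are hypothetical names used to show how a UTXO-style transaction (inputs consumed, outputs created) maps onto an Atom-style entity.

```python
from dataclasses import dataclass

# Hypothetical structures -- not the actual Radix API -- illustrating how a
# UTXO-style Bitcoin transaction might map onto a Radix-style "Atom".

@dataclass
class Particle:
    owner: str        # public key / address
    amount: int       # value in satoshis
    spent: bool = False

@dataclass
class Atom:
    consumed: list    # particles destroyed (the BTC inputs)
    created: list     # particles created (the BTC outputs)

def btc_tx_to_atom(inputs, outputs):
    """Map a Bitcoin transaction, given as lists of (address, amount)
    pairs, onto an Atom that consumes the inputs and creates the outputs."""
    return Atom(
        consumed=[Particle(addr, amt, spent=True) for addr, amt in inputs],
        created=[Particle(addr, amt) for addr, amt in outputs],
    )

atom = btc_tx_to_atom(
    inputs=[("1AliceAddr", 50_000)],
    outputs=[("1BobAddr", 30_000), ("1AliceAddr", 19_000)],  # 1,000 sat fee
)
# Outputs can never exceed inputs; the difference is the fee.
assert sum(p.amount for p in atom.created) <= sum(p.amount for p in atom.consumed)
```

Because both models describe value as discrete outputs that are consumed exactly once, the conversion preserves enough structure for anyone to match an account's Radix-side history against the real BTC ledger.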

We also liked it because there are 460 million Bitcoin addresses, which is equivalent in number to the population of a large nation.

Is this the maximum TPS Radix is capable of?
This is by no means the maximum throughput of our platform, but it is definitely stretching it much further than we have ever tried before.

As our scalability is based on sharding, the more shards, the higher the possible transaction throughput. As Radix has a [fixed shard space of 18.4 quintillion shards](https://www.radixdlt.com/post/sharding-in-radix/), the maximum theoretical throughput is far more than could ever be used, even by the entire world.
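For the curious, "18.4 quintillion" is simply 2^64, which is easy to verify:

```python
# The fixed Radix shard space is 2**64 shards -- the "18.4 quintillion"
# figure quoted above.
SHARD_SPACE = 2 ** 64
print(SHARD_SPACE)                    # 18446744073709551616
print(round(SHARD_SPACE / 1e18, 1))   # 18.4 (quintillion)
```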

What does this blog cover?
This blog covers what we did to set up these tests, plus how we got the Radix ledger to do full signature and UTXO validation of the entire Bitcoin transaction history in less than 30 minutes.

How big is the network?
The first run of these tests concentrates on speed, rather than fault tolerance. As a result, the network consists of approximately 1,000 nodes with minimal overlap, each node servicing approximately 1/1,000th of the total ledger.

The Radix consensus does not rely on active global consensus (PoW/PoS) but on a form of shard-level passive consensus based on the progression of logical time. The lack of overlap does not mean that transactions are not being correctly validated, but it does prevent the network from dealing with significant node dropout in this configuration.

Should anyone wish to test the fault tolerance of the system by increasing the overlap on our test network, you can spin up your own version of the ledger from our test code on GitHub. We will also be testing this in the future, but it requires us to keep asking Google to give us enough nodes to test with!

Radix throughput test code: https://github.com/radixdlt/mtps

On Radix, a node with 8GB of RAM and 4 cores can process approximately 2,000 transactions per second, including full validation and gossiping. For this test, we needed some extra RAM to process and cache the Bitcoin dataset, which raised the requirements to 30GB of RAM and 8 cores, but this is not representative of main-net requirements.
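As a rough back-of-envelope (our own arithmetic here, not an official figure), the per-node rate and the average validation overlap quoted later in this post imply a network-level ceiling:

```python
# Figures from this post (assumptions for a rough estimate only):
nodes = 1_000
tps_per_node = 2_000          # full validation + gossiping, per node
avg_validators_per_tx = 4.2   # each tx is validated by ~4.2 nodes on average

# Back-of-envelope ceiling: total node capacity divided by how many nodes
# must each process any given transaction. Real-world networking overhead
# would push the achievable figure below this.
network_tps_ceiling = nodes * tps_per_node / avg_validators_per_tx
print(f"{network_tps_ceiling:,.0f} TPS")
```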

What are the limitations?
Redundancy in this test is configured using “shard groups”. The network has a fixed shard space of 18.4 quintillion shards, and a node can serve as much or as little of that shard space as it likes (assuming it has enough resources). We spread the nodes out across the shard space using shard groups: the fewer the shard groups, the larger the slice of shard space each node covers. E.g. 1 shard group = all 18.4 quintillion shards, i.e. 100% of the ledger; 2 shard groups = 50% of the ledger per group; and so on. The more nodes per group, the greater the redundancy – e.g. 100 nodes across 2 shard groups would mean 49-node redundancy per group.
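The shard-group arithmetic above can be written out directly. This is a simplified sketch assuming nodes are distributed evenly across groups; the `shard_group_layout` helper is our own illustration, not test code from the repo:

```python
SHARD_SPACE = 2 ** 64   # fixed shard space: 18.4 quintillion shards

def shard_group_layout(nodes: int, shard_groups: int):
    """Coverage and redundancy implied by splitting the shard space into
    equal shard groups, assuming an even spread of nodes (simplified)."""
    shards_per_group = SHARD_SPACE // shard_groups
    nodes_per_group = nodes // shard_groups
    redundancy = nodes_per_group - 1   # nodes beyond the first in each group
    return shards_per_group, nodes_per_group, redundancy

# The example from the text: 100 nodes across 2 shard groups.
per_group, n_per_group, redundancy = shard_group_layout(100, 2)
assert per_group == SHARD_SPACE // 2   # each group covers 50% of the ledger
assert redundancy == 49                # 49-node redundancy per group
```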

For this test, we are running the network at low redundancy to get the most bang for our buck on Google Cloud. This means approximately 1,000 shard groups for 1,000 nodes. These shard groups overlap a little, but not by a huge amount. Each transaction touches on average 4.2 shards, meaning each transaction is validated and checked for double spends by an average of 4.2 nodes.

In future tests, we will reconfigure the network to have increased redundancy, which will, therefore, have a lower maximum throughput on the network for the same node count. The fundamental limitation is how much money we are willing to spend on running these tests.

Do you detect bad blocks?
There are no blocks or mining on Radix – all Atoms (transactions/on ledger operations) are submitted and checked individually and are determined to be either valid or invalid on a per transaction basis (UTXO double spend check, signature validation, etc.).

Because Radix state sharding has similar properties to Bitcoin’s UTXO model (with the addition of smart-contract-like functionality), applying the Bitcoin transaction history with transaction validation and double-spend checks was relatively simple for us to hack into Radix – with the exception of non-standard Bitcoin scripts, where we had to get a bit more inventive; see the millionare-dataset-preparator tool for more details.

How do you stop a double spend?
Transactions are individually validated – this is done using a combination of the Radix consensus layer (Tempo) and the programmable system of constraints that we can add using the Atom Structure and the Constraint Machine. Together these are able to strictly order related transactions (e.g. from the same private key) and drop double spends.
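The invariant this enforces can be sketched in a few lines. To be clear, this is not the Tempo consensus algorithm or the Constraint Machine – the `Shard` class below is a hypothetical illustration of the end result: within a shard, related operations are strictly ordered, so the second of two conflicting spends is simply dropped.

```python
# Simplified sketch of per-shard double-spend rejection. NOT the actual
# Tempo/Constraint Machine implementation -- just the invariant it
# enforces: each output may be consumed at most once, in a strict order.

class Shard:
    def __init__(self):
        self.unspent = set()   # ids of live (unconsumed) outputs on this shard
        self.log = []          # accepted operations, in logical-time order

    def create(self, output_id: str) -> None:
        self.unspent.add(output_id)

    def spend(self, output_id: str) -> bool:
        if output_id not in self.unspent:
            return False       # unknown or already consumed: a double spend
        self.unspent.remove(output_id)
        self.log.append(output_id)
        return True

shard = Shard()
shard.create("utxo-1")
assert shard.spend("utxo-1") is True    # first spend accepted
assert shard.spend("utxo-1") is False   # conflicting re-spend dropped
```

Because all spends from a given key land on the same shard (see below), this check never needs a global view of the ledger.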

To understand how this works in a bit more detail, please see our [explainer video series here](https://www.youtube.com/watch?v=sW8nWeUnkK0&list=PLBGHv3uedRNTBeJNq90p-Ph3Yuc7imH-r).

The natively sharded structure of the Radix ledger is essential. Because the shard space is fixed and will never change, even once a very large number of people are using the network, it can also be used to help partition transactions and load balance the network.

The main way this is done is via the public key of a wallet. On Radix, the public key of any address also tells you which shard it lives on. This has the very desirable property of automatically grouping together related transactions (all spends from the same key must happen on the same shard) and ungrouping unrelated transactions (two keys have a 1/(2^64) chance of being on the same shard).

This means a node does not need to know about the whole ledger to check the validity of a specific spend; just the shard the key lives on. This is why we can do massively asynchronous processing of everything from application messages to Bitcoin transactions on Radix.
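One plausible way to picture the key-to-shard mapping is hashing the public key down to a 64-bit shard id. The exact derivation Radix uses may differ – `shard_of` below is our own sketch – but the properties described above follow from any deterministic mapping of this shape:

```python
import hashlib

SHARD_SPACE = 2 ** 64   # fixed shard space

def shard_of(public_key: bytes) -> int:
    """Derive a shard id from a public key (illustrative scheme only;
    the real Radix derivation may differ). What matters is determinism:
    every spend from the same key always lands on the same shard."""
    digest = hashlib.sha256(public_key).digest()
    return int.from_bytes(digest[:8], "big") % SHARD_SPACE

# Same key -> same shard, always: related transactions group together.
assert shard_of(b"alice-key") == shard_of(b"alice-key")
# Different keys almost certainly land apart (1 in 2**64 collision odds).
assert shard_of(b"alice-key") != shard_of(b"bob-key")
```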

A note on the Bitcoin dataset
The Bitcoin fee model incentivizes grouping together as many transactions as possible in the same block. The Radix fee model will disincentivize this (we don’t have blocks). In this regard, although we can achieve a high transactions-per-second throughput on this data, the Bitcoin dataset is not optimized for the Radix data architecture.

For future tests, we will be using more traditional 1-to-1 transactional datasets from financial institutions and crypto exchanges. This will produce a friendlier dataset, more aligned with the Radix architecture, which better represents the vast majority of transactions we expect to see on the Radix network.

Join The Radix Community

Telegram for general chat
Discord for developers chat
Reddit for general discussion
Forum for technical discussion
Twitter for announcements
Email newsletter for weekly updates
Mail to hello@radixdlt.com for general enquiries
