Caribou2: Distributed Data Favoring FPGAs

URL = http://www.vldb.org/pvldb/vol10/p1202-istvan.pdf

Extremely important for us to favor FPGAs in the creation of this protocol because it will allow (in my opinion) for Proof of Work to be bridged beyond its primary functionality (i.e., idiomatically searching for a particular nonce value that, when hashed with the pre-prepared data a miner assembles in a 'block template', evaluates to an output below the difficulty target).
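For intuition, below is a minimal Python sketch of that nonce search. It glosses over Bitcoin's actual header serialization and endianness conventions; the point is just the brute-force shape of the work.

```python
import hashlib

def sha256d(data: bytes) -> bytes:
    # Bitcoin-style double SHA-256
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def mine(header_base: bytes, target: int, max_nonce: int = 2**32) -> int:
    # Brute-force the nonce until the hash, read as an integer, falls below the target
    for nonce in range(max_nonce):
        digest = sha256d(header_base + nonce.to_bytes(4, "little"))
        if int.from_bytes(digest, "big") < target:
            return nonce
    raise RuntimeError("nonce space exhausted; a real miner would alter the template")

# Toy difficulty: roughly 1 in 65,536 hashes qualifies
print(mine(b"block-template-bytes", target=2**240))
```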

This is preferable because it allows us to begin picking at the nagging issue of Proof of Work being a 'wasteful' exercise. (In the grand scheme, that is; maybe not so if evaluated from the meta-philosophical perspective that the resources the network expends calculating these nonce values are not wasted, because their use in fortifying Bitcoin's security is purposeful enough to justify the activity, even if the computations being performed cannot be usefully ported to any other purpose.)

Revolution in Thought About Mining Architecture

There are many that have scorned ASIC chips since their inception for various reasons.

Many of those reasons, however, rely on arbitrary, ill-defined, and subjective 'moral' / idealistic values that create an excessive amount of cognitive dissonance when juxtaposed with the amoral, apolitical context that Bitcoin was designed to fit within.

To quote Satoshi Nakamoto, responding on the Metzdowd e-mail list to a question posed by Hal Finney following the project's initial introduction via whitepaper, the consensus of Bitcoin is inherently apolitical because:

"It is strictly necessary that the longest chain is always considered the valid one. Nodes that were present may remember that one branch was there first and got replaced by another, but there would be no way for them to convince those who were not present of this. We can't have subfactions of nodes that cling to one branch that they think was first, others that saw another branch first, and others that joined later and never saw what happened. The CPU power proof-of-work must have the final say. The only way for everyone to stay on the same page is to believe that the longest chain is always the valid one, no matter what."

The above statement from Satoshi Nakamoto is a powerful one because it provides the driving principle for Bitcoin's sustenance up to this point.

Therefore, any and all changes, amendments, modifications, and/or augmentations to the Proof of Work design must be made carefully, to ensure that the original design goal of its implementation within Bitcoin is not eroded. If that does happen, then the resulting creation will likely be something that does not possess the necessary properties of a blockchain.

Purpose For the Exposition Above

Everything above was written with the intent of illuminating the fact that there are no logical footholds for claims such as the following:

  1. "ASIC miners ought to be bricked because there are too few ASIC mining companies mining on major cryptocurrencies such as Bitcoin, Litecoin, etc., which means that the mining ecosystem itself will eventually become centralized"

  2. "ASIC miners are to be loathed because they ultimately represent a reality in which only very large-scale, multi-national corporate conglomerates with estimated valuations in excess of a billion dollar are able to get a 'slice of the pie'"

  3. "ASIC mining presents a threat of some sort to the sustainability / security / viability of Bitcoin, therefore it ought to be bricked"

There are plenty of other arguments that have been made (and continue to be made) for why developers of cryptocurrency protocols should act with intent to 'brick' ASIC miners or thwart ASIC manufacturers.

Many of these arguments, on their face, espouse seemingly egalitarian viewpoints that depict a future most would consider favorable. Couple that with the promise of a mining ecosystem that is less damaging to the global environment by orders of magnitude, and it's downright easy to be persuaded that "bricking" ASIC miners is the way to go.

However, this project steers away from that argument for a few specific reasons.

The ultimate (most important) reason for straying from this strategy is not any philosophical opposition to the ideas espoused within anti-ASIC arguments, but rather that those ideas (within the context of blockchain) are entirely arbitrary, having nothing to do with the underlying functioning of the protocol itself.

To demonstrate this point, we will dissect a few of the anti-ASIC arguments that were made above.

Countering a Few Common Anti-ASIC Arguments

Argument One: "ASIC miners ought to be bricked because there are too few ASIC mining companies mining on major cryptocurrencies such as Bitcoin, Litecoin, etc., which means that the mining ecosystem itself will eventually become centralized":

Response / Rebuttal: There's no justifiable basis for why we should be concerned about, or even care about, the 'centralization' of miners. There is nothing in the Bitcoin whitepaper or its source code that alludes to the parity of mining pools on the network. In fact, based on the writings of Satoshi and other relevant protocol documentation, the implicit assumption appears to be that the mining ecosystem will situate itself by design: the Proof of Work protocol adjusts the difficulty target in accordance with fluctuating hash rate, balancing participation against the value of the bitcoins produced as a block reward (minus any and all sunk costs and operating expenses).
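For reference, the self-adjusting mechanism in question is Bitcoin's difficulty retargeting: every 2016 blocks, the target is rescaled by how far the last interval deviated from two weeks. A simplified Python sketch (the real implementation has additional rounding and encoding details):

```python
TARGET_TIMESPAN = 14 * 24 * 60 * 60   # two weeks, in seconds

def retarget(old_target: int, actual_timespan: int) -> int:
    # Bitcoin clamps the adjustment to a 4x swing in either direction
    actual_timespan = max(TARGET_TIMESPAN // 4,
                          min(actual_timespan, TARGET_TIMESPAN * 4))
    # Blocks arrived too fast -> smaller target (harder); too slow -> easier
    return old_target * actual_timespan // TARGET_TIMESPAN
```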

It's worth noting that Satoshi Nakamoto does not use the word 'decentralization' once in the whitepaper, which should tell us that this is more or less a means to an end. Additionally, the computations being performed by miners are easily auditable. In fact, the brilliance of the 'Proof of Work' scheme is that, despite the unfathomable amounts of hash power the network must churn out just to arrive at a legitimate solution (a correct nonce value), auditing the correctness of that solution may take only a few milliseconds on an archaic 32-bit processor. This allows for widespread, indisputable auditing of proposed blocks by nodes on the network, ensuring that a correct consensus by the majority will be reached within a timeframe bounded by the network's latency, assuming that all nodes respect the protocol's ruleset (i.e., the chain with the greatest Proof of Work must be considered the one, true valid chain).
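That asymmetry is easy to see in code. Using the same simplified conventions as the mining sketch earlier, the audit is a constant-time check no matter how long the search took:

```python
import hashlib

def verify_pow(header: bytes, target: int) -> bool:
    # Auditing costs two SHA-256 passes, regardless of how much work the search took
    digest = hashlib.sha256(hashlib.sha256(header).digest()).digest()
    return int.from_bytes(digest, "big") < target
```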

Argument Two: "ASIC miners are to be loathed because they ultimately represent a reality in which only very large-scale, multi-national corporate conglomerates with estimated valuations in excess of a billion dollars are able to get a 'slice of the pie'"

Response / Rebuttal: It is admirable that there are those that wish to promote greater equity in the world / blockchain ecosystem; not necessarily for the merits of the idea itself, but for the intentions behind it (to provide a greater benefit to a greater number of individuals / participants). Unfortunately, the inherent structure of the Proof of Work process makes this egalitarian vision an impossibility.

As we saw with the ProgPoW Ethereum debate in 2019, 'bricking ASIC miners' only changes the playing field upon which these 'large monopolistic mining entities' will be playing. In other words, while it may seem that bricking Bitmain, Innosilicon, and other ASIC manufacturer / mining companies in favor of GPU mining is the ultimate solution (and indeed there are several blockchain projects that actively rotate their mining consensus algorithms on a scheduled basis to accomplish this very end), the net result will be no change.

To explain: Ethereum's mining ecosystem, which is GPU-dominated at the time of writing, presents no greater opportunity for entry by the 'average Joe' than Bitcoin. In fact, one could argue that the barriers to entry for mining on Ethereum are elevated in comparison to Bitcoin, because only two companies possess the requisite economies of scale to produce a competitive GPU product that prospective miners would consider investing funds in to start up or expand their existing mining operations. Those two GPU companies are NVIDIA and AMD. Both entities are multi-billion-dollar companies that likely cannot be usurped for market share in the GPU mining ecosystem.

A Different Approach to Mining Ecosystem Design

Rather than developing an ecosystem that seeks to "run away" from one method of mining in favor of another (i.e., ASIC vs. GPU or otherwise), we've made the conscious decision to target a specific type of mining hardware.

FPGAs.

Why FPGAs?

Their flexibility as well as their computational prowess.

Unlike ASIC chips, FPGAs are not single-purpose. In fact, one could consider FPGAs to be the polar opposite of ASIC chips, which means that they're incredibly flexible.

By 'flexible', we're referring to the fact that FPGAs can be purposed for a near infinite number of 'use cases' or tasks.

Quick Breakdown of How ASIC Chips Work

By nature, these chips are processors. The computer / device that you're using to read this (more than likely) has one. Perhaps that processor is an ARM core (for microdevices), an Apple A14 (for iOS / iPhone), a Qualcomm Snapdragon (Android), an Intel / AMD chip (laptop computer), or something else if you're an exotic individual that thrives on representing outlier use cases that don't fit squarely within assumption models like this one.

The purpose of that processor is to compute 'instructions'. Hence the name CPU (central processing unit).

These 'instructions' are actually really basic "building blocks".

In most cases, breaking down how this works would require such an excessive dissection of the fabric of computing (way outside the scope of this write-up) that it would be best bypassed entirely, for fear of losing the reader in minute details that are only peripheral to the main point being made.

However, we have the good fortune of being Bitcoin users. Thus, we have an immediately applicable example that we can draw from for reference.

Bitcoin Opcodes as a Parallel

Each processor (whether an ASIC or otherwise) comes with its own instruction set (or it's not a functioning chip). That means that there is some set of instructions / operations that are performed whenever an 'opcode' is signaled.

Per Wikipedia (they're actually a really reliable source on this):

"On traditional architectures, an instruction includes an opcode that specifies the operation to perform, such as add contents of memory to register - and zero or more operand specifiers, which may specify registers, memory locations, or literal data."

The biggest takeaway from the above is that the 'opcodes' used by processors can be considered analogous to the 'opcodes' that accompany each Bitcoin transaction.

Specifically, referring to opcodes such as the one below:

OP_DUP OP_HASH160 <pubkeyHash> OP_EQUALVERIFY OP_CHECKSIG

*the script above means that the input (the spender's public key) should be duplicated, then hashed with SHA256 followed by a RIPEMD160 of that SHA output; upon completion, that result will be "cached" in the stack, so to speak, with the `<pubkeyHash>` from the locking script being pushed on top of the ripemd160(sha256(initial_input)) that we computed earlier. Once that's done, the `OP_EQUALVERIFY` command will ensure that the two results are equal, immediately failing the script otherwise. Assuming that check passes, the accompanying signature that we provided for the transaction (which must've been generated with the private key associated with the public key from which the public key hash derives) must match the public key that we have given [that's the `OP_CHECKSIG` command]
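To make that concrete, here's a toy stack walk-through of just this one script pattern in Python. Real Script evaluation has many more rules, and the actual ECDSA check is stubbed out as a callback here:

```python
import hashlib

def hash160(data: bytes) -> bytes:
    # OP_HASH160: RIPEMD-160 of the SHA-256 of the input
    return hashlib.new("ripemd160", hashlib.sha256(data).digest()).digest()

def run_p2pkh(sig: bytes, pubkey: bytes, pubkey_hash: bytes, checksig) -> bool:
    stack = [sig, pubkey]                # pushed by the spender's signature script
    stack.append(stack[-1])              # OP_DUP
    stack.append(hash160(stack.pop()))   # OP_HASH160
    stack.append(pubkey_hash)            # <pubkeyHash> from the locking script
    if stack.pop() != stack.pop():       # OP_EQUALVERIFY: fail fast on mismatch
        return False
    pub, s = stack.pop(), stack.pop()    # OP_CHECKSIG pops the pubkey, then the sig
    return checksig(pub, s)              # a real node performs ECDSA verification here
```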

As eloquently explained by 'The Publius Letters' in a piece titled 'Segregated Witness: A Fork Too Far', the input that must be contributed by the user looking to spend the output locked by that script should be as follows:

"When one wants to spend this output in a transaction, they need to provide a and , found within the input’s signature script (Figure 6A-B), which will satisfy the pubkey script requirements. Figure 7 depicts how this is evaluated on the stack when a node attempts to verify a transaction."

The piece provides an excellent accompanying visual for the process as well (Figure 7 in the original piece). Each of the little 'opcodes' stepping through that visual enacts one of these operations.

In total, there are several dozen potential 'opcodes' that can be called in Bitcoin for inclusion in a transaction. (To be more accurate / correct, these opcodes are used in the formulation of a given address and thus end up representing the conditions that the owner / creator of said address wishes to have fulfilled before any and all funds sent to that specific address can be spent.)

Below are some examples of opcodes that are enabled for use on the Bitcoin protocol:
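A representative sample (the full table lives on the Bitcoin wiki's 'Script' page):

- OP_DUP, OP_DROP, OP_SWAP (stack manipulation)
- OP_SHA256, OP_HASH160, OP_HASH256 (hashing)
- OP_EQUAL, OP_EQUALVERIFY (comparison)
- OP_IF, OP_ELSE, OP_ENDIF, OP_VERIFY, OP_RETURN (flow control)
- OP_ADD, OP_SUB, OP_1ADD, OP_1SUB (arithmetic)
- OP_CHECKSIG, OP_CHECKMULTISIG, OP_CHECKLOCKTIMEVERIFY (signatures / locking)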

(Notably, some of the arithmetic- and string-based opcodes, such as OP_MUL, OP_DIV, OP_CAT, and the bitwise shifts, have been disabled in the protocol; a few for good reason, others out of what may amount to pure paranoia.)

Each one of the operations listed above can be considered to be an "instruction" on a processor, if that makes sense.

Conceptualizing ASICs vs FPGAs

To put it simply, the equivalent of an ASIC chip in this context would be a device that could only generate (and run) the specific script that we showed earlier (as a model).

Here it is again below (in case you missed it):

OP_DUP OP_HASH160 <pubkeyHash> OP_EQUALVERIFY OP_CHECKSIG

That's it.

And those opcodes must be used in that order or not at all.

Have any other ideas, designs, goals, dreams, or uses for the chip? Too bad. This is all that it's ever going to be able to do.

If you're interested in doing anything else, then consider either abandoning those dreams or simply purchasing / manufacturing a new ASIC chip.

FPGAs Are Akin to the Entire Opcode Set

The full playground, without restriction. Whatever script your natural creativity can manage to conjure up is eligible to be used.

There are obviously presets / defaults / examples out there for you to choose from, but by no means should you consider those to be limitations; they're merely suggestions.
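If it helps, here's the analogy in toy Python form: the 'ASIC' is a function whose behavior was fixed at fabrication, while the 'FPGA' is an interpreter that runs whatever opcode table you reconfigure it with. The names are illustrative only, not any real toolchain.

```python
import hashlib

def asic_sha256d(header: bytes) -> bytes:
    # The 'ASIC': does exactly one thing, forever
    return hashlib.sha256(hashlib.sha256(header).digest()).digest()

def fpga_like(program: list, stack: list, opcode_table: dict) -> list:
    # The 'FPGA': executes whatever opcode sequence it was reconfigured with
    for op in program:
        if op in opcode_table:
            opcode_table[op](stack)   # a defined operation
        else:
            stack.append(op)          # anything else is treated as a data push
    return stack

# 'Reconfiguring' is just swapping out the table of supported operations
table = {"OP_DUP": lambda s: s.append(s[-1]),
         "OP_ADD": lambda s: s.append(s.pop() + s.pop())}
print(fpga_like(["OP_DUP", "OP_ADD"], [21], table))  # -> [42]
```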

Why We're Targeting FPGA Design in the Creation of Foobar Protocol

Because we're looking to amend the Proof of Work function (ever so slightly).

Since the purpose of this blockchain is not to secure financial / monetary value (as an explicit and sole 'end goal'), we are freed from the burden of worrying about double-spend transactions.

The data being confirmed on this chain is the existence of a valid anchor (on one of the supported blockchains) as well as the simultaneous creation of a human-readable identifier (a URI, which must be in a very specific format in accordance with the stipulations of the IETF + ISRG).

There is no way to 'double-spend' this event. However, that does not let us 'off the hook' when it comes to verification and validation.

Thus, there will be conventional mining that takes place (i.e., the quest for the correct 'nonce' by the network), but there will be a key divergence between this protocol's mining implementation and that of Bitcoin (and other known Proof of Work blockchains).

The only reward that miners stand to receive is the fees paid by those that wish to stamp URIs on the blockchain (and subsequently upload content to the network as well).

There will be a separate set of 'miners' on the network that will fulfill the role of content-key hash lookups (if one is familiar with IPFS or other distributed file storage systems, then this idea should feel extremely 'familiar').

The concrete specifications for how the rewards will be distributed (as well as how many and how frequently, along with all other relevant attributes) will be spelled out in another section that's specifically dedicated to addressing that facet of the protocol. For now, we're just looking at the general design of these "miners".

Explaining the Role of These Data Miners

Rather than mining for a specific nonce value, data will be mined (in the form of lookups on the network for requested content).

These nodes also have the option of participating in the actual storage of data on the network (if they so choose), but those storage nodes will be compensated in a manner entirely separate from the data mining nodes. (Staying on track here: there are three different types of executive roles that nodes can perform on the network. No exclusivity is required among these roles, but for the sake of resource consideration, any one given entity should assess its likely capabilities before committing to the network, as there are penalties for those that fail to follow through on their storage obligation.)
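As a rough sketch of what a lookup-serving node does (the names and structure here are purely illustrative, in the spirit of IPFS-style content addressing, not this protocol's final spec):

```python
import hashlib

class DataMiner:
    # Toy content-keyed lookup service: the key is the hash of the content itself
    def __init__(self):
        self.index = {}

    def announce(self, content: bytes) -> str:
        key = hashlib.sha256(content).hexdigest()
        self.index[key] = content
        return key

    def lookup(self, key: str):
        # The 'data mining' work: answering content-key lookup requests
        return self.index.get(key)

node = DataMiner()
cid = node.announce(b"stamped content")
assert node.lookup(cid) == b"stamped content"
```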

Data Mining Nodes and FPGAs: An Introduction to Caribou2

Never heard of it before?

No problem.

Behold - Caribou: Intelligent Distributed Storage (presented by the fellows over at ETH Zurich! What great fortune we have to be surrounded by innovative colleagues such as those).

We will not be using this paper as the blueprint for this protocol's distributed data architecture, but many of the concepts outlined within it (specifically those pertaining to the functionality and usage of FPGAs to enhance the network's necessary operations) will be referenced as an example of how this protocol's backbone distributed data architecture will be designed to provide a similarly strong incentive for FPGA use.

Starting From the Abstract

Without irony, the 'Abstract' presents itself as the best starting point for defending our hypothesis: that a uniquely useful marriage between FPGA deployment and distributed data storage (within the context of the data mining described above, in the form of content-keyed lookups) can be facilitated with relative ease on top of a DHT, IPFS, i2p, or similarly routed, distributed, and decentralized overlay protocol architecture.

Specifically, the Abstract of the paper states:

"The ever increasing amount of data being handled in data centers causes an intrinsic inefficiency: moving data around is expensive in terms of bandwidth, latency, and power consumption, especially given the low computational complexity of many database operations."

Okay, we can agree with this so far.

"In this paper, we explore near-data processing in database engines, i.e., the option of offloading part of the computation directly to the storage nodes. We implement our ideas in Caribou, an intelligent distributed storage layer incorporating many of the lessons learned while building systems with specialized hardware. Caribou provides access to DRAM / NVRAM storage over network through a simple key-value store interface, with each storage node providing high-bandwidth near-data processing at line rate an fault tolerance through replication. The result is a highly efficient, distributed, intelligent data storage that can be used both to both boost performance and reduce power consumption and real estate usage in the data center thanks to the microserver architecture adopted."
