Mat Kwa

The most important project in the world: Overview

Okay, I admit my last article didn't have a lot of project substance. I'm going to make up for it this time. What on earth am I on about with such a grandiloquent headline?

Decentral Network
Photo by Shubham Dhage on Unsplash

The best short answer I can think of is a decentralised database. Not a new type of DBMS... we are using Postgres, and in theory any other SQL database should work as well... but a consensus layer on top of the DBMS. It manages write access and thereby bypasses the bottleneck for scaling databases: conflict resolution.

buffaloes in conflict
Photo by Richard Lee on Unsplash

I am not going to go into the details of database locks and concurrency here, or some DB engineer will have a field day at my expense. However, it is common knowledge that SQL databases suffer from scalability issues because of the way conflicting operations are managed. If two users want to manipulate the same recordset concurrently (among other conflicting constellations), the DBMS has to consistently produce a result. For that, the records need to be locked (among other solutions), and that is only possible within limits. Hence a database server can only handle hundreds to thousands of concurrent collaborators.
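To make the locking bottleneck concrete, here is a minimal, self-contained demo (using Python's built-in sqlite3 rather than Postgres, and nothing to do with the apebase itself): two connections want to write the same row, and the second writer is simply blocked until the first one commits.

```python
import os
import sqlite3
import tempfile

# Two connections to the same database file, with a short lock timeout
# so the blocked writer fails fast instead of waiting.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
a = sqlite3.connect(path, timeout=0.1)
b = sqlite3.connect(path, timeout=0.1)

a.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance INTEGER)")
a.execute("INSERT INTO account VALUES (1, 100)")
a.commit()

a.execute("BEGIN IMMEDIATE")  # writer A takes the write lock
a.execute("UPDATE account SET balance = balance - 10 WHERE id = 1")

try:
    b.execute("BEGIN IMMEDIATE")  # writer B wants the same lock...
    blocked = False
except sqlite3.OperationalError:  # ...and times out: "database is locked"
    blocked = True

a.commit()  # A releases the lock; now B can write
b.execute("UPDATE account SET balance = balance + 5 WHERE id = 1")
b.commit()

print(blocked)  # True: the DBMS serialised the conflicting writers
```

Postgres resolves this at row level rather than database level, but the principle is the same: conflicting writers queue up behind locks, and that queueing is exactly what caps the number of concurrent collaborators.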

a clock
Photo by Thomas Bormans on Unsplash

Now picture this conflict being moved into a separate layer that exists for this explicit purpose, and you are pretty much there. How could an additional layer possibly perform better than the DBMS? Well, here comes the catch: we aren't going to do this in real time. 24 hours... for the whole world to see... is how long the actual execution of a write access database operation is going to take. There is more than one reason for that, but fully digest this first: it's only the write access that takes so long.
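The mechanism as I understand it can be sketched in a few lines. Everything below is my own illustration, not apebase code: writes sit in a queue for the 24-hour window the article describes, and only afterwards are they executed against the underlying DBMS.

```python
from dataclasses import dataclass

REVIEW_WINDOW = 24 * 3600  # seconds a write stays visible before execution

@dataclass
class PendingWrite:
    sql: str
    submitted_at: float  # seconds since some epoch

class ConsensusLayer:
    """Hypothetical sketch: writes queue up in the open; only after the
    review window has passed are they applied to the database."""

    def __init__(self):
        self.queue: list[PendingWrite] = []
        self.applied: list[str] = []

    def submit(self, sql: str, now: float) -> None:
        self.queue.append(PendingWrite(sql, now))

    def tick(self, now: float) -> None:
        ripe = [w for w in self.queue if now - w.submitted_at >= REVIEW_WINDOW]
        for w in ripe:
            self.applied.append(w.sql)  # a real layer would run this on Postgres
            self.queue.remove(w)

layer = ConsensusLayer()
layer.submit("UPDATE apes SET bananas = 7 WHERE id = 1", now=0)
layer.tick(now=3600)            # one hour in: still pending
print(len(layer.applied))       # 0
layer.tick(now=REVIEW_WINDOW)   # 24 hours in: executed
print(len(layer.applied))       # 1
```

Note that reads never touch this queue; they hit the database directly, which is why only writes pay the 24-hour price.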

many books
Photo by Artem Maltsev on Unsplash

Have you ever contributed to Wikipedia? Me neither, but I have surely read the hell out of it. What I'm trying to express (besides my gratitude for the existence of Wikipedia and its contributors) is that a big enough collection of data is valuable even if you only have read access. Additionally, and this is a bit hard to grasp without the solution at hand, the write operations currently under scrutiny in our consensus layer can be executed privately at any point in time. If you are confident a record update, creation or deletion will make it, you can execute it locally ahead of time.
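That "execute it locally ahead of time" idea is essentially optimistic, speculative execution. Here is a toy sketch of it, entirely my own construction: a client keeps a local overlay of pending writes, so its reads see the speculative state while consensus deliberates, and the overlay is discarded if the write is rejected.

```python
class LocalOverlay:
    """Hypothetical sketch: committed state confirmed by consensus,
    plus an overlay of optimistic local writes layered on top."""

    def __init__(self, committed: dict):
        self.committed = committed  # state the consensus layer has settled
        self.pending: dict = {}     # writes executed locally ahead of time

    def write(self, key, value):
        self.pending[key] = value

    def read(self, key):
        # Local reads prefer the speculative value if one exists.
        return self.pending.get(key, self.committed.get(key))

    def resolve(self, key, accepted: bool):
        value = self.pending.pop(key)
        if accepted:
            self.committed[key] = value  # the write made it through the layer
        # if rejected, reads simply fall back to the committed state

db = LocalOverlay({"bananas": 3})
db.write("bananas", 7)
print(db.read("bananas"))  # 7 locally, while the world still sees 3
db.resolve("bananas", accepted=False)
print(db.read("bananas"))  # back to 3: the speculative write was rejected
```

The design choice here is that rejection costs nothing globally: only the optimistic client has to reconcile, everyone else never saw the write.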

apebase coming soon

Wikipedia is a good example for another reason: it is somewhat the unstructured equivalent of the apebase. There you have it, it's called the apebase. It's a humble nod to web3 in general and... Caesar... Central Authority Go!!!

Where were we? Wikipedia, right. A huge collection of more or less unstructured data, at its core blobs of text. MediaWiki makes this as structured as it gets, but a wiki page simply doesn't have the rigid structure of a relational SQL table, and in particular the consensus mechanism isn't as structured as that of the apebase... and it is ultimately centralised:

Media Wiki info
From the MediaWiki Website

The decentralisation aspect of the apebase is now the crux. Our buck don't stop for nobody. With our consensus layer we don't just achieve scaling at social media level but a total democratisation of data. Every member of the apebase can vote on every write access operation... provided they have at least one Platoken left. Yeah, well, it's a web3 project, so you had better expect tokenisation. But unlike any other fungible crypto network I am aware of, our tokens don't get minted out of thin air. The actual database transactions are the tokens.
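The voting rule, as stated above, is simple: hold at least one Platoken and you may vote on any pending write. Here is a minimal sketch of that rule; the one-token threshold and the name "Platoken" come from the article, while the simple-majority tally is purely my assumption to make the example complete.

```python
def may_vote(balances: dict[str, int], member: str) -> bool:
    """A member may vote if they hold at least one Platoken (per the article)."""
    return balances.get(member, 0) >= 1

def tally(votes: dict[str, bool], balances: dict[str, int]) -> bool:
    """Count only eligible members; simple majority is my own assumption."""
    eligible = {m: v for m, v in votes.items() if may_vote(balances, m)}
    ayes = sum(1 for v in eligible.values() if v)
    return ayes > len(eligible) / 2

balances = {"alice": 3, "bob": 1, "carol": 0}
votes = {"alice": True, "bob": True, "carol": False}  # carol holds no tokens

print(may_vote(balances, "carol"))  # False: her vote is ignored
print(tally(votes, balances))       # True: alice and bob carry the write
```

How the real tally works, and how transactions themselves become the tokens, is exactly what the next article is for.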

Say whaaat? Well... for that, my friend and avid reader, you will have to wait till the next article. Please don't forget to follow me, and I could do with a like.

Cheers,

Mat Kwa
