Hello fellow adventurers.
Today I'd like to get a bit more technical and describe how hive-adventures will synchronise its data with the Hive blockchain.
The application will read `custom_json_operation`s of the following format:
```json
{
  "type": "custom_json_operation",
  "value": {
    "id": "hive-adventure",
    "json": "...",
    "required_auths": [
      "anonymous"
    ],
    "required_posting_auths": []
  }
}
```
All these operations are stored locally and then presented to the players as HTML.
The `id` must be set to `hive-adventure`. The `json` field must contain the game text in the markdown format I described in the announcement blog post. Either `required_auths` or `required_posting_auths` must be set to the game's author. Currently only one author is supported.
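The rules above can be sketched as a small validation step, run before an operation is stored locally. This is a minimal illustration, not the real implementation; the function name and dictionary access are assumptions based on the operation format shown above.

```python
GAME_ID = "hive-adventure"

def is_game_operation(op: dict) -> bool:
    """Check whether an operation carries game text for hive-adventures.

    Hypothetical helper: accepts a decoded operation dict of the shape
    shown in the example above.
    """
    if op.get("type") != "custom_json_operation":
        return False
    value = op.get("value", {})
    if value.get("id") != GAME_ID:
        return False
    # The json field must carry the game text.
    if not value.get("json"):
        return False
    # Exactly one author, named in either of the two auth lists.
    authors = value.get("required_auths") or value.get("required_posting_auths")
    return bool(authors) and len(authors) == 1
```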
On startup, the application looks at the largest block number stored in the local database. It then performs synchronisation in three steps:
First, the application enters massive sync mode. It checks the current head of the remote we're syncing with; if the remote's head is ahead of ours, we synchronise with that remote, 1,000 blocks at a time, until our head matches the remote's.

When the massive sync is complete, the application goes into sync mode. It's similar to massive sync, but it synchronises one block at a time. After each block, it checks whether the block's timestamp is less than one minute behind the current time. If it is, sync mode is finished.
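The two catch-up phases can be sketched roughly like this. The helper names (`fetch_blocks`, `fetch_block`, `store`) are placeholders for whatever the real application uses to talk to the remote and to its local database; only the loop structure reflects the description above.

```python
import datetime

BATCH = 1000  # blocks fetched per request in massive sync

def massive_sync(local_head, remote_head, fetch_blocks, store):
    """Catch up in 1,000-block batches until we reach the remote's head."""
    while local_head < remote_head:
        upper = min(local_head + BATCH, remote_head)
        store(fetch_blocks(local_head + 1, upper))
        local_head = upper
    return local_head

def sync(local_head, fetch_block, store, now=datetime.datetime.utcnow):
    """One block at a time; stop once a block is younger than one minute."""
    while True:
        block = fetch_block(local_head + 1)
        store([block])
        local_head += 1
        if now() - block["timestamp"] < datetime.timedelta(minutes=1):
            return local_head
```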
After all this the application goes into live mode.
In live mode, the application synchronises with the remote one block at a time, just like in sync mode. This time, however, it sleeps between blocks to wait just long enough for the next block to be produced.
If live mode detects that it has fallen 1,000 blocks or more behind, the application goes back to massive sync mode. This shouldn't normally happen, but it can if, for example, the network goes down. This fallback is an important optimisation for quickly catching up with the current head in such situations.
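Live mode, including the fallback to massive sync, might look roughly like this. Again a sketch under assumptions: the helper names are invented, and the three-second block interval is Hive's nominal block time.

```python
import time

BLOCK_INTERVAL = 3        # seconds between Hive blocks
FALLBACK_THRESHOLD = 1000 # blocks behind before reverting to massive sync

def live(local_head, get_remote_head, sync_one_block, sleep=time.sleep):
    """Follow the chain one block at a time, sleeping between blocks.

    Returns the local head when a large delay is detected, so the
    caller can switch back to massive sync mode.
    """
    while True:
        remote_head = get_remote_head()
        if remote_head - local_head >= FALLBACK_THRESHOLD:
            return local_head  # fall back to massive sync
        if remote_head > local_head:
            sync_one_block(local_head + 1)
            local_head += 1
        else:
            # Wait just long enough for the next block to be produced.
            sleep(BLOCK_INTERVAL)
```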
The only place where an error causes the application to quit is during massive synchronisation. It's then up to systemd to restart the app after a while, in the hope that the problem was temporary.
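The restart-after-a-while behaviour maps to systemd's `Restart=` and `RestartSec=` options. A hypothetical unit file (service name and path are illustrative, not the real deployment) could look like:

```ini
[Unit]
Description=hive-adventures synchronisation
After=network-online.target

[Service]
# Illustrative path; not the actual binary location.
ExecStart=/usr/local/bin/hive-adventures-sync
# Restart after a pause when massive sync exits on an error,
# in the hope that the problem was temporary.
Restart=on-failure
RestartSec=60

[Install]
WantedBy=multi-user.target
```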
In all these modes, the application first tries to get the data from the hafah instance and, if that fails, from the witness APIs. Hafah is the primary data source because it allows querying only specific types of operations. In my case I'm only interested in `custom_json_operation`s, so fetching only those operations is a slight network IO optimisation.
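The primary/fallback arrangement can be expressed as a small wrapper. Both fetcher arguments are placeholders for the real API clients, and the `operation_types` parameter is an assumption standing in for however hafah's filtering is actually invoked.

```python
def fetch_operations(block_num, from_hafah, from_witness):
    """Fetch the game's operations for one block.

    Try the hafah instance first (it can filter by operation type,
    saving network IO), then fall back to the witness API, which
    returns all operations and must be filtered locally.
    """
    try:
        # Hypothetical call: ask hafah for custom_json operations only.
        return from_hafah(block_num, operation_types=["custom_json_operation"])
    except Exception:
        ops = from_witness(block_num)
        return [op for op in ops if op.get("type") == "custom_json_operation"]
```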
The synchronisation application is already running on my server as described above, but its data is not yet exposed to the public.
Please let me know if you have any thoughts on synchronising with Hive.