Early days of SSI: is everything a VC?
There was a boom of self-sovereign identity projects around 2018-2020, and all of them shared one common bias: everything was viewed as a Verifiable Credential. Every single user data point was supposed to be represented as a W3C Verifiable Credential or similar technology. When all you have is a hammer, everything looks like a nail, and that didn't work well.
The majority of user data is not VC-ifiable. VC-ification makes sense for data points with an attestation component that needs to be verifiable by a third party, which creates the need for the classical trust-triangle interaction.
So, we must admit that sovereign data is much broader than VCs. We still need to share user data while honoring the SSI principles of user-centricity and consent.
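For intuition, here is the shape of a data point where VC-ification does pay off: an attestation issued by one party about another, meant to be checked by a third. This is a minimal sketch of the standard W3C VC structure with hypothetical values, written as a TypeScript literal:

```typescript
// A minimal W3C Verifiable Credential shape (all values are hypothetical).
// The envelope pays off only when a third party must verify the claim.
const degreeCredential = {
  '@context': ['https://www.w3.org/2018/credentials/v1'],
  type: ['VerifiableCredential', 'UniversityDegreeCredential'],
  issuer: 'did:example:university',          // the attesting party of the trust triangle
  issuanceDate: '2020-01-01T00:00:00Z',
  credentialSubject: {
    id: 'did:example:alice',                 // the holder the claim is about
    degree: { type: 'BachelorDegree', name: 'BSc Computer Science' },
  },
  proof: { /* issuer signature, e.g. a Data Integrity proof */ },
};
```

By contrast, a user's notes, photos, or app settings have no third-party verifier, so wrapping them in this envelope adds cost without adding trust.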
Now we need a persistence and application layer that enables this polyglot data setup.
The missing persistence layer
Self-sovereign identity has the bold goal of creating the missing identity layer of the Internet. But identity is not the web's only gap: the Internet was designed on top of stateless protocols, and those protocols lack decentralized persistence. So even Verifiable Credentials and identity data like DID documents need to be stored somewhere, and to satisfy the SSI idea, you probably need that persistence in a decentralized manner.
IPFS is not the rescue
IPFS was a pioneer of decentralized data storage outside of blockchain. It focuses on content-addressable data, shapes decentralized application architecture, and enables a lot of dApps. Still, IPFS struggles in a few areas:
- Privacy: all data is open and accessible to everyone, as on most blockchains. The only way to stay private is full client-side encryption before upload (see the sketch after this list), ideally post-quantum, because IPFS promises immutable storage where data stays forever.
- Incentives: a much bigger problem is how to convince node maintainers to store and replicate your data. If no node pins your data, your data is gone. Blockchains have an implicit economic model; vanilla IPFS does not.
- Heavy: it is almost impossible to run IPFS in an on-device mode; the recommended hardware specs speak for themselves:
> For a general IPFS system, not accounting for specific workload requirements or use cases, high core count processors and a minimum of 32GB of memory is recommended. A tiered storage system using NVMe, SSD, and HDD devices is ideal for data storage.
- Global: IPFS is designed as decentralized global storage, not as personal storage.
- Peer-to-peer: the P2P nature forces users to install and run a client for data access, which limits adoption and sets a relatively high technical barrier.
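To make the privacy point concrete, here is a minimal sketch of encrypting data before adding it to IPFS. It uses Node's built-in crypto module for AES-256-GCM and assumes the ipfs-http-client package (or its kubo-rpc-client successor) talking to a local node on the default API port; note that AES-GCM itself is not post-quantum, so treat this as an illustration rather than a hardened implementation:

```typescript
// Encrypt before pinning: everything on IPFS is public and effectively permanent.
import { createCipheriv, randomBytes } from 'node:crypto';
import { create } from 'ipfs-http-client';

const key = randomBytes(32); // 256-bit key, kept only on the user's device

function encrypt(plaintext: Buffer): Buffer {
  const iv = randomBytes(12); // fresh nonce per record
  const cipher = createCipheriv('aes-256-gcm', key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext), cipher.final()]);
  // Prepend iv and auth tag so the blob is self-contained for later decryption.
  return Buffer.concat([iv, cipher.getAuthTag(), ciphertext]);
}

const ipfs = create({ url: 'http://127.0.0.1:5001' }); // local IPFS node API
const { cid } = await ipfs.add(encrypt(Buffer.from('my private note')));
console.log(`stored encrypted, but still public and permanent: ${cid}`);
```

The CID now addresses ciphertext, so the key has to travel out of band to anyone who should be able to read the data.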
Ceramic-like services
Ceramic takes IPFS to the next level: it fixes the privacy concern with Lit Protocol and solves the incentive problem to some degree. However, the network is private, permission-based, and controlled by a private company. So we are still not there.
What kind of storage do we need for sovereign data?
So, what does an excellent persistence layer for sovereign data look like? (The hypothetical interface sketch after the list condenses these requirements into code.)
- Content-polyglot: to empower and enable wallets, it should be capable of storing VCs as well as other structured data, files, blobs, and media.
- Local-first and offline-first: the user should be able to self-host and own the data; in a perfect setup, the system should be capable of running on the user's own hardware.
- Device- and mobile-friendly: a continuation of the previous requirement. More and more users are mobile-first, and quite often mobile-only.
- Extendable in a decentralized manner: it is only a question of time before the capacity of a mobile device is not enough to handle all the data the user wants to carry. We want to extend the local-first model with decentralized and online capabilities.
- Secure: we want edge encryption and optional synchronization. The user should decide what data to share and replicate with decentralized nodes and other network users. Client-side end-to-end encryption is preferable.
- Interoperable and open: storage should work over an open, standard data protocol with transparent, user-centric, protocol-based data exchange.
- Reactive and proactive: in a decentralized and data-intensive exchange, it is essential to have a reactive, streaming approach to receive data changes and keep devices in sync.
- Findable: according to the FAIR principles, data should be not only accessible and reusable but also friendly to discovery and recall.
- Queryable: there should be a way to run intelligent queries across structured data.
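The interface below is purely hypothetical (every name in it is an assumption, not an existing library), but it condenses the wish list above into a concrete TypeScript contract:

```typescript
// Hypothetical contract for a sovereign-data store; names are illustrative only.
interface SovereignStore {
  // Content-polyglot: structured records and raw blobs under one roof.
  put(record: { schema?: string; dataFormat: string; data: Uint8Array }): Promise<string>;
  get(id: string): Promise<Uint8Array | undefined>;

  // Queryable and findable: filter by the metadata that makes records self-describing.
  query(filter: { schema?: string; dataFormat?: string }): Promise<string[]>;

  // Reactive and proactive: stream changes to keep devices in sync.
  subscribe(filter: { schema?: string }, onChange: (id: string) => void): () => void;

  // Extendable in a decentralized manner: replicate to remote nodes the user trusts.
  sync(remote: { endpoint: string }): Promise<void>;
}
```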
How does DWN fit into the game?
- Content-polyglot: DWN is suitable for storing structured data, binary data, and potentially files. The system expects self-describing data: you provide a data schema together with the data, and the schema is one of the indexing keys.
- Local-first and offline-first: the reference implementation is built on top of LevelDB, and an SQL-based setup is possible. A user can run a standalone DWN server or use an app with an embedded DWN built with the DWN SDK (see the sketch after this list).
- Extendable in a decentralized manner: DWN is built as a relay-based decentralized solution, the sweet spot between a centralized server and a pure peer-to-peer setup. On top of relays, you can construct flexible topologies, from self-hosted solo servers and in-memory or even browser-based agents to a full-scale sync data mesh.
- Secure: DWN offers a DID-based auth layer and full-scale permission and data-access management out of the box. End-to-end client-side edge encryption is one of the out-of-the-box features.
- Interoperable and open: DWN is an open protocol under the DIF umbrella. It is fully open source and driven by a proactive and open community. DWN encourages protocol-driven and data-schema-driven development.
- Reactive and proactive: DWN has a sync mechanism as the propagation mechanism, currently under active development. The DWN server offers a WebSocket interface. A prominent feature is Hooks, which offer transport-agnostic subscriptions to data changes and events.
- Findable: currently, discovery of DWNs heavily depends on the DID document, quite similar to the DIDComm v2 service-endpoint mechanism.
- Queryable: DWN queries are quite limited and currently require external indexing and query solutions.
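As a sketch of the embedded, local-first mode mentioned in the list, here is what writing and querying records can look like with TBD's Web5 SDK (@web5/api), which bundles a DWN agent into the app. The calls follow the SDK's documented records API, but the schema URI and payload are made up, and exact option names may drift between SDK versions:

```typescript
// Minimal sketch, assuming the @web5/api records API; values are illustrative.
import { Web5 } from '@web5/api';

// connect() creates (or loads) a DID and spins up an embedded, local-first DWN.
const { web5, did } = await Web5.connect();

// Content-polyglot write: schema + dataFormat make the record self-describing.
const { record } = await web5.dwn.records.create({
  data: { text: 'hello, sovereign data' },
  message: {
    schema: 'https://example.org/schemas/note', // hypothetical schema URI
    dataFormat: 'application/json',
  },
});

// Query by schema, one of the indexed keys mentioned above.
const { records } = await web5.dwn.records.query({
  message: { filter: { schema: 'https://example.org/schemas/note' } },
});
console.log(did, await records?.[0]?.data.json());
```

The schema-plus-dataFormat pairing is what makes records both content-polyglot and queryable, and the same agent can later sync to a remote DWN server acting as a relay.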