This is the age of AI, and I am back with another examination of decentralized AI (DeAI). In this two-part series, I will look at how trusted execution environments (TEEs) can power verifiable privacy, drawing on references from Oasis Labs and the Oasis Protocol.
Oasis has been developing AI frameworks as an extension of its privacy-first philosophy and expertise since long before the crypto-AI landscape gained prominence. In an earlier article, I briefly discussed why adopting a decentralized approach to working with LLMs could be a game-changer over the traditional, centralized setup.
One of the most critical advantages of DeAI is the ability to provide provenance for AI models. As a result, we gain valuable insights that reflect the blockchain principles of transparency and immutability:
- Source, as in which pre-trained foundation model is used
- Method, as in what additional training steps are used to specialize the model
- Content, as in what training data has been used
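The three provenance dimensions above can be captured in a compact record whose hash is cheap to publish on-chain. The following is a minimal sketch; the `ProvenanceRecord` class and its field names are illustrative assumptions, not part of any Oasis API.

```python
import hashlib
import json
from dataclasses import dataclass

@dataclass
class ProvenanceRecord:
    """Hypothetical provenance record for a specialized AI model."""
    source: str   # which pre-trained foundation model was used
    method: str   # what additional training step specialized it
    content: str  # hash of the training data that was used

    def digest(self) -> str:
        # Deterministic serialization, so any verifier who rebuilds the
        # record from the same fields arrives at the same on-chain hash.
        payload = json.dumps(self.__dict__, sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

record = ProvenanceRecord(
    source="example-foundation-model",
    method="example-finetune-step",
    content=hashlib.sha256(b"example training data").hexdigest(),
)
print(record.digest())
```

Publishing only the digest keeps sensitive training data off-chain while still committing to it immutably.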
Oasis posits that GPU-enabled trusted execution environments (TEEs), combined with its self-developed runtime off-chain logic (ROFL) framework, can help build sustainable, specialized AI models whose provenance information is verifiably published on-chain. While on-chain logic is handled by Oasis Sapphire using TEEs powered by Intel SGX v1, Oasis ROFL extends confidential computing to Intel TDX-powered TEEs that can perform GPU-accelerated ML training and inference tasks in an integrity-protected, attestable environment.
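The flow described above, training inside an attestable TEE and publishing a verifiable commitment on-chain, can be sketched as follows. This is a conceptual mock-up assuming a simplified pipeline; every function name here is a hypothetical stand-in, not an actual ROFL or Sapphire API, and the attestation is simulated with plain hashes rather than real TDX quotes.

```python
import hashlib
import json

def train_in_tee(dataset: bytes) -> bytes:
    """Stand-in for GPU-accelerated training inside a TDX TEE.
    Here we just derive deterministic 'weights' from the data."""
    return hashlib.sha256(b"model:" + dataset).digest()

def attest(model_weights: bytes) -> dict:
    """Mock attestation binding the enclave measurement to its output.
    A real TDX quote would be signed by the hardware and checked by a
    verifier before the result is accepted on-chain."""
    return {
        "measurement": hashlib.sha256(b"enclave-code").hexdigest(),
        "model_hash": model_weights.hex(),
    }

def publish_on_chain(attestation: dict) -> str:
    """Stand-in for an on-chain contract call recording provenance;
    returns a receipt hash over the attestation payload."""
    payload = json.dumps(attestation, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

weights = train_in_tee(b"example dataset")
receipt = publish_on_chain(attest(weights))
print(receipt)
```

The key design point is the binding: the attestation ties the code that ran (the measurement) to the model it produced, so anyone can later verify which enclave produced which weights.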
This clears the path for decentralized marketplaces for AI models and services, with built-in openness, transparency, and community governance, plus unique selling points such as confidentiality of sensitive data, freedom from bias, and fair compensation for the use of data and models.
In the concluding part of the series, we will walk step by step through a proof of concept that uses off-chain GPU-enabled TEEs and ROFL to create specialized AI models.
Top comments (5)
The provenance angle is honestly one of the most underrated benefits of DeAI.
Being able to trace what model was used, how it was trained, and what data shaped it, all without exposing any of that raw information, is huge. TEEs and ROFL make that actually doable, especially with GPU support for real training/inference.
If decentralized AI is going to be taken seriously, this mix of verifiability and privacy is exactly what’s needed. Looking forward to part 2.
This is a super strong primer on why TEEs (Trusted Execution Environments) + the Oasis network are really powerful together! It’s one of the clearest ways to do verifiable, privacy-preserving AI on-chain.
Great introduction to how TEEs can underpin decentralized, verifiable AI systems.
I especially liked the breakdown of provenance-data (source model → method → content) and how the Oasis Labs / Oasis Sapphire combination uses GPU-enabled TEEs + on-chain logic to bring both privacy and trust to model training/inference.
Great breakdown, the focus on verifiable provenance is spot on. Oasis’s combo of Sapphire TEEs for on-chain logic and ROFL’s GPU-enabled off-chain TEEs creates a strong foundation for decentralized AI that’s both private and provably correct.
TEEs here aren’t just about confidentiality, they enable trustless model creation, transparent provenance, and future decentralized marketplaces where data and model creators are fairly compensated.