Last year, I began digging seriously into trusted execution environments (TEEs) as a foundation for blockchain security and as a privacy-preserving technique. What I found convinced me of the viability of TEEs. Below, I briefly recap my reasoning and then share fresh evidence that bolsters my belief in verified TEEs.
Privacy scorecard: TEEs and others
For a long time, thanks to the preference of Ethereum and its L2s for zero-knowledge proofs (ZKPs), TEEs remained on the periphery. With other privacy solutions in the mix too, TEEs captured the attention of very few.
This perception has been changing, gradually but surely, as research demonstrates that TEEs can serve as crucial infrastructure for building the next generation of web3 and AI.
The success and popularity of the five editions of the Afternoon TEE Party (Devcon Bangkok in late 2024, then ETHDenver, Token2049 Dubai, EthCC Cannes, and Devconnect Buenos Aires across 2025) are also a testament to how steadily TEEs are gaining recognition and acceptance.
Remote Attestation: Integral TEE Component
How TEEs function is as simple as the image above shows. But verifiability is always essential, and this is where remote attestation comes in. Attestation, in tandem with reproducible builds, critically strengthens the integrity of and trust in TEEs: reproducible builds guarantee that software built from the same source code always produces identical binaries, so an attested measurement can be traced back to auditable source.
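To illustrate the reproducible-builds half of that claim, here is a minimal Python sketch that hashes a locally rebuilt artifact and compares it against an attested measurement. The function names and the use of SHA-256 as the measurement are my own assumptions for illustration, not any specific vendor's scheme:

```python
import hashlib

def measure_binary(path: str) -> str:
    """Return the SHA-256 digest of a build artifact, hex-encoded."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def matches_attested(path: str, attested_measurement: str) -> bool:
    """True iff a local reproducible build hashes to the attested value."""
    return measure_binary(path) == attested_measurement.lower()
```

If the build is truly reproducible, anyone can rebuild from source and confirm `matches_attested` returns `True` against the measurement in the quote.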
Virtual machines (VMs) and cryptography are the backbone of blockchain technology, and the crux of the matter is that protocols need remote attestation to mitigate vulnerabilities. Oasis Foundation Director Jernej Kos has published a deep technical analysis of the remote attestation process.
Attestation Is Not Enough
New research consolidates the case for TEEs as a prominent player in blockchain security and privacy. It underlines how attestation needs to be fortified with other mechanisms to make TEEs truly formidable.
The guarantees TEEs provide are well established:
- Isolated execution: code runs privately, with all off-chip state fully encrypted.
- Per-CPU root of trust: cryptographic keys fused into the CPU itself, used to encrypt data and sign attestation messages.
- Remote attestation: proof that a specific binary is running in a specific enclave, as mentioned above.
Let's now examine why attestations may fall short of expectations, what the missing pieces are that need to be addressed, and a potential solution.
What Attestation Actually Proves
Unless you are a hardware security expert, verifying an SGX/TDX quote is a stiff challenge. It involves parsing a multi-KB binary blob, extracting fields, fetching collateral, checking the FMSPC, interpreting the TCB status, validating certificate chains, and more. A mistake at any one of these steps can collapse the entire security model.
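To give a flavor of just the parsing step, here is a simplified Python sketch that extracts a few fields from an SGX ECDSA quote. The offsets follow Intel's published layout (a 48-byte header, then a 384-byte report body with MRENCLAVE at body offset 64 and MRSIGNER at 128), but everything that makes verification hard, the signature check, the certificate chain, and the TCB collateral, is deliberately omitted:

```python
import struct

HEADER_LEN = 48        # SGX ECDSA quote header
REPORT_BODY_LEN = 384  # sgx_report_body_t

def parse_quote(quote: bytes) -> dict:
    """Extract a few identity fields from a raw SGX ECDSA quote.

    This is only the first step of verification; signature,
    certificate-chain, and collateral checks are not performed.
    """
    if len(quote) < HEADER_LEN + REPORT_BODY_LEN:
        raise ValueError("quote too short")
    version, att_key_type = struct.unpack_from("<HH", quote, 0)
    body = quote[HEADER_LEN:HEADER_LEN + REPORT_BODY_LEN]
    return {
        "version": version,            # quote format version
        "att_key_type": att_key_type,  # 2 == ECDSA-P256
        "mr_enclave": body[64:96].hex(),   # measurement of the code
        "mr_signer": body[128:160].hex(),  # identity of the signer
    }
```

Even this toy extracts only four fields; a real verifier must interpret dozens more and fetch external collateral before any of them can be trusted.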
Even if you manage to execute the whole process without failure, the validation holds only for the moment it ran; nothing is guaranteed for the times when verification is not running:
- So, the measurement was correct then
- The hardware TCB was acceptable then
- The quote presented by the operator was fresh then
None of these guarantees holds by default beyond that moment. What we usually get instead is a presentation of raw attestation data and a row of green checkmarks in the name of verification. The burden of proof rests on the user, and anyone who isn't an expert simply doesn't know any different.
Missing Pieces & Possible Solution
A deep dive into almost any TEE deployment that claims to be verified reveals at least half a dozen critical gaps.
Freshness & Liveness: A quote, once validated, is not refreshed automatically; the old, previously verified one keeps being used unless a new one is specifically requested.
State Continuity & Anti-Rollback: A malicious host can restart an enclave and feed it an older version of its encrypted state, making stale data look live. Only by anchoring the enclave to a live ledger as its timekeeper can the attested code be assured that the state it sees is current.
TCB Governance: As demonstrated by recent security exploits, manufacturers, like Intel in this case, might consider physical attacks (WireTap, Battering RAM) out of scope. This calls for stricter threat models on the verifier side, where continuous policy checks and additional on-chain measures can plug the trust gap left by outdated or insecure "trusted" CPUs.
Operator Binding: Attestation verifies what code is running, but provides no accountability for who is running it. Binding the hardware's cryptographic identity to a slashable on-chain operator identity, with economic cost for malicious acts, can be the answer here.
Upgrade History: A transparent upgrade history must be a prerequisite for guaranteeing data confidentiality. A currently secure version is not enough; there must be a track record of validly attested versions so that code continuity, bug fixes, and the like can be checked.
Code Provenance: I mentioned reproducible builds earlier. Unless someone can independently compile the code and verify that its hash matches the deployed version, attestations remain incomplete.
Policy Enforcement: Only clearly defined, enforced policies can state without ambiguity what a verified TEE means - which binary should run, which hardware is acceptable, how often to re-attest, which locations are approved, and so on.
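Several of these gaps (freshness, TCB governance, policy enforcement) boil down to checks a verifier could run continuously. A minimal sketch in Python, where every field name and threshold is a hypothetical policy knob of my own choosing, not a standard schema:

```python
import time
from dataclasses import dataclass

@dataclass
class AttestationPolicy:
    expected_mr_enclave: str    # which binary may run
    allowed_tcb_statuses: set   # e.g. {"UpToDate"}
    max_quote_age_s: int        # freshness bound on the quote itself
    reattest_interval_s: int    # how often a fresh quote is required

def evaluate(policy, mr_enclave, tcb_status, quote_ts, last_verified_ts, now=None):
    """Return a list of policy violations; an empty list means compliant."""
    now = time.time() if now is None else now
    problems = []
    if mr_enclave != policy.expected_mr_enclave:
        problems.append("measurement mismatch")
    if tcb_status not in policy.allowed_tcb_statuses:
        problems.append(f"TCB status {tcb_status!r} not allowed")
    if now - quote_ts > policy.max_quote_age_s:
        problems.append("quote is stale")
    if now - last_verified_ts > policy.reattest_interval_s:
        problems.append("re-attestation overdue")
    return problems
```

The point is not the specific fields but that the policy lives outside the enclave and is evaluated continuously, rather than once at deployment time.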
Consensus as Verifier
The discussion so far underlines that TEE architectures need infrastructure that addresses these gaps, and does so automatically. A Byzantine Fault Tolerant (BFT) attestation-verifier network fits this role.
While every client parsing every quote all the time might be ideal, it is not feasible. So, what then? The BFT model, where trust in attestation validity is established by the consensus of many. The process looks like this:
-> Stake-bearing, slashable nodes submit enclave attestations and verification evidence
-> A fault-tolerant set of validators collectively verifies hardware TCB, measurements, policies, freshness, and so on
-> Consensus agreement on verified identities, operators, and attestation policies becomes the on-chain state
Once anyone can verifiably query that on-chain state, attestations stop being static and complex and become usable on-chain signals.
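The aggregation step above can be sketched as a toy quorum rule: an enclave identity is accepted only if more than two-thirds of the validator set verified the same measurement. This is a simplified model of BFT aggregation, not any particular chain's implementation:

```python
from collections import Counter

def quorum_decision(votes: dict, total_validators: int):
    """Accept a measurement only with a >2/3 supermajority.

    `votes` maps validator id -> the enclave measurement that
    validator verified, or None if its verification failed.
    Returns the accepted measurement, or None if no quorum.
    """
    quorum = (2 * total_validators) // 3 + 1
    tally = Counter(m for m in votes.values() if m is not None)
    if not tally:
        return None
    measurement, count = tally.most_common(1)[0]
    return measurement if count >= quorum else None
```

With this rule, a minority of faulty or malicious validators cannot push a bogus attestation into the on-chain state, which is exactly the property a single self-reported quote lacks.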
Final words
Oasis is a prime example of the kind of BFT attestation-verifier network I have been describing. It is just one example, though; the principle applies to anyone building on TEEs.
So, what makes TEEs truly secure enclaves? Moving away from mere isolated black boxes and implementing the processes that turn them into integrated, verifiable components within larger trust systems.
In fact, all of this can go beyond simple attestation checks, and what I described for on-chain can be extended to trusted off-chain applications. But that is a story for another day.