
Tangle Network

Originally published at tangle.tools

How Tangle Verifies Work

Day 3 of the Tangle Re-Introduction Series


The hardest question in decentralized infrastructure isn't "how do we run computation" but "how do we know computation ran correctly."

This post covers what each verification mechanism actually proves, where it breaks down, and how Tangle lets developers wire it all together.

Why Not Just Use AWS?

Cloud providers cannot cryptographically prove correct execution or slash misbehaving operators.

AWS, Google Cloud, and Azure have decades of production hardening, legal accountability, and compliance certifications. For most applications, they're the right choice.

But traditional infrastructure has structural limitations:

Observation risk. Cloud providers have root access to your workloads. Insider threats aren't hypothetical; they're a documented category of breach. For workloads where the data itself is enormously valuable (trading strategies, proprietary algorithms), the risk calculus changes.

Jurisdictional exposure. Cloud providers must comply with legal demands from every government where they operate. Distributed infrastructure across jurisdictions provides options that concentration cannot.

Verification gap. Traditional providers give you audit logs they control. For high-stakes computation, you're trusting reputation rather than cryptography.

Speed of recourse. Contract enforcement takes months. For autonomous systems operating at machine speed, economic enforcement that settles in seconds is the only enforcement that matters.

Where Tangle Fits

Tangle is a verification layer where developers choose and compose verification per blueprint.

Tangle is the general-purpose layer where developers choose and configure verification for each blueprint and service they deploy. We don't prescribe a single verification mechanism. We give you the primitives and let you compose them.

A blueprint that manages signing keys needs MPC. A blueprint running private inference needs TEEs. A blueprint doing deterministic computation can use redundant execution with on-chain result comparison. Tangle supports all of these because different workloads have fundamentally different trust requirements.

The Blueprint SDK, Tangle's Rust framework for building these services, exposes this as a routing and aggregation model. You define job functions, wire them into a router, and configure how many operators must agree before a result is accepted:

use blueprint_sdk::tangle::TangleLayer;
use blueprint_sdk::{Job, Router};

// Job constants control aggregation requirements
pub const XSQUARE_JOB_ID: u8 = 0;        // Single operator
pub const VERIFIED_JOB_ID: u8 = 1;        // 2 operators must agree
pub const CONSENSUS_JOB_ID: u8 = 2;       // 3 operators (BFT quorum)

pub fn router() -> Router {
    Router::new()
        .route(XSQUARE_JOB_ID, square.layer(TangleLayer))
        .route(VERIFIED_JOB_ID, verified_square.layer(TangleLayer))
        .route(CONSENSUS_JOB_ID, consensus_square.layer(TangleLayer))
}

Each route maps a job ID to a handler function wrapped in TangleLayer, which handles ABI encoding/decoding and result submission automatically. The aggregation count (how many operators must submit matching results) is configured per job at the contract level. Same job logic, different verification guarantees.
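The handlers referenced in the router (`square` and friends) are ordinary functions over extractor types that decode the on-chain call. A minimal sketch of the shape, where `TangleArg` and `TangleResult` are simplified stand-ins for the SDK's wrappers, not the real types:

```rust
// Simplified stand-ins for the SDK's extractor and result wrappers.
pub struct TangleArg<T>(pub T);
pub struct TangleResult<T>(pub T);

// A job handler: take the decoded argument, return the value that
// TangleLayer would ABI-encode and submit on-chain.
pub fn square(TangleArg(x): TangleArg<u64>) -> TangleResult<u64> {
    TangleResult(x * x)
}
```

The point is that the handler stays verification-agnostic: whether one, two, or three operators must run it is decided by the route and contract configuration, not the job logic.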

The runner itself is straightforward:

BlueprintRunner::builder(tangle_config, env)
    .router(router())
    .producer(TangleProducer::new(client.clone(), service_id))
    .consumer(TangleConsumer::new(client.clone()))
    .run()
    .await

Producer listens for on-chain job submissions. Consumer posts results back. Everything in between is your verification logic.

The Verification Problem

Verification proves the right code ran on the right inputs and returned the real output.

When you pay someone to run computation, you're trusting that they ran the code you specified, on your inputs, and returned the real output. Each verification mechanism proves some of these properties under specific assumptions. None cover everything.

Trusted Execution Environments (TEEs)

TEEs are hardware enclaves that prove code ran correctly while keeping data hidden from operators.

TEEs are hardware enclaves that isolate code execution from the rest of the system, including the machine's operator. Intel SGX, AMD SEV-SNP, and ARM TrustZone create isolated memory regions with hardware-enforced encryption. When a TEE boots, it generates an attestation: a cryptographic proof signed by the hardware manufacturer stating what code is running. In practice, TEE attestation adds less than 5ms latency per inference call, making it viable even for latency-sensitive AI workloads.

What TEEs prove: Specific code is running (via attestation). The operator cannot observe computation (hardware-enforced memory encryption). Inputs and outputs have integrity.

What TEEs don't prove: Code correctness (attestation proves "this code ran," not "this code does what you want"). Side-channel resistance (timing and cache attacks can leak information; practical key extraction from SGX has been demonstrated). Hardware trust (if Intel or AMD are compromised or coerced, attestations become unreliable). Replay resistance (attestations need nonce-based freshness to prevent operators presenting old valid attestations for new requests).

TEEs are appropriate when confidentiality matters more than eliminating all trust, and when side-channel risk is acceptable for your threat model.
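The replay caveat above is worth making concrete: a verifier should bind every attestation to a fresh per-request nonce. The sketch below uses an illustrative struct, not a real attestation format like an SGX quote, to show the two checks:

```rust
/// Illustrative attestation report. In reality this is a
/// hardware-signed structure (e.g. an SGX quote), not a plain struct.
pub struct Attestation {
    pub code_hash: [u8; 32], // measurement of the enclave binary
    pub nonce: u64,          // challenge echoed back by the enclave
}

/// Accept only if the attested code matches what we expect AND the
/// nonce matches this request's challenge (rejecting replayed reports).
pub fn verify(att: &Attestation, expected_code: &[u8; 32], challenge: u64) -> bool {
    att.code_hash == *expected_code && att.nonce == challenge
}
```

Without the nonce check, an operator could present one old, valid attestation forever while serving requests from outside the enclave.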

Redundant Execution

Multi-operator verification requires multiple independent operators to agree on a result before payment settles, catching any single dishonest party.

The simplest verification: have multiple independent parties run the same computation and compare results. N operators execute each job independently. If results match, the job succeeds. Disagreement triggers dispute resolution.

What it proves: At least one honest operator computed correctly (if results match). Collusion requires controlling multiple independent operators. Disagreement is always detectable.

What it doesn't prove: Correctness for non-deterministic computation. Which party is correct during disputes. And it's expensive: running computation 3x costs 3x.
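For deterministic jobs with byte-identical outputs, the comparison step reduces to counting matching submissions against a quorum, roughly:

```rust
use std::collections::HashMap;

/// Accept a result only if at least `quorum` operators submitted an
/// identical output; return None (dispute) otherwise.
pub fn aggregate(results: &[Vec<u8>], quorum: usize) -> Option<Vec<u8>> {
    let mut counts: HashMap<&[u8], usize> = HashMap::new();
    for r in results {
        *counts.entry(r.as_slice()).or_insert(0) += 1;
    }
    counts
        .into_iter()
        .find(|&(_, n)| n >= quorum)
        .map(|(r, _)| r.to_vec())
}
```

This is a sketch of the idea, not Tangle's on-chain logic; the real aggregation happens at the contract level with BLS signatures over the results.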

Optimistic Verification with Fraud Proofs

Optimistic verification assumes results are correct and lets anyone challenge during a window, requiring only one honest verifier to catch fraud.

Assume execution is correct, but allow challenges. One operator executes and commits to the result. During a challenge window, anyone can dispute by posting a bond. An interactive bisection protocol identifies the exact divergent instruction. The faulty party loses their bond. This is how Arbitrum and Optimism work. The fraud window for optimistic verification is typically 7 days, while Tangle's multi-operator consensus settles in seconds -- a critical difference for real-time AI services.

What it proves: Incorrect execution is eventually detectable and punishable. A single honest verifier is sufficient. Happy path requires only one execution.

What it doesn't prove: Real-time correctness (challenge window delay). Liveness (if no one monitors, fraud goes undetected). Non-deterministic computation (requires deterministic replay).
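The bisection protocol can be sketched as a binary search over two execution traces for the first step where they diverge. Here each trace entry stands in for a per-step state hash; real systems run this interactively on-chain, with each round narrowing the disputed range:

```rust
/// Given two equal-length traces of per-step state hashes, find the
/// first divergent step, comparing only prefixes (as the interactive
/// protocol does), never the whole trace at once on-chain.
pub fn first_divergence(honest: &[u64], claimed: &[u64]) -> Option<usize> {
    if honest == claimed {
        return None; // no fraud to localize
    }
    // Invariant: traces agree through `lo`, disagree by `hi`.
    let (mut lo, mut hi) = (0, honest.len());
    while hi - lo > 1 {
        let mid = lo + (hi - lo) / 2;
        if honest[..mid] == claimed[..mid] {
            lo = mid; // still agree up to mid: divergence is later
        } else {
            hi = mid; // already diverged before mid
        }
    }
    Some(lo)
}
```

Once the single divergent instruction is isolated, the chain only has to re-execute that one step to decide who loses their bond.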

MPC and ZK Proofs

MPC splits computation across parties without revealing inputs; ZK proofs verify correctness without revealing data.

MPC splits data into shares distributed across parties so they can jointly compute without revealing inputs to each other. No single party learns the inputs (if the corruption threshold holds), but it adds significant overhead and is practical only for high-value computations like key management and threshold signatures.

ZK proofs let a prover demonstrate computation correctness without revealing inputs. Verification is trustless and cryptographically sound. The catch: ZK proof generation costs 10-100x more compute than the original operation, and some ZK systems (Groth16, older PLONK variants) require a trusted setup ceremony. Transparent alternatives (STARKs) avoid trusted setup but produce larger proofs. For ML inference, ZK overhead is currently 10,000x to 100,000x slower than direct computation, though projects like EZKL are pushing this forward.
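The share-splitting idea behind MPC can be illustrated with additive secret sharing over a prime field. This is a toy scheme with a small fixed modulus and a caller-supplied randomness function; real deployments use cryptographic field sizes and a CSPRNG:

```rust
const P: u64 = 2_147_483_647; // Mersenne prime 2^31 - 1 (toy modulus)

/// Split `secret` into `n` additive shares that sum to it mod P.
/// `rand` stands in for a cryptographically secure RNG.
pub fn share(secret: u64, n: usize, mut rand: impl FnMut() -> u64) -> Vec<u64> {
    let mut shares: Vec<u64> = (0..n - 1).map(|_| rand() % P).collect();
    let partial: u64 = shares.iter().fold(0, |acc, s| (acc + s) % P);
    // Last share is chosen so all shares sum to the secret mod P.
    shares.push((secret % P + P - partial) % P);
    shares
}

/// All shares together reconstruct the secret; any strict subset is
/// uniformly random and reveals nothing about it.
pub fn reconstruct(shares: &[u64]) -> u64 {
    shares.iter().fold(0, |acc, s| (acc + s) % P)
}
```

Threshold schemes like FROST build on the same principle, but with polynomial sharing so that any t-of-n subset can reconstruct or sign.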

AI Inference Verification

The primary attack on AI services is model substitution -- running a cheaper model while billing for an expensive one.

Neural network inference is mathematically deterministic. The apparent non-determinism comes from temperature sampling, floating-point non-associativity across hardware, and library optimizations like cuDNN algorithm selection. With effort (temperature=0, fixed seeds, deterministic CUDA flags, identical hardware), you can achieve reproducible inference. The non-determinism is a practical constraint, not a fundamental one.

The primary attack vector for AI services is model substitution: claiming to run an expensive model while actually running a cheaper one. The following are general techniques applicable to AI verification, not Tangle SDK features today. Blueprint developers can implement these as application logic within their services:

Weight hash verification. Hash the model at load time, include the hash in TEE attestation. Verifies model identity at the hardware level.

Challenge prompts (canaries). Specific prompts with known expected outputs. Different models have distinct response patterns. Caveat: sophisticated operators could detect canary patterns and route only those to the real model.

Latency fingerprinting. Different models have characteristic timing profiles. Statistical analysis of response times can detect substitution.

Token probability extraction. For models that expose logprobs, the probability distribution over tokens is a fingerprint.

What remains unsolved: verifying output quality has no cryptographic solution. Detecting subtle degradation (aggressive quantization, caching) requires ongoing monitoring, not one-time verification.
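Weight hash verification, the first technique above, is simple to sketch as application logic. Here std's `DefaultHasher` is a placeholder for a cryptographic hash; a real implementation would use SHA-256 over the model file and embed the digest in the TEE attestation:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Fingerprint model weights at load time. (DefaultHasher is a
/// placeholder: production code would use SHA-256 here.)
pub fn weight_fingerprint(weights: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    weights.hash(&mut h);
    h.finish()
}

/// Refuse to serve if the loaded model doesn't match the hash the
/// blueprint committed to, catching substitution at load time.
pub fn check_model(weights: &[u8], committed: u64) -> bool {
    weight_fingerprint(weights) == committed
}
```

Note this only pins model identity at load time; it says nothing about what the operator does with requests afterward, which is why canaries and fingerprinting exist as complements.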

Economics as Backstop

When detection probability times slash amount exceeds the profit from cheating, rational operators will not cheat.

Every verification mechanism has assumptions that can fail. Economics provides the backstop.

The security equation: if P(detection) x slash_amount > profit_from_cheating, rational operators don't cheat.

A worked example: a service processes inference jobs worth $100 each. Operators stake $50,000. Detection mechanisms (TEE attestation, canary prompts, consistency checking) catch cheating 80% of the time within one week. Cheating on 100 jobs might net $5,000 in cost savings.

  • Expected value of cheating: $5,000 x 20% = $1,000 (if undetected)
  • Expected cost of cheating: $50,000 x 80% = $40,000 (if detected)

Rational operator: doesn't cheat.
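The arithmetic in the worked example can be written out directly (the numbers below mirror the scenario above):

```rust
/// Expected value of cheating: profit kept if undetected, minus the
/// stake slashed if detected, weighted by detection probability.
pub fn cheating_ev(profit: f64, stake: f64, p_detect: f64) -> f64 {
    (1.0 - p_detect) * profit - p_detect * stake
}
```

With the example's numbers ($5,000 profit, $50,000 stake, 80% detection), the expected value is $1,000 − $40,000 = −$39,000, so cheating is deeply negative-EV for a rational operator.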

This breaks down for irrational actors (nation-states, ideological attackers), correlated failures (volatile staked assets crashing when you need them most), and gray-area cheating (cutting corners in ways that are hard to detect). Economic security addresses catastrophic misbehavior better than subtle degradation.

How Verification Approaches Compare

Each verification mechanism makes different tradeoffs between speed, cost, generality, and security guarantees:

| Approach | Tangle | Optimistic Rollups | ZK Proofs | TEE Only |
| --- | --- | --- | --- | --- |
| Speed | Fast (multi-operator) | Slow (7-day window) | Slow (proof generation) | Fast |
| Cost | Medium (operator fees) | Low (amortized) | High (prover cost) | Low |
| Generality | Any computation | EVM only | Circuit-compatible | Any computation |
| Economic security | Staked collateral | Fraud bonds | Math guarantees | Hardware trust |
| AI workload support | Native | Limited | Very limited | Good |

What Tangle Supports Today vs. What's Coming

Tangle ships multi-operator BLS aggregation and staking today; TEE SDK hooks, fraud proofs, and ZK verification are next.

| Area | Today | Coming |
| --- | --- | --- |
| Redundant execution | Multi-operator BLS aggregation with configurable quorum (2-of-N, 3-of-N) | Optimistic fraud proofs for deterministic jobs |
| TEE attestation | TEE infrastructure support (feature flag) with attestation configuration in CLI | First-class TEE attestation hooks in the SDK |
| Economic security | Operator staking and slashing via service contracts | Programmable slashing conditions per blueprint |
| MPC | Threshold signature schemes (FROST) via blueprint services | Generalized MPC protocol support |
| ZK verification | ZK proof verification in custom job logic | Native ZK verifier integration, zkML support |
| AI-specific | Multi-operator consensus for inference verification | Canary prompts, model hash attestation, latency fingerprinting modules |

What This Means for Builders

Match the verification mechanism to the value you are protecting, and be explicit about which trust assumptions you are accepting.

Every verification mechanism has trust assumptions. TEEs trust hardware manufacturers. ZK may trust setup ceremonies. MPC trusts honest thresholds. The job is matching the right mechanism to the value of what you're protecting, and being explicit about which assumptions you're making.

Where Ritual focuses on on-chain inference and Phala on TEE privacy, Tangle combines cryptoeconomic guarantees with a general-purpose service layer that supports TEEs, MPC, redundant execution, and ZK proofs as composable primitives. Tangle gives you the building blocks. You compose them.

What's Next

The next post covers the developer experience: building blueprints, the SDK, deployment tooling, and the path from idea to production service.

If you're evaluating verification requirements for a specific use case, find us on Discord.

Frequently Asked Questions

How does Tangle verify AI inference?
Tangle uses multi-operator BLS aggregation to require independent operators to agree on results, with TEE attestation support (feature flag) for proving correct model loading, and operator staking as an economic backstop. Canary prompts and latency fingerprinting are on the roadmap.

What is TEE attestation for AI models?
TEE attestation is a cryptographic proof signed by hardware manufacturers (Intel, AMD) confirming that specific code ran inside an isolated enclave the operator cannot observe.

How does operator staking prevent cheating?
Operators lock collateral that is automatically destroyed (slashed) if verification detects misbehavior, making the expected cost of cheating exceed any possible profit.

What are slashing conditions in Tangle?
Slashing conditions are blueprint-defined rules that specify when an operator loses stake: failed verification, missed deadlines, provably malicious behavior, or submitting results that disagree with other operators.

How does Tangle compare to optimistic rollups for verification?
Optimistic rollups (like Arbitrum) use challenge windows and fraud proofs for on-chain state transitions. Tangle applies similar economic security to off-chain services like AI inference, code execution, and signing.

What is model substitution and how does Tangle detect it?
Model substitution is claiming to run an expensive AI model while secretly running a cheaper one. Tangle detects it through multi-operator consensus (disagreement reveals substitution) and TEE hardware attestation. Weight hash verification, canary prompts, and latency fingerprinting are roadmap features.

