<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Tangle Network</title>
    <description>The latest articles on DEV Community by Tangle Network (@tangle_network).</description>
    <link>https://dev.to/tangle_network</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3799215%2F60afa3c4-cf7d-4379-aa58-a2ab7ce3cd6d.png</url>
      <title>DEV Community: Tangle Network</title>
      <link>https://dev.to/tangle_network</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/tangle_network"/>
    <language>en</language>
    <item>
      <title>Remote Providers, Direct Runtimes, and Where Payment-Native Ingress Belongs in Deployment Architecture</title>
      <dc:creator>Tangle Network</dc:creator>
      <pubDate>Mon, 30 Mar 2026 21:49:49 +0000</pubDate>
      <link>https://dev.to/tangle_network/remote-providers-direct-runtimes-and-where-payment-native-ingress-belongs-in-deployment-5gj3</link>
      <guid>https://dev.to/tangle_network/remote-providers-direct-runtimes-and-where-payment-native-ingress-belongs-in-deployment-5gj3</guid>
      <description>&lt;h2&gt;
  
  
  The Architectural Decision Most Guides Skip
&lt;/h2&gt;

&lt;p&gt;Most infrastructure guides treat deployment and payment as separate concerns: pick a deployment target, then bolt on monetization later. In Blueprint, that separation doesn't exist at the layer where it matters. &lt;code&gt;X402Gateway&lt;/code&gt; implements &lt;code&gt;BackgroundService&lt;/code&gt; — it runs concurrently with the job runner, not in front of it, not as a proxy, not as middleware. That single architectural choice changes how you compose payment-gated services across every deployment topology.&lt;/p&gt;

&lt;p&gt;Before mapping the decision tree, it's worth understanding why this matters operationally.&lt;/p&gt;

&lt;p&gt;A conventional payment gateway sits in front of compute: request → payment check → compute. The payment layer can only see requests that arrive through it. When it's down, no compute runs. When you want to add a second ingress — say, on-chain Tangle events alongside HTTP payments — you need a mux layer the gateway doesn't natively provide.&lt;/p&gt;

&lt;p&gt;Blueprint's model inverts this. The runner is the mux. &lt;code&gt;TangleProducer&lt;/code&gt;, &lt;code&gt;X402Producer&lt;/code&gt;, and any other producers you wire in are concurrent streams feeding the same &lt;code&gt;Router&lt;/code&gt;. Each producer is independent. The x402 payment HTTP server runs as a &lt;code&gt;BackgroundService&lt;/code&gt; alongside heartbeats, metrics servers, and TEE auth services — it's structurally identical to those, just one more concurrent task in the runner's lifecycle.&lt;/p&gt;
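&lt;p&gt;The shape is easy to model without any of Blueprint's types. The following std-only Rust sketch (the &lt;code&gt;run_producers&lt;/code&gt; function and &lt;code&gt;JobEvent&lt;/code&gt; enum are invented for illustration) shows two independent producers feeding one router-side stream:&lt;/p&gt;

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical stand-ins for TangleProducer and X402Producer events;
// the point is the shape: independent concurrent producers, one
// shared stream on the router side.
#[derive(Debug, PartialEq)]
enum JobEvent {
    TangleCall(u8),
    X402Paid(u8),
}

fn run_producers() -> Vec<JobEvent> {
    let (tx, rx) = mpsc::channel();

    let tangle_tx = tx.clone();
    thread::spawn(move || {
        // Stand-in for TangleProducer: emits on-chain job calls.
        tangle_tx.send(JobEvent::TangleCall(0)).unwrap();
    });
    thread::spawn(move || {
        // Stand-in for X402Producer: emits paid HTTP requests. `tx` is
        // moved in, so the channel closes once both producers finish.
        tx.send(JobEvent::X402Paid(0)).unwrap();
    });

    // The "router" side: a single stream, regardless of ingress.
    rx.iter().collect()
}

fn main() {
    let events = run_producers();
    assert_eq!(events.len(), 2);
    println!("routed {} events from 2 producers", events.len());
}
```

&lt;p&gt;Either producer can die or stall without taking the other down; the router simply stops seeing events from that ingress.&lt;/p&gt;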

&lt;p&gt;This post covers what that looks like at each deployment layer.&lt;/p&gt;

&lt;h2&gt;
  
  
  DeploymentTarget: What Each Variant Means Operationally
&lt;/h2&gt;

&lt;p&gt;Blueprint's remote provider crate models deployment topology as an enum. From &lt;code&gt;crates/blueprint-remote-providers/src/core/deployment_target.rs&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;enum&lt;/span&gt; &lt;span class="n"&gt;DeploymentTarget&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="cd"&gt;/// Deploy to virtual machines via SSH + Docker/Podman&lt;/span&gt;
    &lt;span class="n"&gt;VirtualMachine&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;runtime&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;ContainerRuntime&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;

    &lt;span class="cd"&gt;/// Deploy to managed Kubernetes service&lt;/span&gt;
    &lt;span class="n"&gt;ManagedKubernetes&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;cluster_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;namespace&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;

    &lt;span class="cd"&gt;/// Deploy to existing generic Kubernetes cluster&lt;/span&gt;
    &lt;span class="n"&gt;GenericKubernetes&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;Option&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;namespace&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;

    &lt;span class="cd"&gt;/// Deploy to serverless container platform&lt;/span&gt;
    &lt;span class="n"&gt;Serverless&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nn"&gt;std&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nn"&gt;collections&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;HashMap&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;enum&lt;/span&gt; &lt;span class="n"&gt;ContainerRuntime&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;Docker&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;Podman&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;Containerd&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Two helper methods on &lt;code&gt;DeploymentTarget&lt;/code&gt; make the operational distinctions precise:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;requires_vm_provisioning&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="k"&gt;self&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;bool&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nd"&gt;matches!&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;Self&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;VirtualMachine&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="o"&gt;..&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;uses_kubernetes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="k"&gt;self&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;bool&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nd"&gt;matches!&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="k"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="k"&gt;Self&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;ManagedKubernetes&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="o"&gt;..&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="p"&gt;|&lt;/span&gt; &lt;span class="k"&gt;Self&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;GenericKubernetes&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="o"&gt;..&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;VirtualMachine&lt;/code&gt; is the only target that triggers SSH provisioning. Both Kubernetes variants get kubectl-based deployment. &lt;code&gt;Serverless&lt;/code&gt; is a platform-specific escape hatch backed by a &lt;code&gt;HashMap&amp;lt;String, String&amp;gt;&lt;/code&gt; for whatever the target platform requires.&lt;/p&gt;
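&lt;p&gt;To see how the enum drives dispatch, here is a trimmed, self-contained mirror of the variants and helpers above, plus a hypothetical &lt;code&gt;deploy_path&lt;/code&gt; method (not part of the crate) that makes the three operational paths explicit:&lt;/p&gt;

```rust
use std::collections::HashMap;

// Trimmed mirror of the DeploymentTarget enum quoted above; the
// deploy_path helper is invented here to make dispatch concrete.
#[allow(dead_code)]
enum ContainerRuntime { Docker, Podman, Containerd }

#[allow(dead_code)]
enum DeploymentTarget {
    VirtualMachine { runtime: ContainerRuntime },
    ManagedKubernetes { cluster_id: String, namespace: String },
    GenericKubernetes { context: Option<String>, namespace: String },
    Serverless { config: HashMap<String, String> },
}

impl DeploymentTarget {
    fn requires_vm_provisioning(&self) -> bool {
        matches!(self, Self::VirtualMachine { .. })
    }
    fn uses_kubernetes(&self) -> bool {
        matches!(self, Self::ManagedKubernetes { .. } | Self::GenericKubernetes { .. })
    }
    // Hypothetical: which operational path each variant takes.
    fn deploy_path(&self) -> &'static str {
        match self {
            Self::VirtualMachine { .. } => "ssh-provision",
            Self::ManagedKubernetes { .. } | Self::GenericKubernetes { .. } => "kubectl-apply",
            Self::Serverless { .. } => "platform-api",
        }
    }
}

fn main() {
    let vm = DeploymentTarget::VirtualMachine { runtime: ContainerRuntime::Docker };
    assert!(vm.requires_vm_provisioning());
    assert!(!vm.uses_kubernetes());
    assert_eq!(vm.deploy_path(), "ssh-provision");

    let k8s = DeploymentTarget::GenericKubernetes { context: None, namespace: "ns".into() };
    assert!(k8s.uses_kubernetes());
    println!("vm: {}, k8s: {}", vm.deploy_path(), k8s.deploy_path());
}
```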

&lt;p&gt;The &lt;code&gt;DeploymentConfig&lt;/code&gt; builder methods make the intended usage explicit:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="c1"&gt;// VM: provision a Hetzner node, SSH in, run Docker&lt;/span&gt;
&lt;span class="nn"&gt;DeploymentConfig&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nn"&gt;CloudProvider&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;BareMetal&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nd"&gt;vec!&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;"95.216.8.253"&lt;/span&gt;&lt;span class="nf"&gt;.into&lt;/span&gt;&lt;span class="p"&gt;()]),&lt;/span&gt; &lt;span class="s"&gt;"eu-central"&lt;/span&gt;&lt;span class="nf"&gt;.into&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="nn"&gt;ContainerRuntime&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;Docker&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;// Managed K8s: EKS cluster, blueprint-ns namespace&lt;/span&gt;
&lt;span class="nn"&gt;DeploymentConfig&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;managed_k8s&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nn"&gt;CloudProvider&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;AWS&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"us-east-1"&lt;/span&gt;&lt;span class="nf"&gt;.into&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="s"&gt;"my-eks-cluster"&lt;/span&gt;&lt;span class="nf"&gt;.into&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="s"&gt;"blueprint-ns"&lt;/span&gt;&lt;span class="nf"&gt;.into&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;

&lt;span class="c1"&gt;// Generic K8s: existing cluster, optional context switch&lt;/span&gt;
&lt;span class="nn"&gt;DeploymentConfig&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;generic_k8s&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;Some&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"staging-context"&lt;/span&gt;&lt;span class="nf"&gt;.into&lt;/span&gt;&lt;span class="p"&gt;()),&lt;/span&gt; &lt;span class="s"&gt;"blueprint-remote"&lt;/span&gt;&lt;span class="nf"&gt;.into&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  CloudProvider: Local vs Remote Boundary
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;CloudProvider&lt;/code&gt; is defined in &lt;code&gt;crates/pricing-engine/src/types.rs&lt;/code&gt; and re-exported throughout the remote provider crate:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;enum&lt;/span&gt; &lt;span class="n"&gt;CloudProvider&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;AWS&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;GCP&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;Azure&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;DigitalOcean&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;Vultr&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;Linode&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;Generic&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;DockerLocal&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="nf"&gt;DockerRemote&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="nf"&gt;BareMetal&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;Vec&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;DockerLocal&lt;/code&gt; is the development boundary. When this provider is selected, the runner executes containers on the host directly — no provisioning, no TLS tunnel, no remote endpoint registration. It's what you use during local development and integration testing.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;requires_tunnel&lt;/code&gt; extension method (from &lt;code&gt;crates/blueprint-remote-providers/src/core/remote.rs&lt;/code&gt;) makes the network topology consequence explicit:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;requires_tunnel&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="k"&gt;self&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;bool&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nd"&gt;matches!&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="k"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="nn"&gt;CloudProvider&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;Generic&lt;/span&gt; &lt;span class="p"&gt;|&lt;/span&gt; &lt;span class="nn"&gt;CloudProvider&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;BareMetal&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;_&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;|&lt;/span&gt; &lt;span class="nn"&gt;CloudProvider&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;DockerLocal&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;Generic&lt;/code&gt;, &lt;code&gt;BareMetal&lt;/code&gt;, and &lt;code&gt;DockerLocal&lt;/code&gt; require a tunnel for private networking — they have no managed load balancer to front the service. AWS, GCP, Azure, DigitalOcean, Vultr, and Linode get &lt;code&gt;LoadBalancer&lt;/code&gt; or &lt;code&gt;ClusterIP&lt;/code&gt; service types from the managed Kubernetes provider, so they don't need a tunnel.&lt;/p&gt;
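&lt;p&gt;That consequence fits in a few lines. This sketch mirrors the enum and the &lt;code&gt;requires_tunnel&lt;/code&gt; rule above; &lt;code&gt;ingress_strategy&lt;/code&gt; is an invented summary helper, not crate API:&lt;/p&gt;

```rust
// Mirror of the CloudProvider enum with the tunnel rule from the post;
// ingress_strategy summarizes the consequence and is hypothetical.
#[allow(dead_code)]
enum CloudProvider {
    AWS, GCP, Azure, DigitalOcean, Vultr, Linode,
    Generic, DockerLocal,
    DockerRemote(String),
    BareMetal(Vec<String>),
}

impl CloudProvider {
    // No managed load balancer means a tunnel is required.
    fn requires_tunnel(&self) -> bool {
        matches!(self, Self::Generic | Self::BareMetal(_) | Self::DockerLocal)
    }
    fn ingress_strategy(&self) -> &'static str {
        if self.requires_tunnel() {
            "tunnel" // private networking, no managed LB available
        } else {
            "managed LoadBalancer/ClusterIP"
        }
    }
}

fn main() {
    assert!(CloudProvider::BareMetal(vec!["10.0.0.5".into()]).requires_tunnel());
    assert!(CloudProvider::DockerLocal.requires_tunnel());
    assert!(!CloudProvider::AWS.requires_tunnel());
    println!("AWS ingress: {}", CloudProvider::AWS.ingress_strategy());
}
```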

&lt;p&gt;For &lt;code&gt;CloudConfig&lt;/code&gt; — the top-level credentials structure — provider credentials are loaded from environment variables with priority ordering. From &lt;code&gt;crates/blueprint-remote-providers/src/config.rs&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="n"&gt;CloudConfig&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="n"&gt;enabled&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;bool&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="n"&gt;aws&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;Option&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;AwsConfig&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;    &lt;span class="c1"&gt;// priority 10&lt;/span&gt;
    &lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="n"&gt;gcp&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;Option&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;GcpConfig&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;    &lt;span class="c1"&gt;// priority 8&lt;/span&gt;
    &lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="n"&gt;azure&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;Option&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;AzureConfig&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// priority 7&lt;/span&gt;
    &lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="n"&gt;digital_ocean&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;Option&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;DigitalOceanConfig&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// priority 5&lt;/span&gt;
    &lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="n"&gt;vultr&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;Option&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;VultrConfig&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// priority 3&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each provider config embeds its own priority. When multiple providers are configured, the highest-priority one is preferred for deployment decisions.&lt;/p&gt;
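&lt;p&gt;A minimal sketch of that selection logic (&lt;code&gt;select_provider&lt;/code&gt; is a hypothetical function; the priority numbers are the ones annotated in the struct above):&lt;/p&gt;

```rust
// Pick the configured provider with the highest priority, mirroring
// the CloudConfig annotations: aws=10, gcp=8, azure=7, do=5, vultr=3.
fn select_provider(configured: &[(&'static str, u8)]) -> Option<&'static str> {
    configured
        .iter()
        .max_by_key(|(_, priority)| *priority)
        .map(|(name, _)| *name)
}

fn main() {
    // aws, gcp, and digital_ocean configured simultaneously: aws wins.
    let configured = [("aws", 10), ("gcp", 8), ("digital_ocean", 5)];
    assert_eq!(select_provider(&configured), Some("aws"));

    // Drop aws from the config and the next priority takes over.
    assert_eq!(select_provider(&configured[1..]), Some("gcp"));
    println!("selected: {:?}", select_provider(&configured));
}
```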

&lt;h2&gt;
  
  
  SecureBridge: mTLS for Remote Execution
&lt;/h2&gt;

&lt;p&gt;When a Blueprint job runs on a remote VM or remote Docker host, the manager needs a secure authenticated tunnel to that instance. That's &lt;code&gt;SecureBridge&lt;/code&gt; in &lt;code&gt;crates/blueprint-remote-providers/src/secure_bridge.rs&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The endpoint data structure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="n"&gt;RemoteEndpoint&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="n"&gt;instance_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="c1"&gt;// cloud instance ID&lt;/span&gt;
    &lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="n"&gt;host&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;         &lt;span class="c1"&gt;// hostname or IP&lt;/span&gt;
    &lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="n"&gt;port&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;u16&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;            &lt;span class="c1"&gt;// blueprint service port&lt;/span&gt;
    &lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="n"&gt;use_tls&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;bool&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;        &lt;span class="c1"&gt;// TLS for this connection&lt;/span&gt;
    &lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="n"&gt;service_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;u64&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="n"&gt;blueprint_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;u64&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;SecureBridgeConfig&lt;/code&gt; defaults to mTLS on:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;impl&lt;/span&gt; &lt;span class="nb"&gt;Default&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;SecureBridgeConfig&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;default&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="k"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="k"&gt;Self&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;Self&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;enable_mtls&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;connect_timeout_secs&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;30&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;idle_timeout_secs&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;300&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;max_connections_per_endpoint&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In production (&lt;code&gt;BLUEPRINT_ENV=production&lt;/code&gt;), certificate presence is enforced — the bridge fails hard if &lt;code&gt;BLUEPRINT_CLIENT_CERT_PATH&lt;/code&gt;, &lt;code&gt;BLUEPRINT_CLIENT_KEY_PATH&lt;/code&gt;, or &lt;code&gt;BLUEPRINT_CA_CERT_PATH&lt;/code&gt; don't resolve to valid PEM files. In development, it falls back to system certs with a warning. Attempting to disable mTLS entirely is likewise rejected in production:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;is_production&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;Err&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nn"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;ConfigurationError&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="s"&gt;"mTLS cannot be disabled in production environment"&lt;/span&gt;&lt;span class="nf"&gt;.into&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
    &lt;span class="p"&gt;));&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
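&lt;p&gt;A simplified stand-in for that gate, using the environment semantics described above (&lt;code&gt;resolve_certs&lt;/code&gt; is illustrative, not the crate's actual code):&lt;/p&gt;

```rust
use std::path::Path;

// Sketch of the production gate: all three cert paths must resolve to
// real files in production; in development, missing certs only
// downgrade to system trust with a warning. Function name and shape
// are assumptions for illustration.
fn resolve_certs(
    env: &str,
    cert: Option<&str>,
    key: Option<&str>,
    ca: Option<&str>,
) -> Result<bool, String> {
    let all_present = [cert, key, ca]
        .iter()
        .all(|p| p.map_or(false, |path| Path::new(path).exists()));
    match (env, all_present) {
        (_, true) => Ok(true), // mTLS material available
        ("production", false) => Err("mTLS certificates are required in production".into()),
        (_, false) => {
            eprintln!("warning: falling back to system certs");
            Ok(false)
        }
    }
}

fn main() {
    // Production with no certs: hard failure, no fallback.
    assert!(resolve_certs("production", None, None, None).is_err());
    // Development with no certs: degraded but running.
    assert_eq!(resolve_certs("development", None, None, None), Ok(false));
    println!("production gate enforced");
}
```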



&lt;p&gt;There's also SSRF protection on endpoint registration. Endpoints that resolve to public IPs are rejected — only loopback and private ranges are accepted:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;validate_endpoint_security&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;endpoint&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;RemoteEndpoint&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;Result&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="nf"&gt;Ok&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ip&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;host&lt;/span&gt;&lt;span class="py"&gt;.parse&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nn"&gt;std&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nn"&gt;net&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;IpAddr&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;match&lt;/span&gt; &lt;span class="n"&gt;ip&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="nn"&gt;std&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nn"&gt;net&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nn"&gt;IpAddr&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;V4&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ipv4&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="n"&gt;ipv4&lt;/span&gt;&lt;span class="nf"&gt;.is_loopback&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="n"&gt;ipv4&lt;/span&gt;&lt;span class="nf"&gt;.is_private&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;Err&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nn"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;ConfigurationError&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
                        &lt;span class="s"&gt;"Remote endpoints must use localhost or private IP ranges only"&lt;/span&gt;&lt;span class="nf"&gt;.into&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
                    &lt;span class="p"&gt;));&lt;/span&gt;
                &lt;span class="p"&gt;}&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
            &lt;span class="c1"&gt;// ...&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This means the SecureBridge is designed for tunneling to instances on private networks — not direct public internet exposure. The cloud provider's network layer (VPC, private subnet, VPN) is the outer security boundary; SecureBridge handles authentication within that perimeter.&lt;/p&gt;
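&lt;p&gt;The IP rule is easy to reproduce with the standard library. This is a condensed version of the check, not the crate's exact code; how hostnames (as opposed to literal IPs) are resolved and validated is left to the real implementation:&lt;/p&gt;

```rust
use std::net::IpAddr;

// Condensed SSRF guard: only loopback and private IPv4 ranges (plus
// IPv6 loopback) may be registered as remote endpoints.
fn validate_endpoint_host(host: &str) -> Result<(), String> {
    if let Ok(ip) = host.parse::<IpAddr>() {
        match ip {
            IpAddr::V4(v4) if v4.is_loopback() || v4.is_private() => Ok(()),
            IpAddr::V6(v6) if v6.is_loopback() => Ok(()),
            _ => Err("endpoints must use localhost or private IP ranges".into()),
        }
    } else {
        // Hostname rather than literal IP: resolution-time checks
        // are out of scope for this sketch.
        Ok(())
    }
}

fn main() {
    assert!(validate_endpoint_host("127.0.0.1").is_ok());
    assert!(validate_endpoint_host("10.2.3.4").is_ok());
    assert!(validate_endpoint_host("192.168.1.10").is_ok());
    assert!(validate_endpoint_host("8.8.8.8").is_err()); // public IP rejected
    println!("SSRF guard behaves as described");
}
```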

&lt;h2&gt;
  
  
  X402Gateway as BackgroundService
&lt;/h2&gt;

&lt;p&gt;Here's where the deployment model and the payment model intersect. &lt;code&gt;X402Gateway&lt;/code&gt; implements &lt;code&gt;BackgroundService&lt;/code&gt;. From &lt;code&gt;crates/x402/src/gateway.rs&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;impl&lt;/span&gt; &lt;span class="n"&gt;BackgroundService&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;X402Gateway&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;start&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="k"&gt;self&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;Result&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nn"&gt;oneshot&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;Receiver&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nb"&gt;Result&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="n"&gt;RunnerError&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;RunnerError&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;tx&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;rx&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nn"&gt;oneshot&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;channel&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
        &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;router&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;self&lt;/span&gt;&lt;span class="nf"&gt;.build_router&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
        &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;addr&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;self&lt;/span&gt;&lt;span class="py"&gt;.config.bind_address&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

        &lt;span class="nn"&gt;tokio&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;spawn&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;move&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="nn"&gt;tracing&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nd"&gt;info!&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;%&lt;/span&gt;&lt;span class="n"&gt;addr&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"x402 payment gateway starting"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
            &lt;span class="c1"&gt;// ... GC task for expired quotes ...&lt;/span&gt;
            &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;listener&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nn"&gt;tokio&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nn"&gt;net&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nn"&gt;TcpListener&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;bind&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;addr&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="k"&gt;.await&lt;/span&gt;&lt;span class="o"&gt;?&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
            &lt;span class="nn"&gt;axum&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;serve&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;listener&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;router&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="k"&gt;.await&lt;/span&gt;
        &lt;span class="p"&gt;});&lt;/span&gt;

        &lt;span class="nf"&gt;Ok&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;rx&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;build_router&lt;/code&gt; method registers five routes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="nn"&gt;Router&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="nf"&gt;.route&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"/x402/jobs/{service_id}/{job_index}"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nf"&gt;post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;handle_job_request&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
    &lt;span class="nf"&gt;.route&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"/x402/health"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;health_check&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
    &lt;span class="nf"&gt;.route&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"/x402/stats"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;get_stats&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
    &lt;span class="nf"&gt;.route&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"/x402/jobs/{service_id}/{job_index}/auth-dry-run"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nf"&gt;post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;post_auth_dry_run&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
    &lt;span class="nf"&gt;.route&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"/x402/jobs/{service_id}/{job_index}/price"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;get_job_price&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;POST /x402/jobs/{service_id}/{job_index}&lt;/code&gt; is the payment-gated execution endpoint. The &lt;code&gt;X402Middleware&lt;/code&gt; intercepts this route: if the payment header is absent, it responds 402 with settlement options; if the header is present, the payment is verified via the facilitator and settlement completes before the handler runs.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;GET /x402/jobs/{service_id}/{job_index}/price&lt;/code&gt; is the discovery endpoint. Clients call this first to learn what payment is required — network, token, amount — without triggering execution.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;POST /x402/jobs/{service_id}/{job_index}/auth-dry-run&lt;/code&gt; lets callers check &lt;code&gt;RestrictedPaid&lt;/code&gt; access control eligibility before spending a payment. Useful for wallets that want to show the user whether they're permitted before presenting the payment UI.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;GET /x402/stats&lt;/code&gt; exposes the &lt;code&gt;GatewayCounters&lt;/code&gt; snapshot: accepted payments, policy denials, replay guard hits, enqueue failures. Low-overhead observability without external instrumentation.&lt;/p&gt;
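
&lt;p&gt;Counter snapshots like this are typically plain atomics. As a rough sketch of the pattern — not the SDK's actual &lt;code&gt;GatewayCounters&lt;/code&gt; type, whose fields are assumed here for illustration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;use std::sync::atomic::{AtomicU64, Ordering};

#[derive(Default)]
struct Counters {
    accepted: AtomicU64,
    policy_denials: AtomicU64,
}

impl Counters {
    // Relaxed loads suffice: the snapshot is advisory observability,
    // not a transactional read.
    fn snapshot(&amp;amp;self) -&amp;gt; (u64, u64) {
        (
            self.accepted.load(Ordering::Relaxed),
            self.policy_denials.load(Ordering::Relaxed),
        )
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
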

&lt;p&gt;The x402 server binds to a separate address from any primary runner ports (default &lt;code&gt;0.0.0.0:8402&lt;/code&gt;), configured in &lt;code&gt;X402Config.bind_address&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Producer Channel: How Payments Become JobCalls
&lt;/h2&gt;

&lt;p&gt;The mechanical connection between the HTTP server and the job runner is &lt;code&gt;X402Producer&lt;/code&gt;. From &lt;code&gt;crates/x402/src/producer.rs&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="n"&gt;X402Producer&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;rx&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nn"&gt;mpsc&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;UnboundedReceiver&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;VerifiedPayment&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;impl&lt;/span&gt; &lt;span class="n"&gt;X402Producer&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;channel&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="k"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;Self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nn"&gt;mpsc&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;UnboundedSender&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;VerifiedPayment&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;tx&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;rx&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nn"&gt;mpsc&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;unbounded_channel&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
        &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;Self&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;rx&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="n"&gt;tx&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;impl&lt;/span&gt; &lt;span class="n"&gt;Stream&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;X402Producer&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;type&lt;/span&gt; &lt;span class="n"&gt;Item&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;Result&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;JobCall&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;BoxError&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;poll_next&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;mut&lt;/span&gt; &lt;span class="k"&gt;self&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;Pin&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&amp;amp;&lt;/span&gt;&lt;span class="k"&gt;mut&lt;/span&gt; &lt;span class="k"&gt;Self&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;cx&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="k"&gt;mut&lt;/span&gt; &lt;span class="n"&gt;Context&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nv"&gt;'_&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;Poll&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nb"&gt;Option&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="k"&gt;Self&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;Item&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;match&lt;/span&gt; &lt;span class="k"&gt;self&lt;/span&gt;&lt;span class="py"&gt;.rx&lt;/span&gt;&lt;span class="nf"&gt;.poll_recv&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cx&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="nn"&gt;Poll&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;Ready&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;Some&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;payment&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="k"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nn"&gt;Poll&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;Ready&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;Some&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;Ok&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;payment&lt;/span&gt;&lt;span class="nf"&gt;.into_job_call&lt;/span&gt;&lt;span class="p"&gt;()))),&lt;/span&gt;
            &lt;span class="nn"&gt;Poll&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;Ready&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;None&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nn"&gt;Poll&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;Ready&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;None&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
            &lt;span class="nn"&gt;Poll&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;Pending&lt;/span&gt; &lt;span class="k"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nn"&gt;Poll&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;Pending&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;X402Gateway::new&lt;/code&gt; creates both sides of this channel internally:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;new&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;X402Config&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;job_pricing&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;HashMap&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;u64&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;u32&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="n"&gt;U256&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;Result&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;Self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;X402Producer&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="n"&gt;X402Error&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// ...&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;producer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;payment_tx&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nn"&gt;X402Producer&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;channel&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="c1"&gt;// ...&lt;/span&gt;
    &lt;span class="nf"&gt;Ok&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="n"&gt;gateway&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;producer&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;payment_tx&lt;/code&gt; sender lives inside the gateway. When a payment verifies and settles, the handler sends a &lt;code&gt;VerifiedPayment&lt;/code&gt; through &lt;code&gt;payment_tx&lt;/code&gt;. The &lt;code&gt;X402Producer&lt;/code&gt; (which holds &lt;code&gt;rx&lt;/code&gt;) is wired into the runner as a standard producer — the runner polls it alongside &lt;code&gt;TangleProducer&lt;/code&gt; or any other source.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;VerifiedPayment.into_job_call()&lt;/code&gt; stamps the resulting &lt;code&gt;JobCall&lt;/code&gt; with metadata that job handlers can inspect:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="n"&gt;X402_QUOTE_DIGEST_KEY&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="nb"&gt;str&lt;/span&gt;  &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"X-X402-QUOTE-DIGEST"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="n"&gt;X402_PAYMENT_NETWORK_KEY&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="nb"&gt;str&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"X-X402-PAYMENT-NETWORK"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="n"&gt;X402_PAYMENT_TOKEN_KEY&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="nb"&gt;str&lt;/span&gt;  &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"X-X402-PAYMENT-TOKEN"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="n"&gt;X402_ORIGIN_KEY&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="nb"&gt;str&lt;/span&gt;         &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"X-X402-ORIGIN"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="n"&gt;X402_SERVICE_ID_KEY&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="nb"&gt;str&lt;/span&gt;     &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"X-TANGLE-SERVICE-ID"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="n"&gt;X402_CALL_ID_KEY&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="nb"&gt;str&lt;/span&gt;        &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"X-TANGLE-CALL-ID"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="n"&gt;X402_CALLER_KEY&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="nb"&gt;str&lt;/span&gt;         &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"X-TANGLE-CALLER"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The job handler receives a &lt;code&gt;JobCall&lt;/code&gt; identical in shape to one from Tangle, but the metadata tells it the call came from x402 and on which chain. Nothing in the dispatch path treats x402 jobs differently — the router doesn't know or care.&lt;/p&gt;
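
&lt;p&gt;As an illustration of what a handler could do with that metadata, here is a sketch that branches on origin — using a plain &lt;code&gt;HashMap&lt;/code&gt; as a stand-in for the real &lt;code&gt;JobCall&lt;/code&gt; metadata accessor, which this example does not reproduce:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;use std::collections::HashMap;

const X402_ORIGIN_KEY: &amp;amp;str = "X-X402-ORIGIN";
const X402_PAYMENT_NETWORK_KEY: &amp;amp;str = "X-X402-PAYMENT-NETWORK";

// Stand-in: real handlers read these keys from JobCall metadata,
// not from a bare map.
fn describe_origin(metadata: &amp;amp;HashMap&amp;lt;String, String&amp;gt;) -&amp;gt; String {
    match metadata.get(X402_ORIGIN_KEY) {
        Some(_) =&amp;gt; {
            let network = metadata
                .get(X402_PAYMENT_NETWORK_KEY)
                .map(String::as_str)
                .unwrap_or("unknown");
            format!("paid x402 call settled on {network}")
        }
        None =&amp;gt; "on-chain Tangle call".to_string(),
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
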

&lt;h2&gt;
  
  
  X402InvocationMode: Per-Job Access Control
&lt;/h2&gt;

&lt;p&gt;Each job's x402 accessibility is independently configured via &lt;code&gt;X402InvocationMode&lt;/code&gt;. From &lt;code&gt;crates/x402/src/config.rs&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;enum&lt;/span&gt; &lt;span class="n"&gt;X402InvocationMode&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nd"&gt;#[default]&lt;/span&gt;
    &lt;span class="n"&gt;Disabled&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;PublicPaid&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;RestrictedPaid&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;Disabled&lt;/code&gt; (the default) means the gateway will return 403 for this job even if payment is provided — the job is not reachable via x402. &lt;code&gt;PublicPaid&lt;/code&gt; is open to anyone who can pay. &lt;code&gt;RestrictedPaid&lt;/code&gt; adds an &lt;code&gt;isPermittedCaller&lt;/code&gt; check against a Tangle contract before execution.&lt;/p&gt;
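
&lt;p&gt;The gating decision reduces to a small match. A simplified sketch — mirroring the enum above and the statuses just described, not the SDK's actual dispatch code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;enum X402InvocationMode {
    Disabled,
    PublicPaid,
    RestrictedPaid,
}

// Simplified: the HTTP status the policy check implies. The on-chain
// isPermittedCaller lookup behind `caller_permitted` is elided.
fn policy_status(mode: &amp;amp;X402InvocationMode, caller_permitted: bool) -&amp;gt; u16 {
    match mode {
        X402InvocationMode::Disabled =&amp;gt; 403,
        X402InvocationMode::PublicPaid =&amp;gt; 200,
        X402InvocationMode::RestrictedPaid if caller_permitted =&amp;gt; 200,
        X402InvocationMode::RestrictedPaid =&amp;gt; 403,
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
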

&lt;p&gt;For &lt;code&gt;RestrictedPaid&lt;/code&gt;, &lt;code&gt;X402CallerAuthMode&lt;/code&gt; determines how caller identity is asserted:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;enum&lt;/span&gt; &lt;span class="n"&gt;X402CallerAuthMode&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nd"&gt;#[default]&lt;/span&gt;
    &lt;span class="n"&gt;PayerIsCaller&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;DelegatedCallerSignature&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;PaymentOnly&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// invalid for RestrictedPaid — config validation rejects this&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;PayerIsCaller&lt;/code&gt; infers identity from the settled payment's payer address — simplest, works for wallets that pay for themselves. &lt;code&gt;DelegatedCallerSignature&lt;/code&gt; supports scenarios where an agent pays on behalf of a user: the agent includes &lt;code&gt;X-TANGLE-CALLER&lt;/code&gt;, &lt;code&gt;X-TANGLE-CALLER-SIG&lt;/code&gt;, &lt;code&gt;X-TANGLE-CALLER-NONCE&lt;/code&gt;, and &lt;code&gt;X-TANGLE-CALLER-EXPIRY&lt;/code&gt; headers. The gateway verifies the signature and runs the Tangle permission check against the declared caller, not the payer.&lt;/p&gt;
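
&lt;p&gt;The nonce and expiry headers imply standard freshness and replay checks. A minimal sketch of that part — the signature verification against &lt;code&gt;X-TANGLE-CALLER-SIG&lt;/code&gt; is deliberately elided, and this is not the gateway's actual code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;use std::collections::HashSet;
use std::time::{SystemTime, UNIX_EPOCH};

// Sketch of the checks implied by X-TANGLE-CALLER-NONCE and
// X-TANGLE-CALLER-EXPIRY; signature verification is elided.
fn accept_delegation(
    expiry_unix_secs: u64,
    nonce: &amp;amp;str,
    seen_nonces: &amp;amp;mut HashSet&amp;lt;String&amp;gt;,
) -&amp;gt; bool {
    let now = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("clock before epoch")
        .as_secs();
    if now &amp;gt;= expiry_unix_secs {
        return false; // expired delegation
    }
    // insert() returns false when the nonce was already seen: a replay.
    seen_nonces.insert(nonce.to_string())
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
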

&lt;p&gt;&lt;code&gt;RestrictedPaid&lt;/code&gt; requires both &lt;code&gt;tangle_rpc_url&lt;/code&gt; and &lt;code&gt;tangle_contract&lt;/code&gt; in the &lt;code&gt;JobPolicyConfig&lt;/code&gt;. Config validation rejects &lt;code&gt;RestrictedPaid&lt;/code&gt; with &lt;code&gt;PaymentOnly&lt;/code&gt; auth or missing RPC config at startup.&lt;/p&gt;
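
&lt;p&gt;In configuration terms, a &lt;code&gt;RestrictedPaid&lt;/code&gt; job policy might look like the following — the TOML key names and layout are assumptions for illustration, not the documented schema; only the field requirements come from the validation rules above:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;# Hypothetical layout — the exact JobPolicyConfig TOML keys are assumed.
mode = "RestrictedPaid"
caller_auth = "DelegatedCallerSignature"   # PaymentOnly would be rejected
tangle_rpc_url = "https://rpc.example.org" # placeholder; required for RestrictedPaid
tangle_contract = "0x0000000000000000000000000000000000000000" # placeholder; required
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
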

&lt;h2&gt;
  
  
  The Full Builder Pattern
&lt;/h2&gt;

&lt;p&gt;Here's how these pieces compose in &lt;code&gt;BlueprintRunner&lt;/code&gt;. From &lt;code&gt;crates/runner/src/lib.rs&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;background_service&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;mut&lt;/span&gt; &lt;span class="k"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;service&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;impl&lt;/span&gt; &lt;span class="n"&gt;BackgroundService&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="k"&gt;'static&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="k"&gt;Self&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;self&lt;/span&gt;&lt;span class="py"&gt;.background_services&lt;/span&gt;&lt;span class="nf"&gt;.push&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nn"&gt;DynBackgroundService&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;boxed&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;service&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
    &lt;span class="k"&gt;self&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="n"&gt;producer&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;E&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="k"&gt;mut&lt;/span&gt; &lt;span class="k"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;producer&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;impl&lt;/span&gt; &lt;span class="n"&gt;Stream&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;Item&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;Result&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;JobCall&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;E&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nb"&gt;Send&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nb"&gt;Unpin&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="k"&gt;'static&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="k"&gt;Self&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="o"&gt;...&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The x402 wiring:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;config&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nn"&gt;X402Config&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;from_toml&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"x402.toml"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;?&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;job_pricing&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="cm"&gt;/* HashMap&amp;lt;(u64, u32), U256&amp;gt; from your pricing config */&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;gateway&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;producer&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nn"&gt;X402Gateway&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;job_pricing&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;?&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="nn"&gt;BlueprintRunner&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;builder&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;tangle_config&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;env&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;.router&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;router&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;.producer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;tangle_producer&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;   &lt;span class="c1"&gt;// on-chain job source&lt;/span&gt;
    &lt;span class="nf"&gt;.producer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;producer&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;          &lt;span class="c1"&gt;// x402 payment job source&lt;/span&gt;
    &lt;span class="nf"&gt;.background_service&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;gateway&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;// x402 HTTP server&lt;/span&gt;
    &lt;span class="nf"&gt;.run&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="k"&gt;.await&lt;/span&gt;&lt;span class="o"&gt;?&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Both producers are polled concurrently in the same event loop. The gateway runs in its own &lt;code&gt;tokio::spawn&lt;/code&gt;. The runner manages their lifetimes uniformly — when the runner shuts down, the &lt;code&gt;oneshot::Receiver&lt;/code&gt; from each background service's &lt;code&gt;start()&lt;/code&gt; signals completion or error.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deployment Decision Tree
&lt;/h2&gt;

&lt;p&gt;Putting the topology choices together:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;DockerLocal + no SecureBridge&lt;/strong&gt;: Development. No provisioning, no TLS tunnel, no external cloud credentials needed. Run &lt;code&gt;cargo run&lt;/code&gt; locally, test x402 payment flow with a local facilitator.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;VirtualMachine (AWS/GCP/DigitalOcean/Vultr/Hetzner) + SecureBridge mTLS&lt;/strong&gt;: Production single-operator. The manager SSHes into provisioned VMs, starts containers, registers &lt;code&gt;RemoteEndpoint&lt;/code&gt; in SecureBridge. The x402 gateway binds on the remote VM and is reachable via the provider's networking. SecureBridge handles manager-to-instance communication authenticated with client certs from &lt;code&gt;/etc/blueprint/certs/&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ManagedKubernetes (EKS/GKE/AKS) + CloudProvider load balancer&lt;/strong&gt;: Multi-job, horizontally scaled. EKS and AKS get &lt;code&gt;LoadBalancer&lt;/code&gt; service type. GKE gets &lt;code&gt;ClusterIP&lt;/code&gt; with an Ingress resource. The x402 gateway runs as a BackgroundService inside each pod — payment ingress is co-located with the job runner, not a separate service.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GenericKubernetes + tunnel&lt;/strong&gt;: Existing clusters, staging environments, or on-prem. The &lt;code&gt;context: Option&amp;lt;String&amp;gt;&lt;/code&gt; field lets you switch kubeconfig contexts without modifying the default. Paired with &lt;code&gt;CloudProvider::Generic&lt;/code&gt;, for which &lt;code&gt;requires_tunnel()&lt;/code&gt; returns true — you handle the private networking layer yourself.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;BareMetal&lt;/strong&gt;: machines you already operate. The &lt;code&gt;BareMetal(Vec&amp;lt;String&amp;gt;)&lt;/code&gt; variant takes a list of SSH-accessible hosts. &lt;code&gt;requires_tunnel()&lt;/code&gt; returns true here as well — there is no managed networking, so SecureBridge provides the auth layer within your existing network perimeter.&lt;/p&gt;

&lt;p&gt;The x402 gateway placement is the same in all of these: &lt;code&gt;.background_service(gateway)&lt;/code&gt; in the builder. The deployment target changes where the process runs and how it's networked. The payment ingress layer is identical code regardless of where the runner executes.&lt;/p&gt;

&lt;h2&gt;
  
  
  What This Buys You
&lt;/h2&gt;

&lt;p&gt;The BackgroundService model solves a real operational problem: there is no separately managed payment proxy to upgrade. When you roll a new version of your blueprint, the runner and its BackgroundServices restart together, so there is never version skew between the payment ingress and the business logic it fronts.&lt;/p&gt;

&lt;p&gt;It also means x402 payment ingress scales with your runner. Add more pods — you get more payment capacity. No centralized payment router to scale separately, no single point of failure in the payment path.&lt;/p&gt;

&lt;p&gt;The tradeoff is that the x402 HTTP endpoint is co-located with the job runner rather than edge-deployed. If you need CDN-level caching of price discovery responses or geographic distribution of payment ingress endpoints, you'd put a lightweight HTTP proxy in front. The price and auth-dry-run endpoints are stateless and safe to cache. The job execution endpoint requires the live runner — that one can't be edge-cached by definition.&lt;/p&gt;

&lt;p&gt;For most Blueprint service operators — particularly ones using Tangle for decentralized AVS infrastructure — the co-location model is the right default. The runner is already the first layer worth scaling. Adding another independent service to manage introduces complexity that only pays off at scales most early-stage services won't hit.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;S2-09 in the x402 Production Runway series. Previous: &lt;a href="https://dev.to/post/blueprint-x402-operator-economics-distribution"&gt;Operator Economics and Fee Distribution&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>x402</category>
      <category>blueprintsdk</category>
      <category>deployment</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>x402 and TEE Together: What Must Pass Before Promotion</title>
      <dc:creator>Tangle Network</dc:creator>
      <pubDate>Mon, 30 Mar 2026 21:48:16 +0000</pubDate>
      <link>https://dev.to/tangle_network/x402-and-tee-together-what-must-pass-before-promotion-c8p</link>
      <guid>https://dev.to/tangle_network/x402-and-tee-together-what-must-pass-before-promotion-c8p</guid>
      <description>&lt;h2&gt;
  
  
  The Default That Will Burn You in Production
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;on_chain_verification&lt;/code&gt; defaults to &lt;code&gt;false&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;That single field in &lt;code&gt;TeeKeyExchangeConfig&lt;/code&gt; is a production trap. With it disabled, a key exchange succeeds if the attestation report passes local structural checks — provider match, measurement comparison, debug mode flag — but nobody verifies that the attestation corresponds to the hash submitted on-chain at provision time. A compromised operator can substitute a different TEE's attestation during key exchange. The keys are exchanged. Secrets are injected. Nothing fails. The attestation mismatch is invisible.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="c1"&gt;// crates/tee/src/config.rs&lt;/span&gt;
&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="n"&gt;TeeKeyExchangeConfig&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="n"&gt;session_ttl_secs&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;u64&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="n"&gt;max_sessions&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;usize&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="cd"&gt;/// When enabled, key exchange performs dual verification:&lt;/span&gt;
    &lt;span class="cd"&gt;/// 1. Local evidence check: is this a real TEE with the right measurement?&lt;/span&gt;
    &lt;span class="cd"&gt;/// 2. On-chain hash comparison: does this attestation match the hash submitted&lt;/span&gt;
    &lt;span class="cd"&gt;///    at provision time (keccak256(attestationJsonBytes) stored in contract)?&lt;/span&gt;
    &lt;span class="cd"&gt;///&lt;/span&gt;
    &lt;span class="cd"&gt;/// This prevents a compromised operator from substituting a different TEE's&lt;/span&gt;
    &lt;span class="cd"&gt;/// attestation during key exchange.&lt;/span&gt;
    &lt;span class="nd"&gt;#[serde(default)]&lt;/span&gt;
    &lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="n"&gt;on_chain_verification&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;bool&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;impl&lt;/span&gt; &lt;span class="nb"&gt;Default&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;TeeKeyExchangeConfig&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;default&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="k"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="k"&gt;Self&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;Self&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;session_ttl_secs&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;300&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;max_sessions&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;64&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;on_chain_verification&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="c1"&gt;// must be set true in production&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is not a papercut. It is the difference between a TEE system that can be spoofed and one that cannot. Every other gate below depends on knowing which enclave you are actually talking to.&lt;/p&gt;

&lt;h2&gt;
  
  
  Two Gates, Not One
&lt;/h2&gt;

&lt;p&gt;Most Blueprint developers treat payment gating and TEE attestation as orthogonal concerns. Configure &lt;code&gt;X402Middleware&lt;/code&gt; for payments, wire up &lt;code&gt;TeeLayer&lt;/code&gt; for attestation, ship it. But in production, they compose into a single promotion condition. A service request that passes payment verification but comes from a debug-mode enclave is not production-safe. A TEE attestation that checks out locally but was never cross-referenced against the on-chain hash is not production-safe either.&lt;/p&gt;

&lt;p&gt;The on-chain promotion path makes this concrete. A service moves from &lt;code&gt;PendingRequest&lt;/code&gt; to &lt;code&gt;Active&lt;/code&gt; only when &lt;code&gt;approvalCount == operatorCount&lt;/code&gt; in &lt;code&gt;ServicesApprovals.sol&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// tnt-core/src/core/ServicesApprovals.sol
function approveService(uint64 requestId, uint8 stakingPercent) external whenNotPaused nonReentrant {
    // ...
    _requestApprovals[requestId][msg.sender] = true;
    req.approvalCount++;

    emit ServiceApproved(requestId, msg.sender);

    if (req.approvalCount == req.operatorCount) {
        _activateService(requestId);
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;_activateService&lt;/code&gt; is only called when every operator in the request has approved. A service with three operators named in the request does not activate after one approval. It does not activate after two. It activates exactly once: when the count matches the operator count. Payment and staking verification happen before an operator can even submit an approval — an operator that fails &lt;code&gt;_staking.isOperatorActive(msg.sender)&lt;/code&gt; is rejected at the gate, before the count advances.&lt;/p&gt;

&lt;p&gt;This is gate one. Gate two is inside the enclave.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Eight Conditions
&lt;/h2&gt;

&lt;p&gt;Before a Blueprint service with &lt;code&gt;ConfidentialityPolicy::TeeRequired&lt;/code&gt; should be trusted in production, eight conditions must hold simultaneously. Failing any one of them means the service should not be promoted.&lt;/p&gt;
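
&lt;p&gt;Taken together, the gates compose into a single promotion predicate. The sketch below is illustrative only — &lt;code&gt;PromotionEvidence&lt;/code&gt; and &lt;code&gt;safe_to_promote&lt;/code&gt; are hypothetical names, not part of the Blueprint SDK — but each field mirrors one of the eight conditions that follow.&lt;/p&gt;

```rust
// Hypothetical promotion checklist; each field mirrors one of the
// eight conditions. Not an API from the Blueprint codebase.
struct PromotionEvidence {
    debug_mode: bool,                  // 1
    on_chain_verification: bool,       // 2
    attestation_expired: bool,         // 3
    measurement_matches: bool,         // 4
    sealed_only: bool,                 // 5
    container_source_available: bool,  // 6
    approval_count: u32,               // 7
    operator_count: u32,               // 7
    staking_and_payment_verified: bool, // 8
}

fn safe_to_promote(e: &PromotionEvidence) -> bool {
    !e.debug_mode
        && e.on_chain_verification
        && !e.attestation_expired
        && e.measurement_matches
        && e.sealed_only
        && e.container_source_available
        && e.approval_count == e.operator_count
        && e.staking_and_payment_verified
}
```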

&lt;h3&gt;
  
  
  1. debug_mode must be false
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="c1"&gt;// crates/tee/src/attestation/claims.rs&lt;/span&gt;
&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="n"&gt;AttestationClaims&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="cd"&gt;/// Whether the TEE is running in debug mode.&lt;/span&gt;
    &lt;span class="cd"&gt;/// Debug mode enclaves should never be trusted in production.&lt;/span&gt;
    &lt;span class="nd"&gt;#[serde(default)]&lt;/span&gt;
    &lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="n"&gt;debug_mode&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;bool&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="c1"&gt;// ...&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The comment is the policy: debug mode enclaves should never be trusted in production. A Nitro enclave built with &lt;code&gt;--debug-mode&lt;/code&gt; produces attestation that any verifier can inspect and any tool can spoof. The hardware isolation properties do not hold. This field is populated from the raw attestation document — if the enclave set the debug flag, &lt;code&gt;debug_mode&lt;/code&gt; is true, and promotion should be rejected.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. on_chain_verification must be enabled
&lt;/h3&gt;

&lt;p&gt;Already covered above. Without it, local attestation verification is the only check, and local verification cannot detect attestation substitution by a compromised operator. This must be &lt;code&gt;true&lt;/code&gt; in &lt;code&gt;TeeKeyExchangeConfig&lt;/code&gt; for any production deployment.&lt;/p&gt;
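
&lt;p&gt;A minimal sketch of the hardened configuration, assuming the struct shown earlier (the &lt;code&gt;serde&lt;/code&gt; attribute is omitted so the sketch compiles standalone; &lt;code&gt;production_key_exchange_config&lt;/code&gt; is a hypothetical helper, not a crate API):&lt;/p&gt;

```rust
// Sketch: TeeKeyExchangeConfig as shown in the article, serde attribute
// omitted for a standalone example.
pub struct TeeKeyExchangeConfig {
    pub session_ttl_secs: u64,
    pub max_sessions: usize,
    pub on_chain_verification: bool,
}

impl Default for TeeKeyExchangeConfig {
    fn default() -> Self {
        Self {
            session_ttl_secs: 300,
            max_sessions: 64,
            on_chain_verification: false, // the production trap
        }
    }
}

// Production override: keep the defaults, flip the one field
// that must not stay false.
fn production_key_exchange_config() -> TeeKeyExchangeConfig {
    TeeKeyExchangeConfig {
        on_chain_verification: true,
        ..Default::default()
    }
}
```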

&lt;h3&gt;
  
  
  3. Attestation freshness: is_expired() must return false
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="c1"&gt;// crates/tee/src/attestation/report.rs&lt;/span&gt;
&lt;span class="k"&gt;impl&lt;/span&gt; &lt;span class="n"&gt;AttestationReport&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;is_expired&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="k"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;max_age_secs&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;u64&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;bool&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;now&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nn"&gt;std&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nn"&gt;time&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nn"&gt;SystemTime&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;now&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
            &lt;span class="nf"&gt;.duration_since&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nn"&gt;std&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nn"&gt;time&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;UNIX_EPOCH&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="nf"&gt;.map&lt;/span&gt;&lt;span class="p"&gt;(|&lt;/span&gt;&lt;span class="n"&gt;d&lt;/span&gt;&lt;span class="p"&gt;|&lt;/span&gt; &lt;span class="n"&gt;d&lt;/span&gt;&lt;span class="nf"&gt;.as_secs&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
            &lt;span class="nf"&gt;.unwrap_or&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="n"&gt;now&lt;/span&gt;&lt;span class="nf"&gt;.saturating_sub&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;self&lt;/span&gt;&lt;span class="py"&gt;.issued_at_unix&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;max_age_secs&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The default &lt;code&gt;max_attestation_age_secs&lt;/code&gt; in &lt;code&gt;TeeConfig&lt;/code&gt; is 3600 (one hour). An expired attestation means the report was generated more than an hour ago. The enclave may have been rebooted, reprovisioned with a different binary, or had its measurement changed. Any of those invalidate the on-chain hash comparison. &lt;code&gt;is_expired&lt;/code&gt; must return &lt;code&gt;false&lt;/code&gt; before key exchange is permitted.&lt;/p&gt;
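
&lt;p&gt;The arithmetic is easy to exercise standalone. This reimplements the expiry check outside the crate, using the 3600-second default: a report issued two hours ago fails, a fresh one passes.&lt;/p&gt;

```rust
use std::time::{SystemTime, UNIX_EPOCH};

// Standalone reimplementation of the expiry arithmetic shown above.
struct AttestationReport {
    issued_at_unix: u64,
}

impl AttestationReport {
    fn is_expired(&self, max_age_secs: u64) -> bool {
        let now = SystemTime::now()
            .duration_since(UNIX_EPOCH)
            .map(|d| d.as_secs())
            .unwrap_or(0);
        // saturating_sub guards against clock skew producing an underflow
        now.saturating_sub(self.issued_at_unix) > max_age_secs
    }
}
```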

&lt;h3&gt;
  
  
  4. Measurement pinning: the Measurement digest must match
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="c1"&gt;// crates/tee/src/attestation/report.rs&lt;/span&gt;
&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="n"&gt;Measurement&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="n"&gt;algorithm&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="n"&gt;digest&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="c1"&gt;// normalized to lowercase hex&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="n"&gt;AttestationReport&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="n"&gt;measurement&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Measurement&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="c1"&gt;// ...&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The attestation report carries a hardware measurement — PCR values on Nitro, MRTD on TDX, the equivalent on SEV-SNP. At provision time, this measurement hash is submitted on-chain as &lt;code&gt;keccak256(attestationJsonBytes)&lt;/code&gt;. During key exchange with &lt;code&gt;on_chain_verification: true&lt;/code&gt;, the verifier re-fetches that on-chain hash and checks it against the current report's evidence digest. If the enclave binary changed between provision and key exchange, the measurement changes, the evidence digest changes, and the on-chain comparison fails. Measurement pinning is the mechanism that makes TEE deployments immutable from the operator's perspective.&lt;/p&gt;
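
&lt;p&gt;The shape of the dual check can be sketched as follows. This is illustrative only: &lt;code&gt;verify_dual&lt;/code&gt; is a hypothetical function, and the stand-in hash below is not cryptographic — a real verifier computes &lt;code&gt;keccak256&lt;/code&gt; over the attestation JSON bytes and compares against the contract-stored value.&lt;/p&gt;

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in for keccak256(attestationJsonBytes). NOT cryptographic;
// used only to show the compare-against-chain pattern.
fn hash_hex(bytes: &[u8]) -> String {
    let mut h = DefaultHasher::new();
    bytes.hash(&mut h);
    format!("{:016x}", h.finish())
}

/// Gate 1: local measurement pin. Gate 2: on-chain hash comparison.
/// Both must pass; either alone is spoofable.
fn verify_dual(
    report_digest: &str,     // Measurement.digest, normalized hex
    pinned_digest: &str,     // expected measurement for this blueprint
    attestation_json: &[u8], // raw evidence bytes from key exchange
    on_chain_hash: &str,     // hash stored in the contract at provision time
) -> bool {
    let local_ok = report_digest.eq_ignore_ascii_case(pinned_digest);
    let chain_ok = hash_hex(attestation_json) == on_chain_hash;
    local_ok && chain_ok
}
```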

&lt;h3&gt;
  
  
  5. SecretInjectionPolicy must be SealedOnly
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="c1"&gt;// crates/tee/src/config.rs&lt;/span&gt;
&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;enum&lt;/span&gt; &lt;span class="n"&gt;SecretInjectionPolicy&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="cd"&gt;/// Only valid for non-TEE (container) deployments.&lt;/span&gt;
    &lt;span class="n"&gt;EnvOrSealed&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="cd"&gt;/// Container recreation is forbidden. Mandatory for all TEE deployments.&lt;/span&gt;
    &lt;span class="n"&gt;SealedOnly&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The builder enforces this at construction time — any &lt;code&gt;TeeConfig&lt;/code&gt; with a non-Disabled mode automatically gets &lt;code&gt;SealedOnly&lt;/code&gt;. But configs deserialized from JSON or TOML go through &lt;code&gt;validate()&lt;/code&gt; which applies the same check. The reason is not just security hygiene: env-var injection via container recreation invalidates attestation, breaks sealed secrets, and loses the on-chain deployment ID. If a deployed TEE service can have its secrets changed via environment variable, the entire attestation chain is moot.&lt;/p&gt;

&lt;h3&gt;
  
  
  6. Source filtering: ConfidentialityPolicy::TeeRequired restricts to container sources only
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="c1"&gt;// crates/manager/src/protocol/tangle/event_handler.rs&lt;/span&gt;
&lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;supports_tee&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;source&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;BlueprintSource&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;bool&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nd"&gt;matches!&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;source&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nn"&gt;BlueprintSource&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;Container&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;_&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;ordered_source_indices&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;sources&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;BlueprintSource&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="n"&gt;preferred_source&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;SourceType&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;confidentiality_policy&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;ConfidentialityPolicy&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;Vec&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nb"&gt;usize&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;require_tee&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nd"&gt;matches!&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;confidentiality_policy&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nn"&gt;ConfidentialityPolicy&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;TeeRequired&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="k"&gt;mut&lt;/span&gt; &lt;span class="n"&gt;indexed&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;Vec&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;usize&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;u8&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;sources&lt;/span&gt;
        &lt;span class="nf"&gt;.iter&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="nf"&gt;.enumerate&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="nf"&gt;.filter&lt;/span&gt;&lt;span class="p"&gt;(|(&lt;/span&gt;&lt;span class="n"&gt;_&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;source&lt;/span&gt;&lt;span class="p"&gt;)|&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="n"&gt;require_tee&lt;/span&gt; &lt;span class="p"&gt;||&lt;/span&gt; &lt;span class="nf"&gt;supports_tee&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;source&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
        &lt;span class="c1"&gt;// ...&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When &lt;code&gt;ConfidentialityPolicy::TeeRequired&lt;/code&gt;, the event handler filters available sources to containers only. GitHub binaries and remote URLs are excluded — they run as native processes outside any enclave boundary. If the blueprint exposes no container source, the manager returns &lt;code&gt;Error::TeeRuntimeUnavailable&lt;/code&gt; and the service does not start:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nd"&gt;matches!&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;metadata&lt;/span&gt;&lt;span class="py"&gt;.confidentiality_policy&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="nn"&gt;ConfidentialityPolicy&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;TeeRequired&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="n"&gt;ordered_source_idxs&lt;/span&gt;&lt;span class="nf"&gt;.is_empty&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;Err&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nn"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;TeeRuntimeUnavailable&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;reason&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"Blueprint requires TEE execution but exposes no container source"&lt;/span&gt;
            &lt;span class="nf"&gt;.to_string&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
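
&lt;p&gt;A simplified version of the filter makes the behavior easy to test in isolation. The enums here are stand-ins — the real &lt;code&gt;BlueprintSource&lt;/code&gt; variants carry payload data — but the filtering logic is the same: under &lt;code&gt;TeeRequired&lt;/code&gt;, only container sources survive.&lt;/p&gt;

```rust
// Simplified stand-ins for the manager's types.
#[derive(Debug, PartialEq, Clone, Copy)]
enum Source {
    Container,
    GithubBinary,
    RemoteUrl,
}

#[derive(Clone, Copy)]
enum ConfidentialityPolicy {
    Permissive,
    TeeRequired,
}

// Mirrors ordered_source_indices' filter step: under TeeRequired,
// anything that runs outside an enclave boundary is excluded.
fn eligible_sources(sources: &[Source], policy: ConfidentialityPolicy) -> Vec<Source> {
    let require_tee = matches!(policy, ConfidentialityPolicy::TeeRequired);
    sources
        .iter()
        .copied()
        .filter(|s| !require_tee || matches!(s, Source::Container))
        .collect()
}
```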



&lt;h3&gt;
  
  
  7. Operator approval count must equal operator count
&lt;/h3&gt;

&lt;p&gt;Covered in the contract section above. &lt;code&gt;req.approvalCount == req.operatorCount&lt;/code&gt; is the exact condition. All operators named in the service request must call &lt;code&gt;approveService&lt;/code&gt; before &lt;code&gt;_activateService&lt;/code&gt; fires. A partial approval set means some operators have not committed their stake and agreed to the service parameters. The service does not go Active until the set is complete.&lt;/p&gt;

&lt;h3&gt;
  
  
  8. Payment and staking verified
&lt;/h3&gt;

&lt;p&gt;Before any operator can submit an approval, &lt;code&gt;_staking.isOperatorActive(msg.sender)&lt;/code&gt; must return true. The staking contract enforces that the operator has an active stake on-chain. Without active stake, the approval call reverts with &lt;code&gt;OperatorNotActive&lt;/code&gt;. Payment verification on the client side happens before the initial service request is submitted — x402 payment headers are validated by the operator's gateway before the &lt;code&gt;JobCall&lt;/code&gt; enters the execution pipeline. By the time a service request reaches the approval phase, the payment commitment is already part of the signed request that operators are approving.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Operators Get Cryptographic Proof
&lt;/h2&gt;

&lt;p&gt;Once a service is Active and jobs are running, operators need to know which enclave ran each job. This is what &lt;code&gt;TeeLayer&lt;/code&gt; provides.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="c1"&gt;// crates/tee/src/middleware/tee_layer.rs&lt;/span&gt;
&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="n"&gt;TEE_ATTESTATION_DIGEST_KEY&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="nb"&gt;str&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"tee.attestation.digest"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="n"&gt;TEE_PROVIDER_KEY&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="nb"&gt;str&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"tee.provider"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="n"&gt;TEE_MEASUREMENT_KEY&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="nb"&gt;str&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"tee.measurement"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;TeeLayer&lt;/code&gt; is a Tower middleware layer. On every successful &lt;code&gt;JobResult::Ok&lt;/code&gt;, it injects three metadata keys into the result head: the SHA-256 digest of the attestation evidence, the provider name, and the measurement string. Operators receive this in the job result and can independently verify: re-hash the evidence bytes and compare against the stored &lt;code&gt;TEE_ATTESTATION_DIGEST_KEY&lt;/code&gt;. If the hash matches what was submitted at provision time, the operator has cryptographic proof that this specific job ran inside the specific enclave they approved.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="c1"&gt;// crates/tee/src/middleware/tee_layer.rs&lt;/span&gt;
&lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;poll&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;self&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;Pin&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&amp;amp;&lt;/span&gt;&lt;span class="k"&gt;mut&lt;/span&gt; &lt;span class="k"&gt;Self&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;cx&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="k"&gt;mut&lt;/span&gt; &lt;span class="n"&gt;Context&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nv"&gt;'_&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;Poll&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="k"&gt;Self&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;Output&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// ...&lt;/span&gt;
    &lt;span class="k"&gt;match&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nf"&gt;Some&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;mut&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="nn"&gt;JobResult&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nb"&gt;Ok&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;head&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;..&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="k"&gt;mut&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nn"&gt;Poll&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;Ready&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;Ok&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;Some&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;)));&lt;/span&gt;
            &lt;span class="p"&gt;};&lt;/span&gt;

            &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="nf"&gt;Some&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;digest&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;this&lt;/span&gt;&lt;span class="py"&gt;.attestation_digest&lt;/span&gt;&lt;span class="nf"&gt;.take&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="n"&gt;head&lt;/span&gt;&lt;span class="py"&gt;.metadata&lt;/span&gt;&lt;span class="nf"&gt;.insert&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;TEE_ATTESTATION_DIGEST_KEY&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;digest&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
            &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="nf"&gt;Some&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;provider&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;this&lt;/span&gt;&lt;span class="py"&gt;.provider&lt;/span&gt;&lt;span class="nf"&gt;.take&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="n"&gt;head&lt;/span&gt;&lt;span class="py"&gt;.metadata&lt;/span&gt;&lt;span class="nf"&gt;.insert&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;TEE_PROVIDER_KEY&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;provider&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
            &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="nf"&gt;Some&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;measurement&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;this&lt;/span&gt;&lt;span class="py"&gt;.measurement&lt;/span&gt;&lt;span class="nf"&gt;.take&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="n"&gt;head&lt;/span&gt;&lt;span class="py"&gt;.metadata&lt;/span&gt;&lt;span class="nf"&gt;.insert&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;TEE_MEASUREMENT_KEY&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;measurement&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;

            &lt;span class="nn"&gt;Poll&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;Ready&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;Ok&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;Some&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;)))&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="nb"&gt;None&lt;/span&gt; &lt;span class="k"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nn"&gt;Poll&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;Ready&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;Ok&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;None&lt;/span&gt;&lt;span class="p"&gt;)),&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The attestation is injected from a shared &lt;code&gt;Arc&amp;lt;Mutex&amp;lt;Option&amp;lt;AttestationReport&amp;gt;&amp;gt;&amp;gt;&lt;/code&gt;. The layer uses &lt;code&gt;try_lock&lt;/code&gt; on the hot path to avoid blocking the service call. If the lock is contended, the keys are omitted from that result with a warning — a tradeoff that keeps job execution from stalling on attestation state reads.&lt;/p&gt;
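
&lt;p&gt;The pattern is reproducible with std primitives alone. This sketch shows the shape of the tradeoff — skip the enrichment rather than block the job — not the &lt;code&gt;TeeLayer&lt;/code&gt; internals; the function name and key are illustrative.&lt;/p&gt;

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

// Sketch of the try_lock pattern: enrich result metadata when the
// attestation state is readable, omit the keys when it is contended.
fn attach_attestation_digest(
    metadata: &mut HashMap<&'static str, String>,
    attestation: &Arc<Mutex<Option<String>>>,
) -> bool {
    match attestation.try_lock() {
        Ok(guard) => {
            if let Some(digest) = guard.as_ref() {
                metadata.insert("tee.attestation.digest", digest.clone());
            }
            true
        }
        // Lock contended: skip for this result instead of stalling the job.
        Err(_) => false,
    }
}
```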

&lt;h2&gt;
  
  
  The Key Exchange Session Lifecycle
&lt;/h2&gt;

&lt;p&gt;Sealed secrets are how configuration, API keys, and model weights enter a TEE without the operator ever seeing them. The flow is two-phase: first, the TEE generates an ephemeral X25519 keypair and attests the public key; second, the client verifies the attestation and encrypts secrets to that key.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="c1"&gt;// crates/tee/src/exchange/protocol.rs&lt;/span&gt;
&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="n"&gt;KeyExchangeSession&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="n"&gt;session_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="n"&gt;public_key&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;Vec&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nb"&gt;u8&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;private_key&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;Vec&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nb"&gt;u8&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="c1"&gt;// zeroed on drop&lt;/span&gt;
    &lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="n"&gt;created_at&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;u64&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="n"&gt;ttl_secs&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;u64&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;impl&lt;/span&gt; &lt;span class="nb"&gt;Drop&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;KeyExchangeSession&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;drop&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="k"&gt;mut&lt;/span&gt; &lt;span class="k"&gt;self&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;self&lt;/span&gt;&lt;span class="py"&gt;.private_key&lt;/span&gt;&lt;span class="nf"&gt;.zeroize&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Private keys are zeroed on drop via &lt;code&gt;zeroize&lt;/code&gt;. The default session TTL is 300 seconds. Sessions are one-time use — &lt;code&gt;consume_session&lt;/code&gt; atomically removes the session from the map on consumption, so the same session ID cannot be replayed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="c1"&gt;// crates/tee/src/exchange/service.rs&lt;/span&gt;
&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;consume_session&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="k"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;session_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;Result&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;KeyExchangeSession&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;TeeError&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="k"&gt;mut&lt;/span&gt; &lt;span class="n"&gt;sessions&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;self&lt;/span&gt;&lt;span class="py"&gt;.sessions&lt;/span&gt;&lt;span class="nf"&gt;.lock&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="k"&gt;.await&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;session&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;sessions&lt;/span&gt;
        &lt;span class="nf"&gt;.get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;session_id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="nf"&gt;.ok_or_else&lt;/span&gt;&lt;span class="p"&gt;(||&lt;/span&gt; &lt;span class="nn"&gt;TeeError&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;KeyExchange&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nd"&gt;format!&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"session not found: {session_id}"&lt;/span&gt;&lt;span class="p"&gt;)))&lt;/span&gt;&lt;span class="o"&gt;?&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="nf"&gt;.is_expired&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;sessions&lt;/span&gt;&lt;span class="nf"&gt;.remove&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;session_id&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;Err&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nn"&gt;TeeError&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;KeyExchange&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nd"&gt;format!&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"session expired: {session_id}"&lt;/span&gt;&lt;span class="p"&gt;)));&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="c1"&gt;// Session is valid — remove and return it (one-time use)&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;session&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;sessions&lt;/span&gt;
        &lt;span class="nf"&gt;.remove&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;session_id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="nf"&gt;.expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"session exists; checked above"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="nf"&gt;Ok&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A background cleanup task runs every &lt;code&gt;max(ttl_secs, 30)&lt;/code&gt; seconds to evict expired sessions that were never consumed. The &lt;code&gt;TeeAuthService&lt;/code&gt; holds an &lt;code&gt;AbortHandle&lt;/code&gt; to this task, which is cancelled on drop — no orphaned background tasks if the service is torn down.&lt;/p&gt;

&lt;p&gt;The maximum number of concurrent sessions defaults to 64. When the limit is reached, &lt;code&gt;create_session&lt;/code&gt; evicts expired sessions before rejecting with a capacity error, so stale entries can't exhaust the session pool.&lt;/p&gt;
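The evict-then-reject policy is easy to model in std-only Rust. This is a sketch of the documented behavior, not the actual `TeeAuthService` code; the names and the `Instant`-based clock are illustrative:

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

struct Session {
    created_at: Instant,
    ttl: Duration,
}

impl Session {
    fn is_expired(&self, now: Instant) -> bool {
        now.duration_since(self.created_at) >= self.ttl
    }
}

struct SessionStore {
    sessions: HashMap<String, Session>,
    max_sessions: usize,
}

impl SessionStore {
    fn new(max_sessions: usize) -> Self {
        Self { sessions: HashMap::new(), max_sessions }
    }

    /// Mirrors the documented create_session behavior: evict expired
    /// entries first, then reject only if still at capacity.
    fn create(&mut self, id: &str, ttl: Duration, now: Instant) -> Result<(), String> {
        if self.sessions.len() >= self.max_sessions {
            self.sessions.retain(|_, s| !s.is_expired(now));
        }
        if self.sessions.len() >= self.max_sessions {
            return Err("session capacity reached".into());
        }
        self.sessions.insert(id.to_string(), Session { created_at: now, ttl });
        Ok(())
    }

    /// One-time use: the session is removed on consumption, so the
    /// same ID cannot be replayed.
    fn consume(&mut self, id: &str, now: Instant) -> Result<Session, String> {
        let s = self.sessions.remove(id).ok_or("session not found".to_string())?;
        if s.is_expired(now) {
            return Err("session expired".into());
        }
        Ok(s)
    }
}
```

The real service also zeroizes key material and runs the periodic cleanup task; this sketch only captures the capacity and one-time-use logic.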

&lt;h2&gt;
  
  
  The Production Checklist
&lt;/h2&gt;

&lt;p&gt;Putting it together: a Blueprint service with &lt;code&gt;ConfidentialityPolicy::TeeRequired&lt;/code&gt; is production-ready when:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Condition&lt;/th&gt;
&lt;th&gt;Where enforced&lt;/th&gt;
&lt;th&gt;Default&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;debug_mode: false&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;AttestationClaims&lt;/code&gt; from hardware&lt;/td&gt;
&lt;td&gt;false&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;on_chain_verification: true&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;TeeKeyExchangeConfig&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;false — must override&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Attestation not expired&lt;/td&gt;
&lt;td&gt;&lt;code&gt;AttestationReport::is_expired()&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;3600s max age&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Measurement matches on-chain&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;on_chain_verification&lt;/code&gt; flow&lt;/td&gt;
&lt;td&gt;disabled by default&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;SecretInjectionPolicy::SealedOnly&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;TeeConfig&lt;/code&gt; builder&lt;/td&gt;
&lt;td&gt;auto-enforced when TEE enabled&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Container source available&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;ordered_source_indices&lt;/code&gt; filter&lt;/td&gt;
&lt;td&gt;fails hard if missing&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;All operators approved&lt;/td&gt;
&lt;td&gt;&lt;code&gt;approvalCount == operatorCount&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;contract-enforced&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Operator staking active&lt;/td&gt;
&lt;td&gt;&lt;code&gt;_staking.isOperatorActive&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;contract-enforced&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Six of eight are enforced automatically by the type system or contracts. The two that require explicit action: &lt;code&gt;on_chain_verification&lt;/code&gt; must be set to &lt;code&gt;true&lt;/code&gt;, and the blueprint must expose a container source when TEE is required. Everything else fails closed.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Previous in the series: &lt;a href="https://dev.to/post/on-chain-rfq-job-quotes-verification-slashing"&gt;RFQ, Job Quotes, and On-Chain Verification&lt;/a&gt;. Next: TBD.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>x402</category>
      <category>tee</category>
      <category>security</category>
      <category>blueprint</category>
    </item>
    <item>
      <title>Operator Economics After Payment: Distribution, Exposure Weighting, and Fee Routing</title>
      <dc:creator>Tangle Network</dc:creator>
      <pubDate>Sat, 28 Mar 2026 18:43:42 +0000</pubDate>
      <link>https://dev.to/tangle_network/operator-economics-after-payment-distribution-exposure-weighting-and-fee-routing-4cd6</link>
      <guid>https://dev.to/tangle_network/operator-economics-after-payment-distribution-exposure-weighting-and-fee-routing-4cd6</guid>
      <description>&lt;p&gt;When a client pays an x402 Blueprint job, the USDC moves to the operator's &lt;code&gt;pay_to&lt;/code&gt; address on the settlement chain. That's the end of the client's story. It's the beginning of the operator's.&lt;/p&gt;

&lt;p&gt;The payment lands as a cross-chain deposit. Before operators can claim anything, the Tangle protocol has to decide who gets what. An operator running a high-stakes Blueprint with a large staking pool behind it gets a different cut than one running the same Blueprint with minimal backing. The distribution isn't arbitrary — it's driven by exposure, USD-weighted delegation scores, and blueprint selection mode. This post walks through that system.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where payments enter the distribution layer
&lt;/h2&gt;

&lt;p&gt;After x402 settlement, the Tangle contract calls &lt;code&gt;distributeServiceFee&lt;/code&gt; on &lt;code&gt;ServiceFeeDistributor&lt;/code&gt;. This is the on-chain entry point for all service-fee revenue.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function distributeServiceFee(
    uint64 serviceId,
    uint64 blueprintId,
    address operator,
    address paymentToken,
    uint256 amount
)
    external
    payable
    override
    nonReentrant
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Only the Tangle contract can call this — &lt;code&gt;msg.sender != tangle&lt;/code&gt; reverts. The function handles both native ETH (&lt;code&gt;paymentToken == address(0)&lt;/code&gt;) and ERC-20 tokens. For ERC-20, it expects no &lt;code&gt;msg.value&lt;/code&gt;; for native, it requires &lt;code&gt;msg.value == amount&lt;/code&gt;. This prevents a class of accounting bugs where &lt;code&gt;msg.value&lt;/code&gt; doesn't match what the caller claimed to send.&lt;/p&gt;

&lt;p&gt;Source: &lt;a href="https://github.com/tangle-network/tnt-core/blob/main/src/rewards/ServiceFeeDistributor.sol" rel="noopener noreferrer"&gt;&lt;code&gt;tnt-core/src/rewards/ServiceFeeDistributor.sol&lt;/code&gt;&lt;/a&gt;&lt;/p&gt;
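The native-vs-ERC-20 consistency rule is small enough to model directly. A sketch of the documented check, not the contract source; `validate_payment` and its types are illustrative:

```rust
/// Models the documented distributeServiceFee validation: native payments
/// (address(0) on-chain, `is_native` here) must carry msg.value == amount,
/// while ERC-20 payments must carry no msg.value at all.
fn validate_payment(is_native: bool, msg_value: u128, amount: u128) -> Result<(), &'static str> {
    if is_native {
        if msg_value != amount {
            return Err("native payment: msg.value must equal amount");
        }
    } else if msg_value != 0 {
        return Err("erc20 payment: unexpected msg.value");
    }
    Ok(())
}
```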

&lt;h2&gt;
  
  
  Streaming vs immediate distribution
&lt;/h2&gt;

&lt;p&gt;The first branch after validation isn't about who gets the fee — it's about &lt;em&gt;when&lt;/em&gt; they get it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Types.Service memory svc = ITangleSecurityView(tangle).getService(serviceId);
if (svc.ttl &amp;gt; 0 &amp;amp;&amp;amp; address(streamingManager) != address(0)) {
    uint64 startTime = svc.createdAt;
    uint64 endTime = svc.createdAt + svc.ttl;
    if (block.timestamp &amp;gt; startTime) {
        startTime = uint64(block.timestamp);
    }
    if (endTime &amp;gt; startTime) {
        IERC20(paymentToken).safeTransfer(address(streamingManager), amount);
        streamingManager.createStream{ value: paymentToken == address(0) ? amount : 0 }(
            serviceId, blueprintId, operator, paymentToken, amount, startTime, endTime
        );
        return;
    }
}
_distributeImmediate(serviceId, blueprintId, operator, paymentToken, amount);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the service has a TTL and &lt;code&gt;StreamingPaymentManager&lt;/code&gt; is configured, the entire payment transfers to the streaming manager and gets distributed pro-rata over the service lifetime. If a delegation changes mid-service, the distributor drips the stream first — paying out at the current score ratios — then applies the delegation change. This prevents retroactive gaming: you can't bond more stake after a job runs and claim a larger slice of revenue that was already earned at lower backing levels.&lt;/p&gt;

&lt;p&gt;For services without TTL (or if streaming isn't configured), distribution is immediate via &lt;code&gt;_distributeImmediate&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  How drip math works
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;StreamingPaymentManager&lt;/code&gt; holds the full fee amount and releases chunks over time:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;durationSeconds = currentTime - p.lastDripTime;
uint256 duration = p.endTime - p.startTime;
uint256 remaining = p.totalAmount - p.distributed;
uint256 chunk = (p.totalAmount * durationSeconds) / duration;
if (chunk &amp;gt; remaining) {
    chunk = remaining;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each drip is proportional to elapsed time. At 50% of the service lifetime, 50% of the payment is dripable. The dripped chunk gets forwarded back to &lt;code&gt;ServiceFeeDistributor&lt;/code&gt; for immediate distribution at current scores.&lt;/p&gt;

&lt;p&gt;Source: &lt;a href="https://github.com/tangle-network/tnt-core/blob/main/src/rewards/StreamingPaymentManager.sol" rel="noopener noreferrer"&gt;&lt;code&gt;tnt-core/src/rewards/StreamingPaymentManager.sol&lt;/code&gt;&lt;/a&gt;&lt;/p&gt;
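The drip formula ports to a few lines of integer math. A sketch using the field names from the excerpt above:

```rust
/// Proportional drip: the dripable chunk grows linearly with elapsed time
/// since the last drip, clamped to whatever hasn't been distributed yet.
fn drip_chunk(
    total_amount: u128,
    distributed: u128,
    start_time: u64,
    end_time: u64,
    last_drip_time: u64,
    current_time: u64,
) -> u128 {
    let duration = (end_time - start_time) as u128;
    let elapsed = (current_time - last_drip_time) as u128;
    let remaining = total_amount - distributed;
    let chunk = total_amount * elapsed / duration;
    chunk.min(remaining)
}
```

At the halfway mark of an untouched stream, exactly half the payment is dripable; once most of it has been distributed, the clamp kicks in.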

&lt;h2&gt;
  
  
  USD-weighted scoring: why your asset mix matters
&lt;/h2&gt;

&lt;p&gt;Once funds reach &lt;code&gt;_distributeImmediate&lt;/code&gt;, the contract needs to split the payment across delegators. The split is proportional to &lt;em&gt;USD-weighted score&lt;/em&gt;, not raw token amounts. This is the core economic mechanism.&lt;/p&gt;

&lt;p&gt;Each delegator's score for an operator is tracked in two modes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;All mode&lt;/strong&gt; (&lt;code&gt;selectionMode = All&lt;/code&gt;): the delegator's stake covers all blueprints the operator runs. Score accumulates in &lt;code&gt;totalAllScore[operator][assetHash]&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fixed mode&lt;/strong&gt; (&lt;code&gt;selectionMode = Fixed&lt;/code&gt;): the delegator's stake is allocated to specific blueprint IDs. Score accumulates in &lt;code&gt;totalFixedScore[operator][blueprintId][assetHash]&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Score is computed at delegation time:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;scoreDelta = (amount * lockMultiplierBps) / BPS_DENOMINATOR;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The lock multiplier rewards longer-duration commitments with more score per unit of capital. Score decreases on withdrawal in proportion to the principal reduction, preserving the effective rate.&lt;/p&gt;

&lt;p&gt;The fee then splits between All-mode and Fixed-mode pools proportionally:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;uint256 allAmount = (amount * allUsdTotal) / totalUsd;
uint256 fixedAmount = amount - allAmount;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here &lt;code&gt;allUsdTotal&lt;/code&gt; and &lt;code&gt;fixedUsdTotal&lt;/code&gt; are the USD values of effective (post-slash) scores for each pool, and &lt;code&gt;totalUsd&lt;/code&gt; is their sum. Within each pool, fees are distributed using an &lt;strong&gt;accumulator-per-score&lt;/strong&gt; pattern:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;accAllPerScore[operator][assetHash][paymentToken] +=
    (share * PRECISION) / allScore;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is O(1) per asset per payment token — no loop over delegators. Delegators claim by computing &lt;code&gt;(score * accPerScore) / PRECISION - debtAtLastSync&lt;/code&gt; (the accumulator is scaled by &lt;code&gt;PRECISION&lt;/code&gt; to limit rounding loss). A delegator who bonded after a fee was distributed has their debt initialized at the current accumulator level, so they only earn on fees distributed after they joined.&lt;/p&gt;
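The accumulator-per-score bookkeeping is the same O(1) pattern used by MasterChef-style reward contracts, and it's worth seeing end to end. A std-only Rust sketch with a single operator, asset, and payment token (names and the `PRECISION` value are illustrative):

```rust
const PRECISION: u128 = 1_000_000_000_000; // 1e12 scaling, illustrative

struct Pool {
    acc_per_score: u128, // fee-per-score accumulator, scaled by PRECISION
    total_score: u128,
}

struct Delegator {
    score: u128,
    debt: u128, // score * acc_per_score at join time
}

impl Pool {
    /// O(1) per fee: bump the accumulator, no loop over delegators.
    fn distribute(&mut self, fee: u128) {
        if self.total_score > 0 {
            self.acc_per_score += fee * PRECISION / self.total_score;
        }
    }

    /// New delegators start with debt at the current accumulator level,
    /// so they earn nothing from fees distributed before they joined.
    fn join(&mut self, score: u128) -> Delegator {
        self.total_score += score;
        Delegator { score, debt: score * self.acc_per_score }
    }
}

impl Delegator {
    fn pending(&self, pool: &Pool) -> u128 {
        (self.score * pool.acc_per_score - self.debt) / PRECISION
    }
}
```

A delegator joining after the first fee lands starts at zero pending, then earns its proportional slice of every subsequent fee.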

&lt;h2&gt;
  
  
  Exposure weighting: committed capital drives reward share
&lt;/h2&gt;

&lt;p&gt;For services with &lt;code&gt;AssetSecurityRequirements&lt;/code&gt;, the USD computation adds an exposure layer. Each operator declares how much of their delegation they're willing to have at risk for a service — &lt;code&gt;commitmentBps&lt;/code&gt; — and the exposed amount drives both slashing and fee allocation.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;uint256 allExposed = (allEffective * commitmentBps) / BPS_DENOMINATOR;
allUsdTotal += _toUsd(a, allExposed);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Higher exposure means higher risk (you can be slashed on more of your stake) and higher reward (you earn a larger share of the service fee). Operators who run low-commitment configurations take less slashing risk but earn proportionally less revenue. The &lt;code&gt;ExposureCalculator&lt;/code&gt; library encodes this directly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/// @notice Calculate reward share based on exposure percentage
/// @dev Higher exposure = higher risk = higher reward share
function calculateRewardShare(
    uint256 delegatedAmount,
    uint16 exposureBps,
    uint256 totalReward,
    uint256 totalExposedValue
) internal pure returns (uint256 rewardShare) {
    if (totalExposedValue == 0) return 0;
    uint256 exposedAmount = calculateExposedAmount(delegatedAmount, exposureBps);
    return (totalReward * exposedAmount) / totalExposedValue;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;exposureBps&lt;/code&gt; is bounded between &lt;code&gt;MIN_EXPOSURE_BPS = 1&lt;/code&gt; (0.01%) and &lt;code&gt;MAX_EXPOSURE_BPS = 10_000&lt;/code&gt; (100%), defined in &lt;code&gt;ExposureTypes&lt;/code&gt;. An operator at 5000 bps (50% exposure) earns half of what the same operator at 10000 bps (full exposure) would earn, all else equal.&lt;/p&gt;

&lt;p&gt;The USD weighting uses a price oracle to normalize across assets. An operator backed by ETH and one backed by stablecoins compete on USD-equivalent exposed value, not raw token count.&lt;/p&gt;

&lt;p&gt;Source: &lt;a href="https://github.com/tangle-network/tnt-core/blob/main/src/exposure/ExposureCalculator.sol" rel="noopener noreferrer"&gt;&lt;code&gt;tnt-core/src/exposure/ExposureCalculator.sol&lt;/code&gt;&lt;/a&gt;&lt;/p&gt;
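`calculateRewardShare` translates line for line to Rust, which makes the 50%-exposure claim above easy to check. A direct port of the library function (the snake_case names are mine):

```rust
const BPS_DENOMINATOR: u128 = 10_000;

fn calculate_exposed_amount(delegated: u128, exposure_bps: u16) -> u128 {
    delegated * exposure_bps as u128 / BPS_DENOMINATOR
}

/// Higher exposure = more stake at risk = a larger slice of the fee.
fn calculate_reward_share(
    delegated: u128,
    exposure_bps: u16,
    total_reward: u128,
    total_exposed_value: u128,
) -> u128 {
    if total_exposed_value == 0 {
        return 0;
    }
    total_reward * calculate_exposed_amount(delegated, exposure_bps) / total_exposed_value
}
```

With two operators each backing 1000 units and a 300-unit fee, the operator at 5000 bps earns exactly half of what the one at 10000 bps earns.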

&lt;h2&gt;
  
  
  TNT score boost
&lt;/h2&gt;

&lt;p&gt;The distributor supports a configurable TNT token score rate:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/// @dev If tntScoreRate = 1e18, then 1 TNT = $1 score regardless of actual market price.
/// If tntScoreRate = 0, TNT uses oracle price like other tokens.
uint256 public tntScoreRate;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When &lt;code&gt;tntScoreRate&lt;/code&gt; is set above the oracle price of TNT, delegators backing operators with TNT earn amplified score per dollar of capital — a direct incentive to hold and stake the native token. At &lt;code&gt;tntScoreRate = 1e18&lt;/code&gt; with TNT at $0.10 market price, TNT earns 10× the fee share of its market value relative to other tokens.&lt;/p&gt;
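The override amounts to one branch. A sketch of the documented rule using the 18-decimal fixed-point convention from the comment above; the function name and types are illustrative:

```rust
const WAD: u128 = 1_000_000_000_000_000_000; // 1e18 fixed-point

/// USD value of a TNT score amount. A nonzero tnt_score_rate overrides
/// the oracle; zero falls back to the oracle price like any other token.
fn tnt_usd_value(score: u128, tnt_score_rate: u128, oracle_price: u128) -> u128 {
    let rate = if tnt_score_rate > 0 { tnt_score_rate } else { oracle_price };
    score * rate / WAD
}
```

With `tnt_score_rate = 1e18` and a $0.10 oracle price, 100 TNT of score is valued at $100 instead of $10, reproducing the 10× amplification described above.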

&lt;h2&gt;
  
  
  Treasury fallback
&lt;/h2&gt;

&lt;p&gt;If an operator has zero stakers or zero USD-weighted score when a fee arrives, the fee doesn't sit in the contract — it routes to the protocol treasury:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;if (totalUsd == 0) {
    _transferPayment(ITangleSecurityView(tangle).treasury(), paymentToken, amount);
    return;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Same behavior if the operator has no asset hashes tracked at all. Fees are never stranded in the distributor.&lt;/p&gt;

&lt;h2&gt;
  
  
  The payment metadata trail
&lt;/h2&gt;

&lt;p&gt;On the Blueprint SDK side, &lt;code&gt;X402Producer&lt;/code&gt; propagates the payment context into the job metadata so that operators can trace which payment triggered which execution:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="n"&gt;X402_QUOTE_DIGEST_KEY&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="nb"&gt;str&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"X-X402-QUOTE-DIGEST"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="n"&gt;X402_PAYMENT_NETWORK_KEY&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="nb"&gt;str&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"X-X402-PAYMENT-NETWORK"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="n"&gt;X402_PAYMENT_TOKEN_KEY&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="nb"&gt;str&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"X-X402-PAYMENT-TOKEN"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="n"&gt;X402_ORIGIN_KEY&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="nb"&gt;str&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"X-X402-ORIGIN"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="n"&gt;X402_SERVICE_ID_KEY&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="nb"&gt;str&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"X-TANGLE-SERVICE-ID"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="n"&gt;X402_CALL_ID_KEY&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="nb"&gt;str&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"X-TANGLE-CALL-ID"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Every job that came from an x402 payment carries its quote digest, payment network (CAIP-2), token, service ID, and a synthetic call ID for tracking. This metadata is consumed by the runner, not by the distribution layer — but it's the operator's audit trail linking an HTTP payment to a specific on-chain service invocation.&lt;/p&gt;

&lt;p&gt;Source: &lt;a href="https://github.com/tangle-network/blueprint/blob/main/crates/x402/src/producer.rs" rel="noopener noreferrer"&gt;&lt;code&gt;blueprint/crates/x402/src/producer.rs&lt;/code&gt;&lt;/a&gt;&lt;/p&gt;
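On the consuming side, a job handler can reconstruct the audit trail from its metadata. A sketch assuming the metadata arrives as a plain string map: the key constants match `producer.rs`, but `payment_trail` and `PaymentTrail` are hypothetical helpers, not SDK API:

```rust
use std::collections::HashMap;

// Key constants as defined in blueprint/crates/x402/src/producer.rs.
pub const X402_QUOTE_DIGEST_KEY: &str = "X-X402-QUOTE-DIGEST";
pub const X402_PAYMENT_NETWORK_KEY: &str = "X-X402-PAYMENT-NETWORK";
pub const X402_SERVICE_ID_KEY: &str = "X-TANGLE-SERVICE-ID";

/// Hypothetical audit record linking an HTTP payment to a job execution.
struct PaymentTrail<'a> {
    quote_digest: &'a str,
    network: &'a str, // CAIP-2 identifier
    service_id: &'a str,
}

fn payment_trail(metadata: &HashMap<String, String>) -> Option<PaymentTrail<'_>> {
    Some(PaymentTrail {
        quote_digest: metadata.get(X402_QUOTE_DIGEST_KEY)?.as_str(),
        network: metadata.get(X402_PAYMENT_NETWORK_KEY)?.as_str(),
        service_id: metadata.get(X402_SERVICE_ID_KEY)?.as_str(),
    })
}
```

Jobs that didn't arrive through x402 simply have no trail, which is itself useful signal when reconciling payments against executions.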

&lt;h2&gt;
  
  
  What's wired vs what's planned
&lt;/h2&gt;

&lt;p&gt;Two items in the distribution system are architecturally present but not fully live:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Security-adjusted pricing.&lt;/strong&gt; &lt;code&gt;calculate_security_rate_adjustment&lt;/code&gt; in the pricing engine returns &lt;code&gt;Decimal::ONE&lt;/code&gt; unconditionally. The hook exists and is wired through &lt;code&gt;calculate_resource_price&lt;/code&gt;, but the implementation is a stub (&lt;code&gt;// TODO: Implement security requirement adjustments&lt;/code&gt;). Premium pricing for high-exposure operators isn't in production yet.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Slashing impact on distribution.&lt;/strong&gt; Slash factors &lt;em&gt;are&lt;/em&gt; live in &lt;code&gt;ServiceFeeDistributor&lt;/code&gt;. &lt;code&gt;_allSlashFactor&lt;/code&gt; and &lt;code&gt;_fixedSlashFactor&lt;/code&gt; are applied via &lt;code&gt;_applySlashFactor&lt;/code&gt; before USD scoring:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;uint256 allEffective = _applySlashFactor(allScore, _getAllSlashFactor(operator, assetHash));
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The slashing mechanism in &lt;code&gt;ServiceFeeDistributor&lt;/code&gt; is operational. What feeds those slash factors — the slashing protocol itself — depends on the staking and governance layer, which is out of scope here.&lt;/p&gt;

&lt;h2&gt;
  
  
  The economic signal in the design
&lt;/h2&gt;

&lt;p&gt;The fee distribution design encodes a specific opinion about what operators should optimize for: committed, long-duration, diversified backing. Operators with high USD-weighted exposure earn more per fee unit. Operators with mixed asset backing benefit from oracle normalization. Operators who attract All-mode delegators (stake that covers the full service roster) earn from a broader pool than operators whose delegators make narrow Fixed-mode bets.&lt;/p&gt;

&lt;p&gt;The streaming payment design punishes late entry. Delegation after a service starts doesn't capture retroactive fees — drips happen at the score ratios that were in place at drip time. This is intentional: an operator who can attract capital before a service launches is more valuable to the protocol than one who can attract it after the fact.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Build with Tangle&lt;/strong&gt; | &lt;a href="https://tangle.tools" rel="noopener noreferrer"&gt;Website&lt;/a&gt; | &lt;a href="https://github.com/tangle-network" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; | &lt;a href="https://discord.gg/tangle" rel="noopener noreferrer"&gt;Discord&lt;/a&gt; | &lt;a href="https://t.me/tanglenet" rel="noopener noreferrer"&gt;Telegram&lt;/a&gt; | &lt;a href="https://x.com/taborgroup" rel="noopener noreferrer"&gt;X/Twitter&lt;/a&gt;&lt;/p&gt;

</description>
      <category>blueprint</category>
      <category>x402</category>
      <category>operators</category>
      <category>staking</category>
    </item>
    <item>
      <title>The Three Numbers That Keep Your Blueprint Online</title>
      <dc:creator>Tangle Network</dc:creator>
      <pubDate>Thu, 26 Mar 2026 07:35:26 +0000</pubDate>
      <link>https://dev.to/tangle_network/the-three-numbers-that-keep-your-blueprint-online-346m</link>
      <guid>https://dev.to/tangle_network/the-three-numbers-that-keep-your-blueprint-online-346m</guid>
      <description>&lt;h2&gt;
  
  
  Operator Monitoring and Health Checks for Tangle Blueprint Nodes
&lt;/h2&gt;

&lt;p&gt;Tangle is a programmable infrastructure network where operators run modular services called Blueprints and get selected for paid jobs based on on-chain health signals. Three signals govern whether your operator stays in rotation: on-chain heartbeats submitted through the &lt;code&gt;OperatorStatusRegistry&lt;/code&gt;, quote freshness (5-minute default lifetime, 1-hour protocol maximum), and capacity utilization (&lt;code&gt;activeLoad&lt;/code&gt; vs. &lt;code&gt;maxCapacity&lt;/code&gt;). The orchestrator's health tracker maintains a rolling window of the last 10 job outcomes, requiring a minimum 30% success rate to stay selectable, with a 60-second cooldown after each failure. Fall behind on any one of these and the orchestrator silently skips you. Jobs stop arriving with no error message and no notification.&lt;/p&gt;

&lt;p&gt;Getting a Blueprint operator online is the easy part. Keeping it selected for jobs is where most operators quietly fail. The thresholds governing selection aren't obvious from the outside. They live in the orchestrator's health tracker, in on-chain registry contracts, and in protocol constants spread across multiple config files. This post pulls those numbers into one place and explains how they interact, so you can build monitoring that catches problems before they cost you revenue.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwlf286jpwlgp7wig8853.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwlf286jpwlgp7wig8853.png" alt="Operator health signals: heartbeat, quote freshness, and capacity utilization" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Heartbeats: the on-chain pulse check
&lt;/h2&gt;

&lt;p&gt;Operators prove liveness by submitting heartbeats on-chain through the &lt;code&gt;OperatorStatusRegistry&lt;/code&gt;. Each heartbeat carries a service ID, blueprint ID, status code, and an optional metrics payload. The registry tracks four values per operator per service:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;lastHeartbeat&lt;/code&gt;: timestamp of the most recent submission&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;consecutiveBeats&lt;/code&gt;: how many in a row without a miss&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;missedBeats&lt;/code&gt;: accumulated misses&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;lastMetricsHash&lt;/code&gt;: hash of the most recent metrics payload&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Two functions expose the result: &lt;a href="https://github.com/tangle-network/tnt-core/blob/main/src/staking/OperatorStatusRegistry.sol" rel="noopener noreferrer"&gt;&lt;code&gt;isHeartbeatCurrent()&lt;/code&gt;&lt;/a&gt; returns whether the latest heartbeat is within the configured interval, and &lt;a href="https://github.com/tangle-network/tnt-core/blob/main/src/staking/OperatorStatusRegistry.sol" rel="noopener noreferrer"&gt;&lt;code&gt;isOnline()&lt;/code&gt;&lt;/a&gt; combines heartbeat freshness with status code to give a binary liveness signal.&lt;/p&gt;

&lt;h3&gt;
  
  
  Status codes and what they mean
&lt;/h3&gt;

&lt;p&gt;The protocol defines five status codes:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Code&lt;/th&gt;
&lt;th&gt;Name&lt;/th&gt;
&lt;th&gt;Range Convention&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;td&gt;Healthy&lt;/td&gt;
&lt;td&gt;Exactly 0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;Degraded&lt;/td&gt;
&lt;td&gt;1-99&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;Offline&lt;/td&gt;
&lt;td&gt;100+&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;Slashed&lt;/td&gt;
&lt;td&gt;200+ (slashable)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;Exiting&lt;/td&gt;
&lt;td&gt;Voluntary exit in progress&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The range convention matters. A status of 15 still reads as "degraded" to the registry, but you can use the specific number to encode granularity for your own monitoring. Some operators use values like 10 for "high memory pressure" and 20 for "disk nearing capacity" while staying in the degraded band.&lt;/p&gt;
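One plausible reading of the range convention, sketched in Rust. The band boundaries follow the table; note the numeric bands don't cover the Exiting code, which is a distinct state rather than a range:

```rust
#[derive(Debug, PartialEq)]
enum StatusBand {
    Healthy,
    Degraded,
    Offline,
    Slashable,
}

/// Maps a raw on-chain status value to its band. The specific value
/// within a band (e.g. 10 vs 20 inside Degraded) is free for operators
/// to use as finer-grained signal.
fn classify(status: u32) -> StatusBand {
    match status {
        0 => StatusBand::Healthy,
        1..=99 => StatusBand::Degraded,
        100..=199 => StatusBand::Offline,
        _ => StatusBand::Slashable,
    }
}
```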

&lt;p&gt;Each service configures its own &lt;code&gt;HeartbeatConfig&lt;/code&gt; with an &lt;code&gt;interval&lt;/code&gt; (how often heartbeats are expected), a &lt;code&gt;maxMissed&lt;/code&gt; threshold (how many misses before consequences), and an optional &lt;code&gt;customMetrics&lt;/code&gt; flag. This is per-service, not global, so an operator running three different blueprints might have three different heartbeat cadences.&lt;/p&gt;

&lt;h3&gt;
  
  
  Querying your own state
&lt;/h3&gt;

&lt;p&gt;You can inspect your operator's on-chain health at any time:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Binary liveness check
registry.isHeartbeatCurrent(serviceId, operatorAddr);
registry.isOnline(serviceId, operatorAddr);

// Full state inspection
registry.getOperatorState(serviceId, operatorAddr);
// Returns: lastHeartbeat, consecutiveBeats, missedBeats, status, lastMetricsHash

// Individual metric values (if customMetrics enabled)
registry.getMetricValue(serviceId, operatorAddr, "cpu_utilization");
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;getMetricDefinitions(serviceId)&lt;/code&gt; call returns the schema for a service's expected metrics, including &lt;code&gt;name&lt;/code&gt;, &lt;code&gt;minValue&lt;/code&gt;, &lt;code&gt;maxValue&lt;/code&gt;, and whether each metric is &lt;code&gt;required&lt;/code&gt;. If your blueprint defines required metrics and you stop submitting them, the registry logs the violation on-chain.&lt;/p&gt;
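
&lt;p&gt;For illustration, that schema can be mirrored locally and used to validate metrics before they leave your node. The field names follow the prose above; &lt;code&gt;violatesBounds&lt;/code&gt; is a hypothetical helper, not part of any SDK:&lt;/p&gt;

```typescript
// Local sketch of the metric schema described above; field names mirror the
// getMetricDefinitions(serviceId) result, but this is not the contract ABI.
type MetricDefinition = {
  name: string;
  minValue: number;
  maxValue: number;
  required: boolean;
};

// Check a metric client-side before submitting it with a heartbeat, so a
// bounds violation is caught locally instead of being logged on-chain.
function violatesBounds(def: MetricDefinition, value?: number): boolean {
  if (value === undefined) return def.required; // missing a required metric
  return def.minValue > value || value > def.maxValue;
}
```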

&lt;h2&gt;
  
  
  The health tracker's rolling window
&lt;/h2&gt;

&lt;p&gt;On-chain heartbeats are only half the picture. The orchestrator that routes jobs to operators maintains its own health tracker with a separate, more aggressive set of thresholds.&lt;/p&gt;

&lt;p&gt;The health tracker is a rolling window of the last 10 job outcomes per operator (&lt;code&gt;HEALTH_WINDOW_SIZE = 10&lt;/code&gt;). From that window, it computes a success rate. If the rate drops below 30% (&lt;code&gt;MIN_SUCCESS_RATE = 0.3&lt;/code&gt;), the operator is marked unhealthy and stops receiving new work.&lt;/p&gt;

&lt;p&gt;A 30% floor sounds lenient, and it is, deliberately. The system tolerates occasional failures (network blips, transient resource pressure) without immediately pulling an operator from rotation. But the companion mechanism is less forgiving: after any failure, the operator enters a 60-second cooldown (&lt;code&gt;FAILURE_COOLDOWN = 60s&lt;/code&gt;) during which it won't receive jobs regardless of its overall success rate.&lt;/p&gt;

&lt;p&gt;The practical effect: a single failure costs you at least a minute of downtime. Three failures in quick succession won't necessarily drop you below the 30% floor (depending on your recent history), but the cooldowns stack. If you're failing every other job, you're spending half your time in cooldown even though your 50% success rate is above the threshold.&lt;/p&gt;
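
&lt;p&gt;The window-and-cooldown interaction can be modeled in a few lines. This is a sketch built from the constants quoted above, not the orchestrator's actual implementation:&lt;/p&gt;

```typescript
// Model of the health tracker described above. Only the window/cooldown
// arithmetic is reproduced; the real tracker lives in the orchestrator.
const HEALTH_WINDOW_SIZE = 10;
const MIN_SUCCESS_RATE = 0.3;
const FAILURE_COOLDOWN_MS = 60_000;

class HealthModel {
  private outcomes: boolean[] = []; // rolling window, true = success
  private cooldownUntil = 0;

  record(success: boolean, nowMs: number): void {
    this.outcomes.push(success);
    if (this.outcomes.length > HEALTH_WINDOW_SIZE) this.outcomes.shift();
    if (!success) this.cooldownUntil = nowMs + FAILURE_COOLDOWN_MS;
  }

  successRate(): number {
    if (this.outcomes.length === 0) return 1;
    return this.outcomes.filter(Boolean).length / this.outcomes.length;
  }

  healthy(): boolean {
    if (this.outcomes.length >= 3) return this.successRate() >= MIN_SUCCESS_RATE;
    return true; // grace period: fewer than 3 outcomes, assumed healthy
  }

  // Unhealthy OR on cooldown means no new work, matching the text.
  selectable(nowMs: number): boolean {
    if (!this.healthy()) return false;
    return nowMs >= this.cooldownUntil;
  }
}
```

A single failure with a 50% window rate leaves you "healthy" but unselectable for the next minute, which is exactly the interaction described above.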

&lt;h3&gt;
  
  
  How selection works
&lt;/h3&gt;

&lt;p&gt;When the orchestrator needs to assign a job, it sorts available operators using three criteria in priority order:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Health status&lt;/strong&gt;: healthy operators before unhealthy&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Available capacity&lt;/strong&gt;: operators with more open slots first&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Success rate&lt;/strong&gt;: higher rates preferred as a tiebreaker&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Available slots are computed as &lt;code&gt;maxCapacity - activeLoad&lt;/code&gt;. If your &lt;code&gt;activeLoad&lt;/code&gt; equals your &lt;code&gt;maxCapacity&lt;/code&gt;, you won't be selected regardless of health. This is the capacity utilization number, and it's the one most operators forget to monitor.&lt;/p&gt;

&lt;p&gt;The health summary for each operator looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;type&lt;/span&gt; &lt;span class="nx"&gt;OperatorHealthSummary&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;address&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`0x&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;successRate&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;number&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;healthy&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;boolean&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;activeLoad&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;number&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;maxCapacity&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;number&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;availableSlots&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;number&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;onCooldown&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;boolean&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;New operators with fewer than 3 recorded outcomes are assumed healthy. This grace period lasts until your third job completes.&lt;/p&gt;
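
&lt;p&gt;Put together, the documented priority rules amount to a filter-then-sort over that summary shape. The following comparator is a model of the ordering; the orchestrator's real selection code may differ in detail:&lt;/p&gt;

```typescript
// Sketch of the three-criteria selection ordering described above.
type Summary = {
  address: string;
  successRate: number;
  healthy: boolean;
  availableSlots: number;
  onCooldown: boolean;
};

function rankOperators(ops: Summary[]): Summary[] {
  return ops
    .filter((o) => o.availableSlots > 0) // full-capacity operators are skipped
    .filter((o) => !o.onCooldown)        // so are operators in cooldown
    .sort(
      (a, b) =>
        Number(b.healthy) - Number(a.healthy) || // 1. health status
        b.availableSlots - a.availableSlots ||   // 2. open slots
        b.successRate - a.successRate,           // 3. success-rate tiebreaker
    );
}
```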

&lt;h2&gt;
  
  
  Quote lifetimes: the tuning surface nobody talks about
&lt;/h2&gt;

&lt;p&gt;When a user requests a job, the operator's pricing engine generates a quote. That quote has a lifetime, and getting the lifetime right matters more than most operators realize.&lt;/p&gt;

&lt;p&gt;Quote validity duration is the number of seconds a price quote remains acceptable to the protocol after generation. The default is 5 minutes (&lt;code&gt;quote_validity_duration_secs: 300&lt;/code&gt; in &lt;code&gt;operator.toml&lt;/code&gt;). The protocol enforces a hard cap of 1 hour (&lt;code&gt;MAX_QUOTE_AGE = 1 hours&lt;/code&gt; in &lt;a href="https://github.com/tangle-network/tnt-core/blob/main/src/config/ProtocolConfig.sol" rel="noopener noreferrer"&gt;&lt;code&gt;ProtocolConfig.sol&lt;/code&gt;&lt;/a&gt;). Anything between those bounds is your call.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why shorter isn't always better
&lt;/h3&gt;

&lt;p&gt;A short quote lifetime (say, 30 seconds) limits your price exposure. If the cost of the resources you're committing changes between quote generation and job execution, a shorter window reduces the risk that you're locked into a stale price. For operators dealing with volatile token pricing, this matters.&lt;/p&gt;

&lt;p&gt;But short lifetimes create user friction. The user needs to receive the quote, review it, sign the payment, and submit the request before the quote expires. On a congested network, that flow can easily take more than 30 seconds. An expired quote means a failed request, a retry, and a frustrated user. It also counts as a job failure in the health tracker's rolling window, which means short quote lifetimes can indirectly damage your health score.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why longer isn't always better
&lt;/h3&gt;

&lt;p&gt;A long quote lifetime (approaching the 1-hour cap) is comfortable for users but risky for operators. Resource costs can shift, token exchange rates can move, and you're committed to the quoted price for the entire window. If you're pricing in wei (the protocol's denomination-neutral unit for cross-chain, multi-token pricing), exchange rate drift compounds with any staleness in your benchmark data.&lt;/p&gt;

&lt;h3&gt;
  
  
  Practical guidance
&lt;/h3&gt;

&lt;p&gt;For most operators, the 5-minute default works well for stablecoin-denominated services where price volatility is low. Consider adjusting in these scenarios:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Volatile token pricing&lt;/strong&gt;: Drop to 60-120 seconds. Accept the increased retry rate as a cost of price accuracy.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Long user flows&lt;/strong&gt;: If your users are interacting through UIs with multiple confirmation steps, extend to 10-15 minutes. Monitor your expired-quote rate.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;High-value jobs&lt;/strong&gt;: Shorter is safer. A 1-hour quote on a job that costs several hundred dollars in compute creates real exposure.&lt;/li&gt;
&lt;/ul&gt;
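
&lt;p&gt;One mitigation for expired quotes is a client-side freshness check before submission. A minimal sketch, assuming you know the quote's issue time and your typical flow latency (the field names are illustrative, not part of the quote wire format):&lt;/p&gt;

```typescript
// Only submit against a quote if enough of its validity window remains to
// cover the expected sign-and-submit latency, plus a safety margin.
function quoteStillUsable(
  issuedAtMs: number,
  validitySecs: number,
  nowMs: number,
  expectedFlowMs: number,
  safetyMarginMs = 10_000,
): boolean {
  const expiresAtMs = issuedAtMs + validitySecs * 1000;
  return expiresAtMs >= nowMs + expectedFlowMs + safetyMarginMs;
}
```

With the 5-minute default and a 30-second expected flow, a fresh quote passes comfortably; a 30-second quote fails the same check immediately, which is the friction described above.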

&lt;p&gt;Keep your quote server address current on-chain via &lt;code&gt;updateOperatorPreferences&lt;/code&gt; on the &lt;code&gt;ITangleOperators&lt;/code&gt; interface. A stale address means quotes can't be fetched at all, which is worse than any lifetime misconfiguration.&lt;/p&gt;

&lt;h2&gt;
  
  
  The pricing engine under the hood
&lt;/h2&gt;

&lt;p&gt;Quotes aren't arbitrary numbers. The pricing engine runs automated benchmarks on your hardware when a service activates (&lt;code&gt;ServiceActivated&lt;/code&gt; event), measuring CPU, memory, storage, network, and GPU performance. Results are cached locally by blueprint ID.&lt;/p&gt;

&lt;p&gt;The quote formula is: &lt;code&gt;Base Resource Cost x Time Multiplier x Security Commitment Factor&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Resource pricing is configured per-blueprint in your &lt;code&gt;operator.toml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;&lt;span class="nn"&gt;[blueprint.resources]&lt;/span&gt;
&lt;span class="py"&gt;cpu&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="py"&gt;count&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="py"&gt;price_per_unit&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"0.001"&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="py"&gt;memory&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="py"&gt;count&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;16384&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="py"&gt;price_per_unit&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"0.00005"&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="py"&gt;storage&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="py"&gt;count&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1024000&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="py"&gt;price_per_unit&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"0.00002"&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
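
&lt;p&gt;Plugging the example config above into the formula makes the arithmetic concrete. Only the resource counts and unit prices come from the config; the time multiplier and security factor below are invented for illustration:&lt;/p&gt;

```typescript
// Quote = Base Resource Cost × Time Multiplier × Security Commitment Factor.
// Base cost sums count × price_per_unit across the configured resources.
type Resource = { count: number; pricePerUnit: number };

function baseResourceCost(resources: Resource[]): number {
  return resources.reduce((sum, r) => sum + r.count * r.pricePerUnit, 0);
}

function quotePrice(base: number, timeMultiplier: number, securityFactor: number): number {
  return base * timeMultiplier * securityFactor;
}

const base = baseResourceCost([
  { count: 8, pricePerUnit: 0.001 },           // cpu
  { count: 16_384, pricePerUnit: 0.00005 },    // memory
  { count: 1_024_000, pricePerUnit: 0.00002 }, // storage
]);
// base works out to about 21.31 price units before multipliers
```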



&lt;p&gt;The pricing engine's full config controls benchmark behavior and the quote server:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;&lt;span class="py"&gt;database_path&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"./data/price_cache"&lt;/span&gt;
&lt;span class="py"&gt;benchmark_duration&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;60&lt;/span&gt;
&lt;span class="py"&gt;benchmark_interval&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
&lt;span class="py"&gt;keystore_path&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"./data/keystore"&lt;/span&gt;
&lt;span class="py"&gt;rpc_bind_address&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"127.0.0.1"&lt;/span&gt;
&lt;span class="py"&gt;rpc_port&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;9000&lt;/span&gt;
&lt;span class="py"&gt;rpc_timeout&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;30&lt;/span&gt;
&lt;span class="py"&gt;rpc_max_connections&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;
&lt;span class="py"&gt;quote_validity_duration_secs&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;300&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If &lt;code&gt;rpc_max_connections&lt;/code&gt; is too low for your traffic, quote requests will queue and potentially time out, which looks identical to an offline quote server from the user's perspective. For operators expecting high request volume, bumping this above the default 100 is worth doing.&lt;/p&gt;

&lt;h2&gt;
  
  
  The degradation cascade: signals before slashing
&lt;/h2&gt;

&lt;p&gt;Operators don't get slashed out of nowhere. There's a predictable four-stage cascade, and every stage produces signals you can catch if you're watching.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frbxuln73ripq7iqhdyj1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frbxuln73ripq7iqhdyj1.png" alt="Four-stage degradation cascade: heartbeat drift, degraded, offline, slashing risk" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stage 1: Heartbeat drift.&lt;/strong&gt; Your heartbeat interval starts slipping. Maybe a network issue, maybe resource contention on the node. The &lt;code&gt;consecutiveBeats&lt;/code&gt; counter resets and &lt;code&gt;missedBeats&lt;/code&gt; starts climbing. On-chain, your status is still Healthy, but the trend is visible.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stage 2: Degraded status.&lt;/strong&gt; Once &lt;code&gt;missedBeats&lt;/code&gt; crosses a threshold (per your service's &lt;code&gt;HeartbeatConfig.maxMissed&lt;/code&gt;), the status shifts to Degraded (codes 1-99). You're still selectable for jobs, but the orchestrator's health tracker may start reflecting failures if the underlying issue is also affecting job execution.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stage 3: Offline.&lt;/strong&gt; Continued misses push the status to Offline (codes 100+). &lt;code&gt;isOnline()&lt;/code&gt; returns false. The orchestrator stops sending you work entirely. You're still staked, still committed, but earning nothing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stage 4: Slashing risk.&lt;/strong&gt; If the offline period extends beyond the grace window, slashing becomes possible. The dispute window is 14 rounds (3.5 days, since each round is 6 hours). An exit takes 56 rounds (14 days). These are not fast processes, which is intentional: they give operators time to recover from legitimate outages.&lt;/p&gt;

&lt;p&gt;Every stage before slashing is recoverable. Fix the underlying issue, submit a successful heartbeat, and the cascade resets. The operators who get slashed are the ones who aren't watching.&lt;/p&gt;

&lt;h3&gt;
  
  
  What metric violations actually do
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;MetricDefinition&lt;/code&gt; system lets services define bounds (&lt;code&gt;minValue&lt;/code&gt;, &lt;code&gt;maxValue&lt;/code&gt;) for custom metrics like CPU utilization or memory usage. When a submitted metric falls outside those bounds, the violation is logged on-chain. Currently, violations don't trigger automatic slashing. They create an on-chain record that can be used in governance-driven disputes, but the enforcement path is manual. This may change as the protocol matures, so treat metric bounds as soft limits today and hard limits tomorrow.&lt;/p&gt;

&lt;h2&gt;
  
  
  Revenue: what you're protecting
&lt;/h2&gt;

&lt;p&gt;The default fee split for job revenue is:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Recipient&lt;/th&gt;
&lt;th&gt;Share&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Operators&lt;/td&gt;
&lt;td&gt;40%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Developers&lt;/td&gt;
&lt;td&gt;20%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Protocol&lt;/td&gt;
&lt;td&gt;20%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Stakers&lt;/td&gt;
&lt;td&gt;20%&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;These percentages are governance-configurable. On top of job revenue, operators can earn TNT incentives from the &lt;code&gt;InflationPool&lt;/code&gt; and commission from delegator &lt;code&gt;RewardVaults&lt;/code&gt;. But all of these revenue streams depend on one thing: staying in rotation. An operator that's offline, in cooldown, or at capacity earns nothing.&lt;/p&gt;
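
&lt;p&gt;Applied to a concrete job, the split works out mechanically. A sketch using the default percentages (these are governance-configurable, so read them from chain state in practice):&lt;/p&gt;

```typescript
// Default fee split from the table above; shares must sum to 1.
const FEE_SPLIT = {
  operators: 0.4,
  developers: 0.2,
  protocol: 0.2,
  stakers: 0.2,
} as const;

function splitRevenue(jobRevenue: number) {
  return {
    operators: jobRevenue * FEE_SPLIT.operators,
    developers: jobRevenue * FEE_SPLIT.developers,
    protocol: jobRevenue * FEE_SPLIT.protocol,
    stakers: jobRevenue * FEE_SPLIT.stakers,
  };
}
```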

&lt;h2&gt;
  
  
  Building your monitoring stack
&lt;/h2&gt;

&lt;p&gt;The QoS endpoints on your node give you the raw signals:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Quick health check&lt;/span&gt;
curl &lt;span class="nt"&gt;-s&lt;/span&gt; http://localhost:9090/health

&lt;span class="c"&gt;# Prometheus metrics (for Grafana dashboards)&lt;/span&gt;
curl &lt;span class="nt"&gt;-s&lt;/span&gt; http://localhost:9090/metrics | &lt;span class="nb"&gt;head&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; 20
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This Prometheus alerting config catches problems at Stage 1 of the degradation cascade, before they affect job selection:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;groups&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;blueprint_operator&lt;/span&gt;
    &lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;alert&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;HeartbeatDrift&lt;/span&gt;
        &lt;span class="na"&gt;expr&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;increase(operator_missed_beats_total[30m]) &amp;gt; &lt;/span&gt;&lt;span class="m"&gt;2&lt;/span&gt;
        &lt;span class="na"&gt;for&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;5m&lt;/span&gt;
        &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;severity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;warning&lt;/span&gt;
        &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;summary&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Operator&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;$labels.instance&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;missed&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;gt;2&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;heartbeats&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;in&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;30m"&lt;/span&gt;
          &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Heartbeat&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;drift&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;detected.&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Investigate&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;before&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;status&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;degrades."&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;alert&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;CapacityExhausted&lt;/span&gt;
        &lt;span class="na"&gt;expr&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;(operator_active_load / operator_max_capacity) &amp;gt; &lt;/span&gt;&lt;span class="m"&gt;0.9&lt;/span&gt;
        &lt;span class="na"&gt;for&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;10m&lt;/span&gt;
        &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;severity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;warning&lt;/span&gt;
        &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;summary&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Operator&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;$labels.instance&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;at&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;gt;90%&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;capacity&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;for&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;10m"&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;alert&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;SuccessRateDropping&lt;/span&gt;
        &lt;span class="na"&gt;expr&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;operator_job_success_rate &amp;lt; &lt;/span&gt;&lt;span class="m"&gt;0.5&lt;/span&gt;
        &lt;span class="na"&gt;for&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;5m&lt;/span&gt;
        &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;severity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;critical&lt;/span&gt;
        &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;summary&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Operator&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;$labels.instance&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;success&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;rate&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;below&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;50%"&lt;/span&gt;
          &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Below&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;30%&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;triggers&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;removal&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;from&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;rotation."&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The metrics you should alert on, mapped to the three numbers that matter:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Heartbeat health:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;consecutiveBeats&lt;/code&gt; trending downward&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;missedBeats&lt;/code&gt; incrementing&lt;/li&gt;
&lt;li&gt;Status code changing from 0 to anything else&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Quote freshness:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Quote generation latency approaching &lt;code&gt;quote_validity_duration_secs&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Expired-quote rate (track client-side 402 retries if possible)&lt;/li&gt;
&lt;li&gt;Benchmark cache age (stale benchmarks produce stale prices)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Capacity utilization:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;activeLoad / maxCapacity&lt;/code&gt; ratio approaching 1.0&lt;/li&gt;
&lt;li&gt;Available slots dropping to zero during peak hours&lt;/li&gt;
&lt;li&gt;Job queue depth if your blueprint supports queuing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The protocol's timing constants give you the boundaries for alert thresholds:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Constant&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;th&gt;What it means for alerting&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;ROUND_DURATION_SECONDS&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;21,600 (6 hr)&lt;/td&gt;
&lt;td&gt;Rounds are the unit of protocol time&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;ROUNDS_PER_EPOCH&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;28 (7 days)&lt;/td&gt;
&lt;td&gt;Epoch boundaries trigger reward distribution&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;DISPUTE_WINDOW_ROUNDS&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;14 (3.5 days)&lt;/td&gt;
&lt;td&gt;Time to respond to slashing disputes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;OPERATOR_DELAY_ROUNDS&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;56 (14 days)&lt;/td&gt;
&lt;td&gt;Minimum exit timeline&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;MAX_QUOTE_AGE&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;1 hour&lt;/td&gt;
&lt;td&gt;Absolute ceiling for quote validity&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;MIN_SERVICE_TTL&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;1 hour&lt;/td&gt;
&lt;td&gt;Shortest allowed service commitment&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
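
&lt;p&gt;Because every timeline above is denominated in rounds, a small conversion helper keeps alert thresholds consistent with protocol time (the constant value comes from the table):&lt;/p&gt;

```typescript
// Convert protocol rounds to wall-clock days.
const ROUND_DURATION_SECONDS = 21_600; // 6 hours per round
const SECONDS_PER_DAY = 86_400;

function roundsToDays(rounds: number): number {
  return (rounds * ROUND_DURATION_SECONDS) / SECONDS_PER_DAY;
}

// 14 rounds (dispute window) = 3.5 days; 28 rounds (epoch) = 7 days;
// 56 rounds (operator exit) = 14 days.
```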

&lt;h2&gt;
  
  
  FAQ
&lt;/h2&gt;

&lt;h3&gt;
  
  
  How many heartbeats can I miss before getting slashed?
&lt;/h3&gt;

&lt;p&gt;There is no single fixed number. Slashing risk comes from a cascade, not a threshold: missed beats push you to Degraded, then Offline, then into a slashing-eligible window with a 14-round (3.5-day) dispute period. Your service's &lt;code&gt;HeartbeatConfig.maxMissed&lt;/code&gt; setting controls when the Degraded transition happens, and that value varies by blueprint. The practical answer: set up alerts on &lt;code&gt;missedBeats&lt;/code&gt; incrementing (Stage 1 of the cascade) and you'll catch drift long before slashing is on the table.&lt;/p&gt;

&lt;h3&gt;
  
  
  Should I set my quote lifetime to the maximum 1 hour?
&lt;/h3&gt;

&lt;p&gt;Almost certainly not. The 1-hour &lt;code&gt;MAX_QUOTE_AGE&lt;/code&gt; is a protocol ceiling, not a recommendation. A 1-hour quote locks you into pricing that may not reflect current resource costs or token exchange rates. The 5-minute default is a reasonable starting point. Only extend it if your users consistently need more time to complete the quote-to-submission flow, and even then, 10-15 minutes is usually sufficient.&lt;/p&gt;

&lt;h3&gt;
  
  
  What happens if my node is healthy but at full capacity?
&lt;/h3&gt;

&lt;p&gt;You stop receiving new jobs. The orchestrator's selection algorithm checks &lt;code&gt;availableSlots&lt;/code&gt; (&lt;code&gt;maxCapacity - activeLoad&lt;/code&gt;) and skips operators with zero slots. You won't be marked unhealthy, but you'll earn nothing until capacity frees up. If this happens regularly during peak hours, either increase &lt;code&gt;maxCapacity&lt;/code&gt; (if your hardware supports it) or run additional operator instances.&lt;/p&gt;

&lt;h3&gt;
  
  
  Do metric violations lead to automatic slashing?
&lt;/h3&gt;

&lt;p&gt;Not currently. When a submitted metric falls outside the bounds defined in &lt;code&gt;MetricDefinition&lt;/code&gt;, the violation is logged on-chain but enforcement is governance-driven, not automatic. That said, the on-chain record exists and could be used against you in a dispute. Treat metric bounds as constraints you should respect, because the enforcement mechanism will likely tighten over time.&lt;/p&gt;

&lt;h3&gt;
  
  
  What's the minimum success rate to stay in rotation?
&lt;/h3&gt;

&lt;p&gt;30%, calculated over a rolling window of your last 10 jobs. This sounds generous, but the 60-second cooldown after each failure is the real constraint. If you're failing frequently, the cooldowns accumulate and you spend significant time unable to receive work, even if your success rate stays above the floor. An operator failing every third job has a 67% success rate, well above the floor, yet each of those failures still triggers a full minute of cooldown; with fast job turnaround, that can mean spending a third of your time or more unable to accept work.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Build with Tangle&lt;/strong&gt; | &lt;a href="https://tangle.tools" rel="noopener noreferrer"&gt;Website&lt;/a&gt; | &lt;a href="https://github.com/tangle-network" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; | &lt;a href="https://discord.gg/tangle" rel="noopener noreferrer"&gt;Discord&lt;/a&gt; | &lt;a href="https://t.me/tanglenet" rel="noopener noreferrer"&gt;Telegram&lt;/a&gt; | &lt;a href="https://x.com/taborgroup" rel="noopener noreferrer"&gt;X/Twitter&lt;/a&gt;&lt;/p&gt;

</description>
      <category>blockchain</category>
      <category>infrastructure</category>
      <category>monitoring</category>
      <category>web3</category>
    </item>
    <item>
      <title>On-Chain RFQ for Compute: How Job Quotes, Verification, and Slashing Work</title>
      <dc:creator>Tangle Network</dc:creator>
      <pubDate>Wed, 25 Mar 2026 03:51:07 +0000</pubDate>
      <link>https://dev.to/tangle_network/on-chain-rfq-for-compute-how-job-quotes-verification-and-slashing-work-50a0</link>
      <guid>https://dev.to/tangle_network/on-chain-rfq-for-compute-how-job-quotes-verification-and-slashing-work-50a0</guid>
      <description>&lt;p&gt;&lt;strong&gt;Build with Tangle&lt;/strong&gt; | &lt;a href="https://tangle.tools" rel="noopener noreferrer"&gt;Website&lt;/a&gt; | &lt;a href="https://github.com/tangle-network" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; | &lt;a href="https://discord.gg/tangle" rel="noopener noreferrer"&gt;Discord&lt;/a&gt; | &lt;a href="https://t.me/tanglenet" rel="noopener noreferrer"&gt;Telegram&lt;/a&gt; | &lt;a href="https://x.com/taborgroup" rel="noopener noreferrer"&gt;X/Twitter&lt;/a&gt;&lt;/p&gt;

</description>
      <category>blockchain</category>
      <category>webdev</category>
      <category>decentralization</category>
      <category>verification</category>
    </item>
    <item>
      <title>Subscription vs Pay-Per-Request API Pricing: Tradeoffs, Implementation, and When to Use Each</title>
      <dc:creator>Tangle Network</dc:creator>
      <pubDate>Tue, 24 Mar 2026 18:50:12 +0000</pubDate>
      <link>https://dev.to/tangle_network/subscription-vs-pay-per-request-api-pricing-tradeoffs-implementation-and-when-to-use-each-g3g</link>
      <guid>https://dev.to/tangle_network/subscription-vs-pay-per-request-api-pricing-tradeoffs-implementation-and-when-to-use-each-g3g</guid>
      <description>&lt;p&gt;Pay-per-request API pricing charges consumers per call with no accounts or billing infrastructure. It fits sporadic usage, machine-to-machine calls, and operators who want zero billing ops. Subscription pricing charges a flat recurring fee, enabling customer relationships, volume discounts, feature gating, and predictable revenue, but it requires payment processing integration and account management. The right choice depends on call frequency (subscriptions win for high-volume, predictable usage), consumer type (organizations prefer subscriptions with invoices; autonomous agents prefer per-request), and how much billing infrastructure you're willing to maintain. Some architectures support both simultaneously, routing pay-per-request and subscription clients through the same service handlers via a unified job dispatch layer.&lt;/p&gt;

&lt;p&gt;This isn't a new problem. Twilio built a $4B business on pay-per-request (fractions of a cent per SMS). Slack charges per seat per month. AWS meters to the millisecond. According to OpenView's 2023 SaaS Benchmarks report, roughly 45% of SaaS companies now offer some form of usage-based pricing, up from 34% in 2020, and companies with usage-based models report 120% net dollar retention compared to 110% for pure subscription. The trend is clear: the industry is moving toward usage-based models, but subscriptions aren't going anywhere.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzgx6681qu0a60cxct3y1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzgx6681qu0a60cxct3y1.png" alt="45% of SaaS companies now offer usage-based pricing" width="800" height="446"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What makes this question interesting right now is that blockchain settlement protocols like Coinbase and Cloudflare's &lt;a href="https://x402.org" rel="noopener noreferrer"&gt;x402&lt;/a&gt; have made pay-per-request radically cheaper to implement. Where you once needed Stripe, an accounts database, and a credits ledger, you can now accept payment with a config file. Blueprint SDK's pricing engine takes this further by defining both models at the protocol level, letting operators configure their billing model without changing application code. This post uses Blueprint's implementation as a concrete reference, but the tradeoffs apply to any API service.&lt;/p&gt;

&lt;h2&gt;
  
  
  When to Use Which
&lt;/h2&gt;

&lt;p&gt;Before diving into implementation, here's the decision framework. Everything that follows supports this table.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Dimension&lt;/th&gt;
&lt;th&gt;Pay-Per-Request&lt;/th&gt;
&lt;th&gt;Subscription&lt;/th&gt;
&lt;th&gt;Resource-Based&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Billing infrastructure&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Near zero (config files only)&lt;/td&gt;
&lt;td&gt;Significant (Stripe, accounts, webhooks)&lt;/td&gt;
&lt;td&gt;Moderate (benchmarking + metering)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Latency overhead&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;1-3 sec per call (on-chain settlement)&lt;/td&gt;
&lt;td&gt;None per call (pre-authorized)&lt;/td&gt;
&lt;td&gt;None per call (pre-authorized)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Customer visibility&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Anonymous wallets&lt;/td&gt;
&lt;td&gt;Full identity, usage analytics&lt;/td&gt;
&lt;td&gt;Full identity, usage analytics&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Revenue predictability&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Variable, follows demand&lt;/td&gt;
&lt;td&gt;Recurring, plannable&lt;/td&gt;
&lt;td&gt;Variable, follows demand&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Ideal call pattern&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Sporadic, high-value calls&lt;/td&gt;
&lt;td&gt;Frequent, predictable volume&lt;/td&gt;
&lt;td&gt;Long-running compute jobs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Consumer type&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Machines, agents, anonymous users&lt;/td&gt;
&lt;td&gt;Organizations, teams&lt;/td&gt;
&lt;td&gt;Infrastructure buyers&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Volume discounts&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Not natively possible&lt;/td&gt;
&lt;td&gt;Natural (tiered plans, credit packs)&lt;/td&gt;
&lt;td&gt;Natural (bulk rates)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Ops cost to operator&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Minimal&lt;/td&gt;
&lt;td&gt;High (billing UI, support, churn mgmt)&lt;/td&gt;
&lt;td&gt;Moderate (benchmarking maintenance)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Pay-per-request fits when:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Calls are infrequent or unpredictable.&lt;/strong&gt; If users call your service once a week or once a month, a subscription is a hard sell. Per-request pricing lets them pay exactly for what they use.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;You want zero billing ops.&lt;/strong&gt; No Stripe account, no accounts to manage, no invoices to send, no chargebacks to dispute.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Your consumers are machines, not people.&lt;/strong&gt; AI agents making API calls don't need a billing dashboard. They need a 402 response they can programmatically respond to.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The value per call is high enough to justify the latency.&lt;/strong&gt; That 1-3 second settlement overhead is trivial for a $3 AI inference call. It's a dealbreaker for a $0.0001 data lookup.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Subscription fits when:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Calls are frequent and predictable.&lt;/strong&gt; A service handling thousands of requests per day is a natural subscription product. The consumer wants cost certainty; the operator wants revenue predictability.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;You need customer relationships.&lt;/strong&gt; Tier-based feature gating, usage analytics, churn prevention, upselling: these require knowing who your customers are.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;You want to capture value above cost.&lt;/strong&gt; Subscriptions let you price on value delivered, not compute consumed. A $99/month plan for a service that saves hours of manual work is good economics regardless of server cost. Bessemer's 2024 Cloud Index shows that the highest-margin SaaS companies price on value, not cost, with top-quartile gross margins above 80%.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Your consumers are organizations.&lt;/strong&gt; Companies want invoices, contracts, and SLAs. A "pay with your wallet per request" model doesn't fit procurement processes.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Resource-based pricing (charging by actual compute consumed) fits a third niche: long-running GPU jobs, storage provisioning, infrastructure rentals. The price derives from hardware benchmarks, not a business decision. It's honest math, well-suited for commodity compute where transparency builds trust.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pay-Per-Request: x402 and the Billing Stack Collapse
&lt;/h2&gt;

&lt;p&gt;HTTP status code 402 has been "reserved for future use" since 1999. Twenty-seven years later, Coinbase and Cloudflare's &lt;a href="https://x402.org" rel="noopener noreferrer"&gt;x402 protocol&lt;/a&gt; gave it a real job. A client hits an endpoint, gets back a 402 response with pricing information, signs a stablecoin payment, and resends the request with an &lt;code&gt;X-PAYMENT&lt;/code&gt; header containing proof of settlement. No API keys, no billing dashboard, no invoices.&lt;/p&gt;

&lt;p&gt;What makes this interesting for service operators is what it eliminates. A traditional paid API requires an account system, API key management, a credit balance ledger, reconciliation logic, dispute resolution, and fraud detection. With x402, the blockchain is the billing system. The wallet is the API key. The transaction receipt is the invoice.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F69pk7ohj428zfat2uftn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F69pk7ohj428zfat2uftn.png" alt="Subscription vs pay-per-request payment flow architecture" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Implementation
&lt;/h3&gt;

&lt;p&gt;The x402 gateway wires into a Blueprint as a background service paired with a producer rather than as middleware sitting in front of your handlers. Two TOML files and a few lines of Rust:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="k"&gt;mut&lt;/span&gt; &lt;span class="n"&gt;config&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nn"&gt;X402Config&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;from_toml&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"x402.toml"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;?&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;pricing&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;load_job_pricing&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="nn"&gt;std&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nn"&gt;fs&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;read_to_string&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"job_pricing.toml"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;?&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;?&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;oracle&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nn"&gt;CachedRateProvider&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nn"&gt;CoinbaseOracle&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="nn"&gt;Duration&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;from_secs&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;60&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
&lt;span class="nf"&gt;refresh_rates&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="k"&gt;mut&lt;/span&gt; &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;oracle&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"ETH"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="k"&gt;.await&lt;/span&gt;&lt;span class="o"&gt;?&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;gateway&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;x402_producer&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nn"&gt;X402Gateway&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;pricing&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;?&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="nn"&gt;BlueprintRunner&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;builder&lt;/span&gt;&lt;span class="p"&gt;((),&lt;/span&gt; &lt;span class="nn"&gt;BlueprintEnvironment&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;default&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
    &lt;span class="nf"&gt;.router&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;router&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
    &lt;span class="nf"&gt;.producer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;x402_producer&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;.background_service&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;gateway&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;.run&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="k"&gt;.await&lt;/span&gt;&lt;span class="o"&gt;?&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Job prices are set in &lt;a href="https://ethereum.org/en/developers/docs/intro-to-ether/#denominations-of-ether" rel="noopener noreferrer"&gt;wei&lt;/a&gt; per job type. Why wei? Because operators can accept multiple tokens across different chains, and wei provides a denomination-neutral base unit. At request time, the gateway runs a deterministic conversion:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F430gtyceefncijo4a0nn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F430gtyceefncijo4a0nn.png" alt="x402 price conversion formula" width="800" height="237"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Concretely: a 0.001 ETH job at an ETH price of $3,200 with a 200 basis point operator markup becomes &lt;code&gt;0.001 * 3200 * 1.02 = 3.264 USDC&lt;/code&gt;.&lt;/p&gt;
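&lt;p&gt;As a sanity check, that conversion is reproducible in a few lines. This is an illustrative sketch, not SDK code: &lt;code&gt;wei_to_usdc&lt;/code&gt; is a made-up helper, and &lt;code&gt;f64&lt;/code&gt; is used only for readability (the gateway itself uses exact decimal arithmetic):&lt;/p&gt;

```rust
// Illustrative wei -> USDC conversion matching the worked example above.
// `wei_to_usdc` is a hypothetical helper, not part of the SDK, and f64 is
// used for brevity; production code should use fixed-point decimals.
fn wei_to_usdc(wei_price: u128, usd_per_eth: f64, markup_bps: u32) -> f64 {
    let eth = wei_price as f64 / 1e18;          // wei -> ETH
    let usd = eth * usd_per_eth;                // ETH -> USD at the configured rate
    usd * (1.0 + markup_bps as f64 / 10_000.0)  // operator markup in basis points
}

fn main() {
    // 0.001 ETH at $3,200/ETH with a 200 bps markup
    println!("{:.3}", wei_to_usdc(1_000_000_000_000_000, 3_200.0, 200)); // 3.264
}
```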

&lt;h3&gt;
  
  
  Settlement-First Design
&lt;/h3&gt;

&lt;p&gt;The operator gets paid before doing any work. The &lt;a href="https://github.com/tangle-network/blueprint/blob/main/crates/x402/src/gateway.rs" rel="noopener noreferrer"&gt;&lt;code&gt;X402Gateway&lt;/code&gt;&lt;/a&gt; calls &lt;code&gt;.settle_before_execution()&lt;/code&gt;, confirming the stablecoin transfer on-chain before the job handler fires. This adds 1-3 seconds of latency on Base for the facilitator round-trip, but it means the operator never does unpaid work.&lt;/p&gt;

&lt;p&gt;Compare this to traditional API billing, where you bill after the fact and hope the credit card doesn't bounce. The tradeoff is latency for certainty.&lt;/p&gt;

&lt;h3&gt;
  
  
  Access Control
&lt;/h3&gt;

&lt;p&gt;Three access modes, configured in &lt;code&gt;x402.toml&lt;/code&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Disabled&lt;/strong&gt; (default): no payment gateway active.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PublicPaid&lt;/strong&gt;: anyone who sends valid payment can call the service.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;RestrictedPaid&lt;/strong&gt;: payment required, plus an on-chain &lt;code&gt;isPermittedCaller&lt;/code&gt; check via &lt;code&gt;eth_call&lt;/code&gt;. Paid service with an allowlist.&lt;/li&gt;
&lt;/ul&gt;
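&lt;p&gt;A hypothetical &lt;code&gt;x402.toml&lt;/code&gt; fragment might select one of these modes. The key names below are illustrative only; check the SDK's examples for the actual schema:&lt;/p&gt;

```toml
# Hypothetical sketch -- field names are illustrative, not the SDK schema.
[gateway]
# One of: "disabled" (default), "public_paid", "restricted_paid"
access_mode = "restricted_paid"
# restricted_paid additionally checks isPermittedCaller on-chain via eth_call
```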

&lt;h3&gt;
  
  
  What You Give Up
&lt;/h3&gt;

&lt;p&gt;No customer relationships. No usage analytics per customer. No volume discounts. No feature gating. No trials. Every request is an anonymous economic transaction. If a customer calls your service 10,000 times a month, you have no way to offer them a better rate and no way to identify them.&lt;/p&gt;

&lt;p&gt;Exchange rates are static, operator-configured values. This removes runtime dependency on Chainlink or DEX price feeds, but operators need to manage their own rate refresh. The config uses &lt;code&gt;Arc&amp;lt;Mutex&amp;lt;JobPricingConfig&amp;gt;&amp;gt;&lt;/code&gt; for runtime updates without restart, but the responsibility is yours.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://github.com/tangle-network/blueprint/blob/main/crates/x402/src/quote_registry.rs" rel="noopener noreferrer"&gt;quote registry&lt;/a&gt; is in-memory with a 300-second TTL. Quotes are lost on restart. For high-availability deployments, plan accordingly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Subscription: Predictable Revenue, More Moving Parts
&lt;/h2&gt;

&lt;p&gt;Blueprint's pricing engine defines subscription as a first-class model via a &lt;a href="https://github.com/tangle-network/blueprint/blob/main/crates/pricing-engine/src/pricing.rs" rel="noopener noreferrer"&gt;&lt;code&gt;PricingModelHint&lt;/code&gt;&lt;/a&gt; enum:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight protobuf"&gt;&lt;code&gt;&lt;span class="na"&gt;PAY_ONCE&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;        &lt;span class="c1"&gt;// resource x rate x TTL&lt;/span&gt;
&lt;span class="na"&gt;SUBSCRIPTION&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;    &lt;span class="c1"&gt;// flat rate per billing interval&lt;/span&gt;
&lt;span class="na"&gt;EVENT_DRIVEN&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;    &lt;span class="c1"&gt;// flat rate per event&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Subscription pricing is configured per service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;&lt;span class="nn"&gt;[default]&lt;/span&gt;
&lt;span class="py"&gt;pricing_model&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"subscription"&lt;/span&gt;
&lt;span class="py"&gt;subscription_rate&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;0.001&lt;/span&gt;
&lt;span class="py"&gt;subscription_interval&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;86400&lt;/span&gt;    &lt;span class="c"&gt;# daily (seconds)&lt;/span&gt;
&lt;span class="py"&gt;event_rate&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;0.0001&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Price calculation is simple: &lt;code&gt;subscription_rate * security_factor&lt;/code&gt;. No resource benchmarking, no TTL math. Both subscription and event-driven pricing produce the same output as pay-per-request: a signed &lt;a href="https://eips.ethereum.org/EIPS/eip-712" rel="noopener noreferrer"&gt;EIP-712&lt;/a&gt; quote with price, timestamp, and expiry. This uniformity matters because downstream job dispatch doesn't care which model generated the quote.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;EVENT_DRIVEN&lt;/code&gt; variant sits between subscription and pure pay-per-request. It charges a flat rate per event without x402's full HTTP 402 handshake and stablecoin settlement. Think of it as "pay-per-call without the on-chain overhead."&lt;/p&gt;

&lt;h3&gt;
  
  
  What Subscriptions Enable in Practice
&lt;/h3&gt;

&lt;p&gt;The partner billing system in &lt;a href="https://github.com/tangle-network/blueprint-agent" rel="noopener noreferrer"&gt;blueprint-agent&lt;/a&gt; shows what a production subscription looks like. Three tiers: Starter ($99/month, 500k credits), Growth ($349/month, 2M credits), Enterprise ($999/month, 10M credits). Credits convert at roughly 10,000 per dollar. Overages are calculated as &lt;code&gt;excessCredits * monthlyPriceCents / includedCredits&lt;/code&gt; and charged via Stripe.&lt;/p&gt;
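&lt;p&gt;The overage formula is easy to sanity-check. A sketch (the function name is illustrative, not blueprint-agent's actual code):&lt;/p&gt;

```rust
// Overage billing: excessCredits * monthlyPriceCents / includedCredits.
// `overage_cents` is a hypothetical helper mirroring the formula above.
fn overage_cents(excess_credits: u64, monthly_price_cents: u64, included_credits: u64) -> u64 {
    excess_credits * monthly_price_cents / included_credits
}

fn main() {
    // Growth tier ($349/month, 2M credits included) with 500k credits of overage
    println!("{}", overage_cents(500_000, 34_900, 2_000_000)); // 8725 cents = $87.25
}
```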

&lt;p&gt;This model enables things pay-per-request structurally cannot:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Feature gating per tier&lt;/strong&gt;: quest limits, leaderboards, verification types, custom domains, SLA guarantees.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Volume discounts&lt;/strong&gt;: credit packs at decreasing per-unit cost (100k for $10, 5M for $350).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Customer relationships&lt;/strong&gt;: you know who your users are, what they use, and when they churn.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Predictable revenue&lt;/strong&gt;: monthly recurring revenue is easier to plan around than variable per-request income.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The cost is infrastructure. Stripe integration, webhook handlers for &lt;code&gt;invoice.payment_succeeded&lt;/code&gt;, idempotency checks, credit pool management, and a billing UI. That's real engineering investment, and for small operators it can outweigh the pricing model's benefits.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Architectural Trick: Running Both at Once
&lt;/h2&gt;

&lt;p&gt;Here's the design insight that makes this more than a binary choice. The x402 gateway converts incoming paid HTTP requests into &lt;a href="https://github.com/tangle-network/blueprint/blob/main/crates/runner/src/lib.rs" rel="noopener noreferrer"&gt;&lt;code&gt;JobCall&lt;/code&gt;&lt;/a&gt; values, the same type that every other trigger source produces: on-chain events, cron schedules, webhooks. Job handlers don't know how they were triggered.&lt;/p&gt;

&lt;p&gt;This means a single service can accept payments through multiple models simultaneously. An &lt;code&gt;X402Producer&lt;/code&gt; handles pay-per-request clients. A subscription verification layer handles subscribers. Both feed the same router, the same job handlers, the same execution path.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;x402 client ------&amp;gt; X402Producer ----\
                                      +---&amp;gt; Router ---&amp;gt; Job Handlers
subscription -----&amp;gt; SubProducer -----/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;PricingModelHint&lt;/code&gt; enum already supports this at the protocol level. The economic model becomes a configuration choice, not an architectural fork. This is the same pattern Twilio uses: programmatic callers pay per API call, while enterprise customers negotiate volume contracts, but the underlying telephony infrastructure doesn't care which billing model triggered the call.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Economics, Spelled Out
&lt;/h2&gt;

&lt;p&gt;Consider a Blueprint running AI inference at $0.002 compute cost per call.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pay-per-request at $0.01/call&lt;/strong&gt;: At 10,000 calls/month, that's $100 revenue, $20 cost, $80 margin. At 100 calls/month, it's $1 revenue. The operator earns proportionally but has no revenue floor.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Subscription at $99/month (500k credits)&lt;/strong&gt;: At 10,000 calls/month, the consumer uses 2% of their allocation. The operator gets $99 regardless. At 100,000 calls/month, the consumer still pays $99 while compute costs reach $200, and the operator loses money on that customer.&lt;/p&gt;

&lt;p&gt;The math clarifies the tradeoff. Pay-per-request aligns revenue perfectly with usage but provides no predictability. Subscription provides a revenue floor but creates margin risk on heavy consumers. The right answer depends on your usage distribution: if most consumers cluster around a predictable volume, subscription wins. If usage is highly variable or long-tail, per-request is safer for the operator.&lt;/p&gt;
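&lt;p&gt;Putting the same figures in code makes the crossover point visible. An illustrative sketch using the scenario's numbers ($0.002 compute cost, $0.01 per-request price, $99/month flat fee); the function names are hypothetical:&lt;/p&gt;

```rust
// Operator margin under each model, using the numbers from the scenario
// above. Hypothetical helpers for illustration only.
fn per_request_margin(calls: u64) -> f64 {
    calls as f64 * (0.01 - 0.002)   // $0.01 price minus $0.002 compute cost
}

fn subscription_margin(calls: u64) -> f64 {
    99.0 - calls as f64 * 0.002     // flat $99 fee minus compute cost
}

fn main() {
    for calls in [100u64, 10_000, 100_000] {
        println!(
            "{calls} calls/month: per-request margin ${:.2}, subscription margin ${:.2}",
            per_request_margin(calls),
            subscription_margin(calls)
        );
    }
}
```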

&lt;h2&gt;
  
  
  Roadmap: What's Not Shipped Yet
&lt;/h2&gt;

&lt;p&gt;The pricing engine's subscription model has config, calculation logic, and signed quote output. But there's no on-chain enforcement equivalent to the x402 gateway for subscriptions today. The partner billing system handles subscription enforcement through Stripe (off-chain). An on-chain subscription verification layer remains the missing piece for fully trustless subscription services.&lt;/p&gt;

&lt;p&gt;The x402 facilitator at &lt;code&gt;facilitator.x402.org&lt;/code&gt; is currently centralized. A decentralized facilitator Blueprint is spec'd but not shipped. For now, settlement routes through a single facilitator service.&lt;/p&gt;

&lt;h2&gt;
  
  
  FAQ
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Can I run both subscription and pay-per-request on the same service?
&lt;/h3&gt;

&lt;p&gt;Yes. Both models produce the same &lt;code&gt;JobCall&lt;/code&gt; type through different producers feeding the same router. The &lt;code&gt;PricingModelHint&lt;/code&gt; enum defines both at the protocol level. You'd wire an &lt;code&gt;X402Producer&lt;/code&gt; for pay-per-request alongside a subscription verification producer, both routing to identical job handlers.&lt;/p&gt;

&lt;h3&gt;
  
  
  How do I handle exchange rate drift with x402?
&lt;/h3&gt;

&lt;p&gt;Rates are static in your config, not oracle-fed. The &lt;code&gt;Arc&amp;lt;Mutex&amp;lt;JobPricingConfig&amp;gt;&amp;gt;&lt;/code&gt; allows runtime updates without restart, so you can run a background task that periodically refreshes from a price feed. The &lt;code&gt;CachedRateProvider&lt;/code&gt; wrapping a &lt;code&gt;CoinbaseOracle&lt;/code&gt; with a 60-second cache is one pattern in the SDK examples. Rate freshness is the operator's responsibility.&lt;/p&gt;

&lt;h3&gt;
  
  
  What happens to x402 quotes if my service restarts?
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;QuoteRegistry&lt;/code&gt; is in-memory. On restart, all outstanding quotes are lost. The default 300-second TTL limits exposure. For high-availability deployments, you'd want very short TTLs or a persistent quote store, though neither is provided out of the box.&lt;/p&gt;

&lt;h3&gt;
  
  
  How does the per-event model differ from x402 pay-per-request?
&lt;/h3&gt;

&lt;p&gt;Both are usage-based, but they operate at different layers. x402 runs a full HTTP 402 handshake with on-chain stablecoin settlement per request. The event-driven model (&lt;code&gt;EVENT_DRIVEN&lt;/code&gt; in the pricing engine) charges a flat rate per invocation through the pricing engine's quote system, without the HTTP 402 flow or settlement overhead. Event-driven is lighter-weight but relies on whatever payment enforcement layer is wired upstream.&lt;/p&gt;

&lt;h3&gt;
  
  
  What's the minimum setup to test x402 payments?
&lt;/h3&gt;

&lt;p&gt;Two TOML files: &lt;code&gt;job_pricing.toml&lt;/code&gt; with per-job prices in wei, and &lt;code&gt;x402.toml&lt;/code&gt; with gateway config and accepted tokens. Wire them into &lt;code&gt;BlueprintRunner&lt;/code&gt; with &lt;code&gt;X402Gateway&lt;/code&gt; and &lt;code&gt;x402_producer&lt;/code&gt;. The &lt;code&gt;/x402/stats&lt;/code&gt; endpoint gives you counters for accepted, rejected, and enqueued payments. Test against Base testnet with test USDC.&lt;/p&gt;
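&lt;p&gt;For orientation, a &lt;code&gt;job_pricing.toml&lt;/code&gt; sketch might look like the following. The key layout is hypothetical; consult the SDK's examples for the real schema:&lt;/p&gt;

```toml
# Hypothetical sketch -- per-job prices in wei; key names are illustrative.
[jobs.0]
price_wei = "1000000000000000"   # 0.001 ETH per call
[jobs.1]
price_wei = "500000000000000"    # 0.0005 ETH per call
```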




&lt;p&gt;&lt;strong&gt;Build with Tangle&lt;/strong&gt; | &lt;a href="https://tangle.tools" rel="noopener noreferrer"&gt;Website&lt;/a&gt; | &lt;a href="https://github.com/tangle-network" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; | &lt;a href="https://discord.gg/tangle" rel="noopener noreferrer"&gt;Discord&lt;/a&gt; | &lt;a href="https://t.me/tanglenet" rel="noopener noreferrer"&gt;Telegram&lt;/a&gt; | &lt;a href="https://x.com/taborgroup" rel="noopener noreferrer"&gt;X/Twitter&lt;/a&gt;&lt;/p&gt;

</description>
      <category>apidevelopment</category>
      <category>blockchain</category>
      <category>saas</category>
      <category>monetization</category>
    </item>
    <item>
      <title>Pricing without hand-waving: wei pricing, token conversion, markup, and dynamic price tags</title>
      <dc:creator>Tangle Network</dc:creator>
      <pubDate>Mon, 23 Mar 2026 22:31:11 +0000</pubDate>
      <link>https://dev.to/tangle_network/pricing-without-hand-waving-wei-pricing-token-conversion-markup-and-dynamic-price-tags-35a2</link>
      <guid>https://dev.to/tangle_network/pricing-without-hand-waving-wei-pricing-token-conversion-markup-and-dynamic-price-tags-35a2</guid>
      <description>&lt;h2&gt;
  
  
  The pricing problem nobody wants to solve
&lt;/h2&gt;

&lt;p&gt;Every API platform hits the same question eventually: how do you charge for compute? AWS solved it with a 200-page pricing calculator. Stripe solved it with a dashboard and a billing team. Most blockchain protocols punt entirely, letting operators pick a number and hope it covers costs.&lt;/p&gt;

&lt;p&gt;The hard version of this problem shows up when your operators are running heterogeneous workloads (CPU-bound inference, GPU rendering, long-running agent tasks) across different chains, settling in different tokens, with different exchange rates, and they need quotes that are cryptographically verifiable before a single cycle burns. Tangle's &lt;a href="https://github.com/tangle-network/blueprint/tree/main/crates/pricing-engine" rel="noopener noreferrer"&gt;pricing engine&lt;/a&gt; was built for exactly this. The core of it is a conversion pipeline: operators price jobs in wei (the smallest ETH unit), and at settlement time the engine divides by 10^18 to get ETH, multiplies by an exchange rate (e.g., 3,200 USDC/ETH), applies an operator-defined markup in basis points, then scales to the token's smallest unit and floors the result to an integer. A job priced at 0.001 ETH with a 3,200 USDC/ETH rate and 2% markup (200 bps) yields 3,264,000 USDC micro-units, or $3.264. Every resulting quote is EIP-712 signed with expiry and replay protection.&lt;/p&gt;

&lt;p&gt;That conversion layer sits on top of a dual-denomination pricing system (USD for resource provisioning, wei for per-job calls), three pricing models, hardware benchmarking, and a full anti-abuse stack. This post walks through each layer, what the config looks like, and where the sharp edges are.&lt;/p&gt;

&lt;h2&gt;
  
  
  Blockchain API pricing: the conversion formula
&lt;/h2&gt;

&lt;p&gt;Before getting into the pricing engine's architecture, here's the function that makes cross-token settlement work. The &lt;a href="https://github.com/tangle-network/blueprint/blob/main/crates/x402/src/config.rs" rel="noopener noreferrer"&gt;&lt;code&gt;convert_wei_to_amount&lt;/code&gt;&lt;/a&gt; function is the bridge between blockchain-native pricing and stablecoin payments:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;convert_wei_to_amount&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="k"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;wei_price&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;U256&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;Result&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;X402Error&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;wei_decimal&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nn"&gt;Decimal&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;from_str_exact&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;wei_price&lt;/span&gt;&lt;span class="nf"&gt;.to_string&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;&lt;span class="o"&gt;?&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;native_unit&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nn"&gt;Decimal&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;from&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;10u64&lt;/span&gt;&lt;span class="nf"&gt;.pow&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;18&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;native_amount&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;wei_decimal&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="n"&gt;native_unit&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;token_amount&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;native_amount&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;self&lt;/span&gt;&lt;span class="py"&gt;.rate_per_native_unit&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;markup&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nn"&gt;Decimal&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;ONE&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nn"&gt;Decimal&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;from&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;self&lt;/span&gt;&lt;span class="py"&gt;.markup_bps&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="nn"&gt;Decimal&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;from&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;10_000u32&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;final_amount&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;token_amount&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;markup&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;token_unit&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nn"&gt;Decimal&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;from&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;10u64&lt;/span&gt;&lt;span class="nf"&gt;.pow&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nn"&gt;u32&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;from&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;self&lt;/span&gt;&lt;span class="py"&gt;.decimals&lt;/span&gt;&lt;span class="p"&gt;)));&lt;/span&gt;
    &lt;span class="nf"&gt;Ok&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="n"&gt;final_amount&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;token_unit&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="nf"&gt;.floor&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="nf"&gt;.to_string&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Five steps, each doing one thing:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Wei to ETH&lt;/strong&gt;: divide by 10^18&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ETH to token value&lt;/strong&gt;: multiply by the exchange rate (e.g., 3,200 USDC per ETH)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Apply markup&lt;/strong&gt;: basis points divided by 10,000, added to 1.0 (200 bps = 1.02x)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scale to smallest unit&lt;/strong&gt;: multiply by 10^decimals (10^6 for USDC, 10^18 for DAI, 10^8 for WBTC)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Floor&lt;/strong&gt;: no fractional atomic units&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr39upmbdq455slpfjd0h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr39upmbdq455slpfjd0h.png" alt="Wei to Token Conversion Pipeline" width="800" height="446"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The formula's output depends heavily on the token's decimal places. Here's the same 0.001 ETH job at a rate of 3,200 tokens per ETH with 200 bps markup across four tokens. The rate is held constant purely to isolate the effect of decimals; a real WBTC rate would of course differ:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Token&lt;/th&gt;
&lt;th&gt;Decimals&lt;/th&gt;
&lt;th&gt;Smallest Unit&lt;/th&gt;
&lt;th&gt;Raw Conversion&lt;/th&gt;
&lt;th&gt;Final (with 2% markup)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;USDC&lt;/td&gt;
&lt;td&gt;6&lt;/td&gt;
&lt;td&gt;micro-dollar&lt;/td&gt;
&lt;td&gt;3,200,000&lt;/td&gt;
&lt;td&gt;3,264,000&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;USDT&lt;/td&gt;
&lt;td&gt;6&lt;/td&gt;
&lt;td&gt;micro-dollar&lt;/td&gt;
&lt;td&gt;3,200,000&lt;/td&gt;
&lt;td&gt;3,264,000&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DAI&lt;/td&gt;
&lt;td&gt;18&lt;/td&gt;
&lt;td&gt;wei-equivalent&lt;/td&gt;
&lt;td&gt;3,200,000,000,000,000,000&lt;/td&gt;
&lt;td&gt;3,264,000,000,000,000,000&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;WBTC&lt;/td&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;td&gt;satoshi&lt;/td&gt;
&lt;td&gt;320,000,000&lt;/td&gt;
&lt;td&gt;326,400,000&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The USDC and USDT rows look identical because both use 6 decimals. DAI's 18 decimals produce enormous integers. WBTC's 8 decimals land in between. The floor operation matters most for low-decimal tokens on small transactions, where rounding could eat a meaningful fraction of the payment.&lt;/p&gt;
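&lt;p&gt;The table's arithmetic can be reproduced with plain integer math. The following sketch uses &lt;code&gt;u128&lt;/code&gt; and basis-point integers instead of the engine's &lt;code&gt;Decimal&lt;/code&gt; type, and assumes a whole-number exchange rate; the function name is illustrative, not the crate's API.&lt;/p&gt;

```rust
// Sketch: wei -> token smallest units, integer-only.
// rate_per_eth is whole tokens per ETH; markup is in basis points.
// The real engine uses rust_decimal and supports fractional rates.
fn wei_to_token_units(wei: u128, rate_per_eth: u128, markup_bps: u128, decimals: u32) -> u128 {
    let token_unit = 10u128.pow(decimals);
    // Multiply before dividing so precision isn't lost; integer division floors.
    let raw = wei * rate_per_eth * token_unit / 10u128.pow(18);
    raw * (10_000 + markup_bps) / 10_000 // apply markup, floor again
}

fn main() {
    let wei = 1_000_000_000_000_000u128; // 0.001 ETH
    assert_eq!(wei_to_token_units(wei, 3_200, 200, 6), 3_264_000); // USDC row
    assert_eq!(wei_to_token_units(wei, 3_200, 200, 18), 3_264_000_000_000_000_000); // DAI row
    assert_eq!(wei_to_token_units(wei, 3_200, 200, 8), 326_400_000); // WBTC row
}
```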

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy1xqw2a37o1irnaxy1hx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy1xqw2a37o1irnaxy1hx.png" alt="Wei Conversion Formula" width="800" height="237"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Two monetary worlds: USD and wei
&lt;/h2&gt;

&lt;p&gt;The pricing engine doesn't pick one denomination and force everything through it. Instead, it maintains two separate pricing paths that serve different purposes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Resource-based pricing&lt;/strong&gt; works in USD with arbitrary decimal precision. Operators configure rates like "$0.001 per CPU core per second" in a TOML file, and the engine multiplies those rates against benchmarked hardware profiles and time-to-live values. This is the path for service provisioning, where the cost depends on what resources you're reserving.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Per-job pricing&lt;/strong&gt; works in wei, the smallest unit of ETH (1 ETH = 10^18 wei). Operators map each &lt;code&gt;(service_id, job_index)&lt;/code&gt; pair to a raw wei amount stored as a string (because U256 values overflow standard integer types). This is the path for individual job execution, where the cost is a flat fee per call.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;&lt;span class="c"&gt;# Resource pricing: USD, decimal precision&lt;/span&gt;
&lt;span class="nn"&gt;[default]&lt;/span&gt;
&lt;span class="py"&gt;resources&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
  &lt;span class="err"&gt;{&lt;/span&gt; &lt;span class="py"&gt;kind&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"CPU"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="py"&gt;count&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="py"&gt;price_per_unit_rate&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;0.001&lt;/span&gt; &lt;span class="err"&gt;}&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="err"&gt;{&lt;/span&gt; &lt;span class="py"&gt;kind&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"MemoryMB"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="py"&gt;count&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;512&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="py"&gt;price_per_unit_rate&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;0.0005&lt;/span&gt; &lt;span class="err"&gt;}&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="err"&gt;{&lt;/span&gt; &lt;span class="py"&gt;kind&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"GPU"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="py"&gt;count&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="py"&gt;price_per_unit_rate&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;0.01&lt;/span&gt; &lt;span class="err"&gt;}&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;]&lt;/span&gt;

&lt;span class="c"&gt;# Job pricing: wei, string-encoded U256&lt;/span&gt;
&lt;span class="nn"&gt;[1]&lt;/span&gt;
&lt;span class="py"&gt;0&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"1000000000000000"&lt;/span&gt;       &lt;span class="c"&gt;# Job 0: 0.001 ETH&lt;/span&gt;
&lt;span class="py"&gt;6&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"20000000000000000"&lt;/span&gt;      &lt;span class="c"&gt;# Job 6: 0.02 ETH&lt;/span&gt;
&lt;span class="py"&gt;7&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"250000000000000000"&lt;/span&gt;     &lt;span class="c"&gt;# Job 7: 0.25 ETH&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Why two systems instead of one? Because the cost structures are fundamentally different. A service that provisions a container with 4 CPUs and a GPU for 30 minutes has costs that scale with resources and time. An LLM inference call has a roughly fixed cost per invocation regardless of how long the operator's machine has been running. Forcing both through the same formula would mean either over-abstracting the resource model or under-specifying the job model.&lt;/p&gt;

&lt;p&gt;The split also matches how operators think about pricing. Infrastructure costs map naturally to USD rates per resource unit. Per-call API pricing maps naturally to "this job costs X." Asking operators to mentally convert between these two frames adds friction without adding clarity.&lt;/p&gt;

&lt;h2&gt;
  
  
  The resource pricing formula
&lt;/h2&gt;

&lt;p&gt;For resource-based pricing (the &lt;code&gt;PAY_ONCE&lt;/code&gt; model), the engine computes total cost as the sum of each resource's cost: &lt;code&gt;count × price_per_unit_rate × ttl_blocks × block_time × security_factor&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The implementation in &lt;a href="https://github.com/tangle-network/blueprint/blob/main/crates/pricing-engine/src/pricing.rs" rel="noopener noreferrer"&gt;&lt;code&gt;pricing.rs&lt;/code&gt;&lt;/a&gt; breaks this into composable pieces:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;calculate_resource_price&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;count&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;u64&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;price_per_unit_rate&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Decimal&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;ttl_blocks&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;u64&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;security_requirements&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;Option&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;AssetSecurityRequirements&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;Decimal&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;adjusted_base_cost&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;calculate_base_resource_cost&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;count&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;price_per_unit_rate&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;adjusted_time_cost&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;calculate_ttl_price_adjustment&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ttl_blocks&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;security_factor&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;calculate_security_rate_adjustment&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;security_requirements&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="n"&gt;adjusted_base_cost&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;adjusted_time_cost&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;security_factor&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Futggfpoz6k2zm4w5gtd8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Futggfpoz6k2zm4w5gtd8.png" alt="Resource-Based Pricing Formula" width="800" height="237"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;block_time()&lt;/code&gt; is hardcoded at 6 seconds. &lt;code&gt;security_factor&lt;/code&gt; currently returns &lt;code&gt;Decimal::ONE&lt;/code&gt;, a placeholder for future adjustment based on asset security requirements. The hook exists in the formula so that when security-weighted pricing ships, it slots in without changing the calculation structure.&lt;/p&gt;

&lt;p&gt;The interesting part is where &lt;code&gt;count&lt;/code&gt; comes from. Rather than trusting operators to self-report their hardware, the engine &lt;strong&gt;benchmarks the actual machine&lt;/strong&gt; for CPU, memory, storage, GPU, and network. These benchmark profiles are cached per blueprint ID in RocksDB. The engine then multiplies the benchmarked resource usage against the configured per-unit rates. If you're coming from cloud pricing, think of it as the operator publishing a rate card while the engine measures the meter.&lt;/p&gt;
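&lt;p&gt;To make the formula concrete, here is a hypothetical worked example that hard-codes what the engine derives dynamically (benchmarked counts, the 6-second block time, the placeholder security factor); &lt;code&gt;f64&lt;/code&gt; stands in for &lt;code&gt;Decimal&lt;/code&gt; for brevity.&lt;/p&gt;

```rust
// Sketch: cost = count * rate * ttl_blocks * block_time * security_factor.
fn resource_cost_usd(count: u64, rate_per_unit_sec: f64, ttl_blocks: u64) -> f64 {
    const BLOCK_TIME_SECS: f64 = 6.0; // hardcoded in the engine
    const SECURITY_FACTOR: f64 = 1.0; // placeholder until security-weighted pricing ships
    count as f64 * rate_per_unit_sec * ttl_blocks as f64 * BLOCK_TIME_SECS * SECURITY_FACTOR
}

fn main() {
    // 1 CPU core at $0.001 per core-second, reserved for 600 blocks.
    // 600 blocks * 6 s/block = 3600 seconds, i.e. one hour of reservation.
    let cost = resource_cost_usd(1, 0.001, 600);
    println!("${cost:.2}"); // $3.60
}
```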

&lt;h3&gt;
  
  
  Ten resource types, six on-chain
&lt;/h3&gt;

&lt;p&gt;The engine defines &lt;a href="https://github.com/tangle-network/blueprint/blob/main/crates/pricing-engine/src/types.rs" rel="noopener noreferrer"&gt;ten resource types&lt;/a&gt;: &lt;code&gt;CPU&lt;/code&gt;, &lt;code&gt;MemoryMB&lt;/code&gt;, &lt;code&gt;StorageMB&lt;/code&gt;, &lt;code&gt;NetworkEgressMB&lt;/code&gt;, &lt;code&gt;NetworkIngressMB&lt;/code&gt;, &lt;code&gt;GPU&lt;/code&gt;, &lt;code&gt;Request&lt;/code&gt;, &lt;code&gt;Invocation&lt;/code&gt;, &lt;code&gt;ExecutionTimeMS&lt;/code&gt;, and &lt;code&gt;StorageIOPS&lt;/code&gt;, plus a &lt;code&gt;Custom(String)&lt;/code&gt; escape hatch.&lt;/p&gt;

&lt;p&gt;Only the first six map to on-chain resource commitments in the &lt;a href="https://github.com/tangle-network/blueprint/blob/main/crates/pricing-engine/src/signer.rs" rel="noopener noreferrer"&gt;signer&lt;/a&gt;. The rest (&lt;code&gt;Request&lt;/code&gt;, &lt;code&gt;Invocation&lt;/code&gt;, &lt;code&gt;ExecutionTimeMS&lt;/code&gt;, &lt;code&gt;StorageIOPS&lt;/code&gt;) are priced and included in the cost calculation, but they don't generate on-chain commitments. This distinction matters: on-chain commitments are what the protocol enforces. Off-chain resource types let operators factor in costs that are real but not worth the gas to commit individually.&lt;/p&gt;

&lt;h3&gt;
  
  
  Blueprint-specific overrides
&lt;/h3&gt;

&lt;p&gt;Pricing config supports per-blueprint overrides with a fallback chain. The engine looks up the &lt;code&gt;blueprint_id&lt;/code&gt; first; if no match exists, it falls back to &lt;code&gt;default&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;&lt;span class="nn"&gt;[default]&lt;/span&gt;
&lt;span class="py"&gt;resources&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
  &lt;span class="err"&gt;{&lt;/span&gt; &lt;span class="py"&gt;kind&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"CPU"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="py"&gt;count&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="py"&gt;price_per_unit_rate&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;0.001&lt;/span&gt; &lt;span class="err"&gt;}&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;]&lt;/span&gt;

&lt;span class="nn"&gt;[42]&lt;/span&gt;
&lt;span class="py"&gt;resources&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
  &lt;span class="err"&gt;{&lt;/span&gt; &lt;span class="py"&gt;kind&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"CPU"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="py"&gt;count&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="py"&gt;price_per_unit_rate&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;0.0015&lt;/span&gt; &lt;span class="err"&gt;}&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Blueprint 42 gets premium CPU pricing. Everything else gets the default. This is a simple pattern, but it means operators can run dozens of blueprints with a single pricing config file, overriding only where the economics differ.&lt;/p&gt;
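&lt;p&gt;The fallback chain itself is simple enough to sketch. This models only the lookup order with a &lt;code&gt;HashMap&lt;/code&gt;; the real config is parsed from TOML and carries full resource lists, not a single rate.&lt;/p&gt;

```rust
use std::collections::HashMap;

// Sketch: per-blueprint override with a default fallback.
struct PricingConfig {
    default: f64,                 // rate from the [default] section
    overrides: HashMap<u64, f64>, // blueprint_id -> override rate
}

impl PricingConfig {
    fn cpu_rate(&self, blueprint_id: u64) -> f64 {
        // Exact blueprint match first, then fall back to [default].
        *self.overrides.get(&blueprint_id).unwrap_or(&self.default)
    }
}

fn main() {
    let mut overrides = HashMap::new();
    overrides.insert(42u64, 0.0015); // blueprint 42 pays the premium rate
    let config = PricingConfig { default: 0.001, overrides };
    assert_eq!(config.cpu_rate(42), 0.0015); // override hit
    assert_eq!(config.cpu_rate(7), 0.001);   // falls back to default
}
```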

&lt;h2&gt;
  
  
  Three pricing models
&lt;/h2&gt;

&lt;p&gt;The protobuf definition in &lt;a href="https://github.com/tangle-network/blueprint/blob/main/crates/pricing-engine/proto/pricing.proto" rel="noopener noreferrer"&gt;&lt;code&gt;pricing.proto&lt;/code&gt;&lt;/a&gt; specifies three models:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;PAY_ONCE (0)&lt;/strong&gt; is the resource-based model described above. It requires hardware benchmarks, computes cost from resource counts, rates, and TTL, and produces quotes for &lt;code&gt;createServiceFromQuotes()&lt;/code&gt; on-chain. This is the model for long-running service provisioning.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;SUBSCRIPTION (1)&lt;/strong&gt; is a flat rate per billing interval. No benchmarks needed.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;&lt;span class="nn"&gt;[default]&lt;/span&gt;
&lt;span class="py"&gt;pricing_model&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"subscription"&lt;/span&gt;
&lt;span class="py"&gt;subscription_rate&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;0.001&lt;/span&gt;        &lt;span class="c"&gt;# USD per interval&lt;/span&gt;
&lt;span class="py"&gt;subscription_interval&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;86400&lt;/span&gt;    &lt;span class="c"&gt;# 1 day in seconds&lt;/span&gt;

&lt;span class="nn"&gt;[5]&lt;/span&gt;
&lt;span class="py"&gt;pricing_model&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"subscription"&lt;/span&gt;
&lt;span class="py"&gt;subscription_rate&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;0.005&lt;/span&gt;
&lt;span class="py"&gt;subscription_interval&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;604800&lt;/span&gt;   &lt;span class="c"&gt;# 1 week&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;EVENT_DRIVEN (2)&lt;/strong&gt; is a flat rate per event or job invocation. Also no benchmarks.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;&lt;span class="nn"&gt;[default]&lt;/span&gt;
&lt;span class="py"&gt;pricing_model&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"event_driven"&lt;/span&gt;
&lt;span class="py"&gt;event_rate&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;0.0001&lt;/span&gt;              &lt;span class="c"&gt;# USD per event&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The subscription and event-driven models bypass the benchmark-and-resource calculation entirely. They exist because not every service has costs that correlate with hardware usage. A notification service that sends webhooks has a per-event cost structure. A monitoring service that runs continuously has a subscription cost structure. Forcing these through resource-based pricing would produce nonsensical numbers.&lt;/p&gt;
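&lt;p&gt;One way to picture the three models is as an enum dispatch, sketched below. The variant and field names mirror the TOML keys from the examples above, not necessarily the crate's internal types.&lt;/p&gt;

```rust
// Sketch: the three pricing models and how each ignores or uses benchmarks.
enum PricingModel {
    PayOnce,                                            // resource-based, needs benchmarks
    Subscription { rate_usd: f64, interval_secs: u64 }, // flat rate per interval
    EventDriven { event_rate_usd: f64 },                // flat rate per event
}

fn quote_usd(model: &PricingModel, resource_cost_usd: f64, events: u64) -> f64 {
    match model {
        // PAY_ONCE: pass through the benchmarked resource calculation.
        PricingModel::PayOnce => resource_cost_usd,
        // SUBSCRIPTION: flat rate per interval, benchmarks ignored.
        PricingModel::Subscription { rate_usd, .. } => *rate_usd,
        // EVENT_DRIVEN: flat rate per event, benchmarks ignored.
        PricingModel::EventDriven { event_rate_usd } => event_rate_usd * events as f64,
    }
}

fn main() {
    let model = PricingModel::EventDriven { event_rate_usd: 0.0001 };
    // 250 events at $0.0001 each:
    println!("{:.4}", quote_usd(&model, 0.0, 250));
}
```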

&lt;h2&gt;
  
  
  Accepted token configuration
&lt;/h2&gt;

&lt;p&gt;Each token an operator accepts gets its own config block with the exchange rate, markup, and settlement details:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;&lt;span class="nn"&gt;[[accepted_tokens]]&lt;/span&gt;
&lt;span class="py"&gt;network&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"eip155:8453"&lt;/span&gt;
&lt;span class="py"&gt;asset&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"0x833589fCD6eDb6E08f4c7C32D4f71b54bdA02913"&lt;/span&gt;
&lt;span class="py"&gt;symbol&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"USDC"&lt;/span&gt;
&lt;span class="py"&gt;decimals&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;6&lt;/span&gt;
&lt;span class="py"&gt;pay_to&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"0xYourOperatorAddressOnBase"&lt;/span&gt;
&lt;span class="py"&gt;rate_per_native_unit&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"3200.00"&lt;/span&gt;
&lt;span class="py"&gt;markup_bps&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt;
&lt;span class="py"&gt;transfer_method&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"eip3009"&lt;/span&gt;
&lt;span class="py"&gt;eip3009_name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"USD Coin"&lt;/span&gt;
&lt;span class="py"&gt;eip3009_version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"2"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;markup_bps&lt;/code&gt; field is the operator's margin, defined in basis points (1 bp = 0.01%). Set it to 0 for break-even pricing. Set it to 500 (5%) if you want a buffer against exchange rate fluctuation. Set it to 2000 (20%) if you're running premium infrastructure and the market bears it.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;rate_per_native_unit&lt;/code&gt; is a static config value. There's no oracle feed or automatic rate refresh built into the engine. If ETH/USDC moves 10% in a day, your prices are 10% stale until you update the config. This is an explicit design choice: operators own their exchange rate assumptions. For operators running high-volume services, a cron job that polls a price feed (Chainlink, CoinGecko, or a centralized exchange API) and rewrites the TOML is the practical solution. The &lt;code&gt;Arc&amp;lt;Mutex&amp;lt;&amp;gt;&amp;gt;&lt;/code&gt; runtime config described below makes it possible to update rates without restarting the server.&lt;/p&gt;

&lt;h2&gt;
  
  
  On-chain representation: the scale factor
&lt;/h2&gt;

&lt;p&gt;USD amounts need to live on-chain as integers. The engine uses a &lt;a href="https://github.com/tangle-network/blueprint/blob/main/crates/pricing-engine/src/utils.rs" rel="noopener noreferrer"&gt;10^9 scale factor&lt;/a&gt; where 1 USD = 1,000,000,000 atomic units.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="n"&gt;PRICING_SCALE_PLACES&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;u32&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;9&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;decimal_to_scaled_amount&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Decimal&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;Result&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;U256&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;scaled&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;value&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nf"&gt;pricing_scale&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;&lt;span class="nf"&gt;.trunc&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;int_value&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;scaled&lt;/span&gt;&lt;span class="nf"&gt;.to_u128&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="o"&gt;?&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nf"&gt;Ok&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nn"&gt;U256&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;from&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;int_value&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Zero prices are explicitly rejected. If you want a free tier, you have to implement it as dedicated logic rather than a zero price. This is a deliberate choice: a zero price is ambiguous (is it free? is it misconfigured?), while explicit free-tier logic is unambiguous.&lt;/p&gt;
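&lt;p&gt;The scaling itself is a fixed multiply. This sketch works on integer micro-USD input to sidestep float artifacts and adds the zero-rejection described above; the helper name is hypothetical.&lt;/p&gt;

```rust
// Sketch: USD -> 10^9-scaled on-chain integer, rejecting zero results.
// Takes micro-USD (10^-6 USD) as input; 1 micro-USD = 10^3 atomic units.
fn usd_micros_to_scaled(usd_micros: u128) -> Result<u128, &'static str> {
    let scaled = usd_micros * 1_000; // 1 USD = 10^9 atomic units
    if scaled == 0 {
        return Err("zero price rejected: make free tiers explicit");
    }
    Ok(scaled)
}

fn main() {
    assert_eq!(usd_micros_to_scaled(1_000), Ok(1_000_000));        // $0.001
    assert_eq!(usd_micros_to_scaled(1_500_000), Ok(1_500_000_000)); // $1.50
    assert!(usd_micros_to_scaled(0).is_err());                      // zero is rejected
}
```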

&lt;h2&gt;
  
  
  Trust nothing: the anti-abuse stack
&lt;/h2&gt;

&lt;p&gt;A pricing API that returns unsigned numbers over unauthenticated RPC invites abuse. The pricing engine layers four defenses.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgxfqkisymwjcora8r9ew.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgxfqkisymwjcora8r9ew.png" alt="Anti-Abuse Stack" width="800" height="446"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Proof of work on every request
&lt;/h3&gt;

&lt;p&gt;Before the pricing engine will even process an RPC request, the client must solve a SHA-256 proof-of-work challenge. The default difficulty requires 20 leading zero bits. The challenge input is &lt;code&gt;SHA256(blueprint_id || timestamp)&lt;/code&gt;, and the timestamp must be within 30 seconds of server time.&lt;/p&gt;

&lt;p&gt;The point is to make it expensive to spam the quoting endpoint. An attacker trying to scrape prices across thousands of configurations has to burn real CPU time for each request.&lt;/p&gt;
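&lt;p&gt;The difficulty check reduces to counting leading zero bits on the digest. The sketch below shows only that check; the actual hashing of &lt;code&gt;blueprint_id || timestamp&lt;/code&gt; with SHA-256 requires a hashing crate and is omitted.&lt;/p&gt;

```rust
// Sketch: count leading zero bits of a digest and compare to a difficulty.
fn leading_zero_bits(digest: &[u8]) -> u32 {
    let mut bits = 0;
    for &byte in digest {
        if byte == 0 {
            bits += 8; // whole byte of zeros
        } else {
            bits += byte.leading_zeros(); // partial byte, then stop
            break;
        }
    }
    bits
}

fn meets_difficulty(digest: &[u8], difficulty: u32) -> bool {
    leading_zero_bits(digest) >= difficulty
}

fn main() {
    // 0x00 0x00 0x0F... = 8 + 8 + 4 = 20 leading zero bits: just passes.
    assert!(meets_difficulty(&[0x00, 0x00, 0x0F, 0xAB], 20));
    // 0x00 0x00 0x10... = 8 + 8 + 3 = 19 bits: fails at difficulty 20.
    assert!(!meets_difficulty(&[0x00, 0x00, 0x10, 0xAB], 20));
}
```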

&lt;h3&gt;
  
  
  2. EIP-712 signed quotes
&lt;/h3&gt;

&lt;p&gt;Every quote the engine returns is signed using EIP-712 structured data with domain &lt;code&gt;TangleQuote&lt;/code&gt; version &lt;code&gt;1&lt;/code&gt;. Service quotes commit the &lt;code&gt;totalCost&lt;/code&gt;, &lt;code&gt;blueprintId&lt;/code&gt;, &lt;code&gt;ttlBlocks&lt;/code&gt;, security commitments, and resource commitments. Job quotes commit &lt;code&gt;serviceId&lt;/code&gt;, &lt;code&gt;jobIndex&lt;/code&gt;, &lt;code&gt;price&lt;/code&gt;, &lt;code&gt;timestamp&lt;/code&gt;, and &lt;code&gt;expiry&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;A client can present a quote to a smart contract and the contract can verify that (a) the operator actually issued it, (b) it hasn't been tampered with, and (c) it covers exactly the resources and price claimed. No trust in the relay, the client, or any middleware.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Configurable expiry with a hard ceiling
&lt;/h3&gt;

&lt;p&gt;Quotes have a configurable validity duration (&lt;code&gt;quote_validity_duration_secs&lt;/code&gt;, default 5 minutes). The on-chain contract enforces a separate &lt;code&gt;maxQuoteAge&lt;/code&gt; (default 1 hour) as a hard ceiling. Even if an operator sets quote validity to a week, the contract won't accept anything older than an hour.&lt;/p&gt;

&lt;p&gt;This two-layer expiry means operators can tune freshness for their use case (tighter for volatile prices, looser for stable services) while the protocol prevents ancient quotes from being replayed.&lt;/p&gt;
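&lt;p&gt;The effective freshness window is therefore the tighter of the two layers, which a few lines of Rust can capture (parameter names here are illustrative):&lt;/p&gt;

```rust
// Sketch: a quote passes only if it satisfies both the operator-configured
// validity window and the contract's hard ceiling.
fn quote_is_fresh(now: u64, issued_at: u64, validity_secs: u64, max_quote_age_secs: u64) -> bool {
    let age = now.saturating_sub(issued_at);
    // The effective window is the tighter of the two layers.
    age <= validity_secs.min(max_quote_age_secs)
}

fn main() {
    // Operator sets validity to a week; the contract caps quote age at 1 hour.
    let (validity, ceiling) = (604_800, 3_600);
    assert!(quote_is_fresh(10_000, 9_000, validity, ceiling));  // 1,000 s old: fine
    assert!(!quote_is_fresh(10_000, 5_000, validity, ceiling)); // 5,000 s old: past the ceiling
}
```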

&lt;h3&gt;
  
  
  4. On-chain replay protection
&lt;/h3&gt;

&lt;p&gt;After a quote is submitted on-chain, its digest is marked as used. Presenting the same signed quote twice fails. Combined with the expiry window, this creates a tight lifecycle: a quote is valid for minutes, usable exactly once, and cryptographically bound to specific parameters.&lt;/p&gt;
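&lt;p&gt;The once-only semantics amount to a used-digest set. On-chain this is a contract storage mapping; a &lt;code&gt;HashSet&lt;/code&gt; models the same behavior in this illustrative sketch:&lt;/p&gt;

```rust
use std::collections::HashSet;

// Sketch: mark each quote digest as used on first acceptance.
struct QuoteRegistry {
    used: HashSet<[u8; 32]>,
}

impl QuoteRegistry {
    fn try_consume(&mut self, digest: [u8; 32]) -> bool {
        // insert() returns false if the digest was already present.
        self.used.insert(digest)
    }
}

fn main() {
    let mut registry = QuoteRegistry { used: HashSet::new() };
    let digest = [0xAB; 32];
    assert!(registry.try_consume(digest));  // first submission succeeds
    assert!(!registry.try_consume(digest)); // replay is rejected
}
```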

&lt;h2&gt;
  
  
  From static config to dynamic pricing
&lt;/h2&gt;

&lt;p&gt;The pricing system is designed as a stack that operators can climb incrementally.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2nwx7krd8bz9by76sqh5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2nwx7krd8bz9by76sqh5.png" alt="From Static Config to Dynamic Pricing" width="800" height="446"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Level 1: Static TOML.&lt;/strong&gt; Start with a &lt;code&gt;default_pricing.toml&lt;/code&gt; and &lt;code&gt;job_pricing.toml&lt;/code&gt;. Set your rates, deploy, done. This covers most operators who want simple, predictable pricing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Level 2: Per-blueprint overrides.&lt;/strong&gt; Add sections for specific blueprint IDs with different rates. Still static config, but now you can price premium workloads differently from commodity ones.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Level 3: Runtime mutation.&lt;/strong&gt; The job pricing config lives behind an &lt;code&gt;Arc&amp;lt;Mutex&amp;lt;&amp;gt;&amp;gt;&lt;/code&gt;, which means it can be updated programmatically at runtime without restarting the server:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;job_config&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;load_job_pricing_from_toml&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="nn"&gt;std&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nn"&gt;fs&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;read_to_string&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"job_pricing.toml"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;?&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;?&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;service&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nn"&gt;PricingEngineService&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;with_job_pricing&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;benchmark_cache&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;pricing_config&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="nn"&gt;Arc&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nn"&gt;Mutex&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;job_config&lt;/span&gt;&lt;span class="p"&gt;)),&lt;/span&gt;
    &lt;span class="n"&gt;signer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;An operator could wire this to a monitoring system that adjusts prices based on load, time of day, or competitive dynamics. The config structure doesn't change; only the values behind the mutex do.&lt;/p&gt;
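&lt;p&gt;A minimal sketch of that wiring, using a plain &lt;code&gt;HashMap&lt;/code&gt; in place of Blueprint's actual pricing config type:&lt;/p&gt;

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

// (service_id, job_index) -> price in wei. The map shape is illustrative;
// the point is the Arc<Mutex<..>> sharing pattern, not the exact type.
type PricingMap = HashMap<(u64, u32), u128>;

/// A monitoring task could call this under load: scale every price
/// by a basis-point multiplier (10_000 bps = unchanged).
fn apply_surge(pricing: &Arc<Mutex<PricingMap>>, multiplier_bps: u128) {
    let mut map = pricing.lock().unwrap();
    for price in map.values_mut() {
        *price = *price * multiplier_bps / 10_000;
    }
}
```

&lt;p&gt;The serving path holds a clone of the same &lt;code&gt;Arc&lt;/code&gt; and sees the new rates on its next lock; no restart, no config reload.&lt;/p&gt;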

&lt;p&gt;&lt;strong&gt;Level 4: x402 settlement with markup.&lt;/strong&gt; Layer on cross-chain stablecoin settlement with per-token exchange rates and markup:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;service&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;service&lt;/span&gt;&lt;span class="nf"&gt;.with_x402_settlement&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;x402_config&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The builder pattern (&lt;code&gt;with_job_pricing().with_subscription_pricing().with_x402_settlement()&lt;/code&gt;) makes each level of sophistication an additive change rather than a rewrite.&lt;/p&gt;

&lt;h2&gt;
  
  
  Standalone quote signing
&lt;/h2&gt;

&lt;p&gt;Not every operator needs the full pricing engine. For simple cases where an operator knows the price and just needs a signed quote, the &lt;a href="https://github.com/tangle-network/blueprint/blob/main/crates/tangle-extra/src/job_quote.rs" rel="noopener noreferrer"&gt;&lt;code&gt;JobQuoteSigner&lt;/code&gt;&lt;/a&gt; can be used directly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;use&lt;/span&gt; &lt;span class="nn"&gt;blueprint_tangle_extra&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nn"&gt;job_quote&lt;/span&gt;&lt;span class="p"&gt;::{&lt;/span&gt;
    &lt;span class="n"&gt;JobQuoteSigner&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;JobQuoteDetails&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;QuoteSigningDomain&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;signer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nn"&gt;JobQuoteSigner&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;keypair&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;QuoteSigningDomain&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;chain_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;verifying_contract&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;signed&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;signer&lt;/span&gt;&lt;span class="nf"&gt;.sign&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;JobQuoteDetails&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;service_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;job_index&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;7&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;price&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nn"&gt;U256&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;from&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;250_000_000_000_000_000u64&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="c1"&gt;// 0.25 ETH&lt;/span&gt;
    &lt;span class="n"&gt;timestamp&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;now&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;expiry&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;now&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;3600&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is useful for operators who compute prices through their own logic (ML-based pricing, auction mechanisms, manual overrides) and just need the signing infrastructure.&lt;/p&gt;

&lt;h2&gt;
  
  
  Sharp edges worth knowing
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Exchange rates are your responsibility.&lt;/strong&gt; The &lt;code&gt;rate_per_native_unit&lt;/code&gt; in your token config is static. If you're accepting USDC and pricing in wei, a 15% ETH price swing means your USD-equivalent revenue shifts by 15%. High-volume operators should automate rate updates from a price feed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Gas costs aren't factored in.&lt;/strong&gt; The pricing engine calculates the cost of compute, not the cost of submitting the quote on-chain. For expensive jobs (0.25 ETH), gas is a rounding error. For cheap jobs (0.001 ETH), gas might be a meaningful fraction of the price. Operators should account for this in their markup or base prices.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Benchmarks measure the machine, not the blueprint.&lt;/strong&gt; The benchmark system profiles the operator's hardware: CPU speed, memory bandwidth, storage throughput. It doesn't measure what a specific blueprint actually consumes during execution. If your blueprint uses 10% of available CPU, the benchmark still reflects 100% capacity. The &lt;code&gt;count&lt;/code&gt; field in resource config is where operators specify expected per-unit resource consumption.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The security factor is a placeholder.&lt;/strong&gt; &lt;code&gt;calculate_security_rate_adjustment()&lt;/code&gt; returns &lt;code&gt;Decimal::ONE&lt;/code&gt; today. The function signature and its position in the pricing formula signal that security-weighted pricing is planned, but it doesn't affect prices yet.&lt;/p&gt;

&lt;h2&gt;
  
  
  Putting it together
&lt;/h2&gt;

&lt;p&gt;The pricing engine is built on a specific thesis: blockchain API pricing should be as structured and auditable as cloud pricing, but with cryptographic guarantees that cloud providers don't offer. An AWS pricing page tells you what things cost. A signed EIP-712 quote proves what things cost, who said so, when they said it, and that nobody changed the numbers in transit.&lt;/p&gt;

&lt;p&gt;For operators, the practical upshot is a system that starts simple (edit a TOML file, set your rates) and scales to sophisticated (dynamic pricing, multi-token settlement, per-blueprint overrides) without architectural changes. The pricing formula is transparent, the conversion math is explicit, and the anti-abuse stack is built in rather than bolted on.&lt;/p&gt;

&lt;p&gt;The next article in this series covers the settlement flow end-to-end: what happens after the client receives a signed quote, how the x402 payment proof is constructed, and how the on-chain verification contract validates everything before compute starts.&lt;/p&gt;

&lt;h2&gt;
  
  
  FAQ
&lt;/h2&gt;

&lt;h3&gt;
  
  
  How do operators decide what to charge per job in wei?
&lt;/h3&gt;

&lt;p&gt;The engine doesn't prescribe a specific method. Operators set the &lt;code&gt;(service_id, job_index) → wei_amount&lt;/code&gt; mapping in &lt;code&gt;job_pricing.toml&lt;/code&gt; based on their own cost analysis. For compute-heavy jobs, estimate your per-call infrastructure cost (GPU time, memory, network), convert to wei at current ETH rates, and add your desired margin. The &lt;code&gt;markup_bps&lt;/code&gt; field in the x402 config gives you a separate knob for margin on the settlement side, so you can keep base prices at cost and let markup handle profit.&lt;/p&gt;
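&lt;p&gt;As a worked example of that method, with made-up numbers (a per-call cost of $0.08, ETH at $3,200, a 20% margin):&lt;/p&gt;

```rust
/// Convert a per-call USD cost into a wei price with a margin.
/// All inputs are illustrative; cents avoid floating point.
fn price_in_wei(cost_usd_cents: u128, eth_usd_cents: u128, margin_bps: u128) -> u128 {
    const WEI_PER_ETH: u128 = 1_000_000_000_000_000_000;
    // USD cost expressed in wei at the current ETH price...
    let base_wei = cost_usd_cents * WEI_PER_ETH / eth_usd_cents;
    // ...plus margin in basis points (10_000 bps = at cost).
    base_wei * (10_000 + margin_bps) / 10_000
}
```

&lt;p&gt;$0.08 at $3,200/ETH is 0.000025 ETH; with a 20% margin the quoted price comes out to 30,000,000,000,000 wei.&lt;/p&gt;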

&lt;h3&gt;
  
  
  Can an operator accept multiple tokens on different chains?
&lt;/h3&gt;

&lt;p&gt;Yes, configure one &lt;code&gt;[[accepted_tokens]]&lt;/code&gt; block per token/chain combination. Each block has its own exchange rate, markup, decimals, and pay-to address. A single operator can accept USDC on Base, USDT on Ethereum, and DAI on Arbitrum simultaneously. The conversion formula runs independently for each token, so different decimals and rates are handled automatically.&lt;/p&gt;
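&lt;p&gt;The per-token independence can be sketched as a pure function. This shows the shape of the math, not Blueprint's actual formula; it assumes &lt;code&gt;rate_per_native_unit&lt;/code&gt; is the token price of one native coin, expressed here in cents:&lt;/p&gt;

```rust
/// Convert a wei-denominated price into a token's smallest units,
/// applying that token's own rate, decimals, and markup.
/// Note: a production version would need overflow-safe wide arithmetic.
fn wei_to_token_units(
    price_wei: u128,
    rate_cents: u128,    // e.g. 3200.00 USDC per ETH -> 320_000
    token_decimals: u32, // 6 for USDC
    markup_bps: u128,    // 200 = 2%
) -> u128 {
    const WEI_PER_ETH: u128 = 1_000_000_000_000_000_000;
    let base = price_wei * rate_cents * 10u128.pow(token_decimals) / 100 / WEI_PER_ETH;
    base * (10_000 + markup_bps) / 10_000
}
```

&lt;p&gt;Running the same function per &lt;code&gt;[[accepted_tokens]]&lt;/code&gt; block, each with its own rate and decimals, is what makes multi-token support automatic.&lt;/p&gt;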

&lt;h3&gt;
  
  
  What happens if a quote expires before the client submits it?
&lt;/h3&gt;

&lt;p&gt;The on-chain contract rejects it. Quotes default to 5 minutes of validity (configurable via &lt;code&gt;quote_validity_duration_secs&lt;/code&gt;), and the contract enforces a hard 1-hour &lt;code&gt;maxQuoteAge&lt;/code&gt; ceiling regardless of the operator's setting. Clients should treat quotes as short-lived and re-fetch before submission if their pipeline has latency.&lt;/p&gt;

&lt;h3&gt;
  
  
  How does the proof-of-work difficulty affect client experience?
&lt;/h3&gt;

&lt;p&gt;At 20 leading zero bits (the default), solving takes roughly tens of milliseconds on modern hardware. Legitimate clients making occasional requests won't notice it. Attackers trying to scrape or spam the quoting endpoint at thousands of requests per second will. Operators can adjust the difficulty parameter if their threat model or client base requires it.&lt;/p&gt;
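&lt;p&gt;The difficulty check itself is simple to state in code. This sketch covers only the leading-zero-bit count; the hash function and challenge format are whatever the quoting endpoint defines:&lt;/p&gt;

```rust
/// Count leading zero bits of a hash output.
fn leading_zero_bits(hash: &[u8]) -> u32 {
    let mut bits = 0;
    for byte in hash {
        if *byte == 0 {
            bits += 8;
        } else {
            bits += byte.leading_zeros(); // zeros within this byte
            break;
        }
    }
    bits
}

/// A nonce is valid when its hash clears the difficulty threshold.
fn meets_difficulty(hash: &[u8], difficulty_bits: u32) -> bool {
    leading_zero_bits(hash) >= difficulty_bits
}
```

&lt;p&gt;At 20 bits a client needs on average about 2^20 (roughly a million) hash attempts per quote, which is why the cost lands on spammers rather than on occasional legitimate callers.&lt;/p&gt;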

&lt;h3&gt;
  
  
  Can I use the pricing engine without x402 settlement?
&lt;/h3&gt;

&lt;p&gt;Yes. The x402 settlement layer is added via the &lt;code&gt;with_x402_settlement()&lt;/code&gt; builder method and is entirely optional. Without it, the engine produces quotes denominated in the native unit (wei for jobs, scaled USD for resources) and operators handle settlement through their own mechanism or the standard on-chain &lt;code&gt;submitJob&lt;/code&gt; flow.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Build with Tangle&lt;/strong&gt; | &lt;a href="https://tangle.tools" rel="noopener noreferrer"&gt;Website&lt;/a&gt; | &lt;a href="https://github.com/tangle-network" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; | &lt;a href="https://discord.gg/tangle" rel="noopener noreferrer"&gt;Discord&lt;/a&gt; | &lt;a href="https://t.me/tanglenet" rel="noopener noreferrer"&gt;Telegram&lt;/a&gt; | &lt;a href="https://x.com/taborgroup" rel="noopener noreferrer"&gt;X/Twitter&lt;/a&gt;&lt;/p&gt;

</description>
      <category>x402</category>
      <category>blockchain</category>
      <category>pricing</category>
      <category>tangle</category>
    </item>
    <item>
      <title>The x402 Facilitator Problem: Removing the Centralized Trust Bottleneck</title>
      <dc:creator>Tangle Network</dc:creator>
      <pubDate>Mon, 23 Mar 2026 01:10:48 +0000</pubDate>
      <link>https://dev.to/tangle_network/the-x402-facilitator-problem-removing-the-centralized-trust-bottleneck-107</link>
      <guid>https://dev.to/tangle_network/the-x402-facilitator-problem-removing-the-centralized-trust-bottleneck-107</guid>
      <description>&lt;h1&gt;
  
  
  The x402 Facilitator Problem
&lt;/h1&gt;

&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://tangle.tools/blog/decentralizing-x402-facilitator" rel="noopener noreferrer"&gt;tangle.tools&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  A Permissionless Protocol with a Permissioned Bottleneck
&lt;/h2&gt;

&lt;p&gt;HTTP status code 402 sat dormant for 27 years before Coinbase and Cloudflare's x402 protocol gave it a purpose: a client hits an endpoint, receives pricing info in a 402 response, signs a stablecoin payment off-chain, and resends the request with cryptographic proof of payment. No API keys. No billing dashboards. No invoices. Just math proving money moved before compute burned. But every x402 payment today routes through a single centralized HTTP service called the facilitator, which verifies and settles stablecoin transfers with zero cryptographic proof of correctness. This means a compromised facilitator can fabricate settlements, censor valid payments, or take down every x402-gated service by going offline. The fix is a phased migration: first verify the facilitator's claims against on-chain receipts, then move signature validation into the operator's gateway, then have operators submit settlement transactions directly on L2s where gas costs are sub-cent, and finally enforce payment-job atomicity through on-chain slashing.&lt;/p&gt;

&lt;p&gt;The protocol itself is clever. EIP-3009's &lt;code&gt;transferWithAuthorization&lt;/code&gt; lets clients sign USDC transfers without spending gas. The server verifies the signature, settles the payment on-chain, and serves the response. For Blueprint operators, this means any job can become a paid HTTP endpoint with a TOML config change. But between the client's signed payment and the operator's job execution sits that single facilitator service, and it deserves a lot more scrutiny than it gets.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Three Failure Modes
&lt;/h3&gt;

&lt;p&gt;The x402 facilitator problem is the set of risks created by routing all payments through a single unverified HTTP service. Concretely, there are three failure modes:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Fabrication.&lt;/strong&gt; A compromised facilitator can claim payments settled when they didn't. The operator runs the job, burns compute, and receives nothing. The gateway never checks the chain.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Censorship.&lt;/strong&gt; The facilitator can refuse to verify or settle valid payments. Since it's the only settlement path, a censoring facilitator locks clients out of services entirely.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Downtime.&lt;/strong&gt; If &lt;code&gt;https://facilitator.x402.rs&lt;/code&gt; goes offline, every x402-gated Blueprint stops accepting payments. Not because anything is wrong with the operator, the client, or the blockchain. Because one HTTP endpoint is unreachable.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;A protocol built for permissionless infrastructure has a permissioned chokepoint at its economic core. This post breaks down exactly where the trust assumptions hide, why the incentive structure is wrong, and what a concrete path to removing the bottleneck looks like.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the Facilitator Actually Does
&lt;/h2&gt;

&lt;p&gt;The facilitator is an HTTP service that exposes three endpoints:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;POST /v2/x402/verify    — confirm a client's signed payment is valid
POST /v2/x402/settle    — execute the on-chain token transfer
GET  /v2/x402/supported — list supported chains and tokens
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When a client sends a request with a payment header, the operator's gateway calls verify first, then settle. If both succeed, the job runs. The entire flow:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Client ──► x402 HTTP Server ──► Payment Verified ──► JobCall injected
              (axum + x402-axum)     (facilitator)       (Producer stream)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In Blueprint's gateway, the facilitator is wired in as middleware:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="nn"&gt;X402Middleware&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;self&lt;/span&gt;&lt;span class="py"&gt;.config.facilitator_url&lt;/span&gt;&lt;span class="nf"&gt;.as_str&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
    &lt;span class="nf"&gt;.settle_before_execution&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And the facilitator itself is configured with a single field:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;&lt;span class="py"&gt;facilitator_url&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"https://facilitator.x402.rs"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One URL. No fallback. No quorum. No alternative.&lt;/p&gt;

&lt;h2&gt;
  
  
  Trusted by Default, Verified by Nobody
&lt;/h2&gt;

&lt;p&gt;Look at how the gateway handles the facilitator's settlement response:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;parse_settlement_details&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;HeaderMap&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;Option&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;SettlementDetails&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;raw&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;headers&lt;/span&gt;&lt;span class="nf"&gt;.get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;HEADER_SETTLEMENT&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;?&lt;/span&gt;&lt;span class="nf"&gt;.as_bytes&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;decoded&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nn"&gt;Base64Bytes&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;from&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;raw&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="nf"&gt;.decode&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="nf"&gt;.ok&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="o"&gt;?&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;settlement&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;SettleResponse&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nn"&gt;serde_json&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;from_slice&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;decoded&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="nf"&gt;.ok&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="o"&gt;?&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;match&lt;/span&gt; &lt;span class="n"&gt;settlement&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nn"&gt;SettleResponse&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;Success&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;payer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;network&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;..&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="cm"&gt;/* trusted */&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="nn"&gt;SettleResponse&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;Error&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;network&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;..&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="cm"&gt;/* trusted */&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The gateway decodes a base64 blob from a response header, parses it as JSON, and acts on whatever it says. There is no signature over the settlement response. No on-chain receipt verification. No transaction hash lookup. The facilitator says "payment settled," and the gateway believes it.&lt;/p&gt;

&lt;p&gt;Other decentralized payment relay systems don't work this way. Lightning Network nodes never trust a routing peer's claim that a payment forwarded successfully. Every hop produces a cryptographic preimage that proves the payment reached its destination, and the sender can verify this proof independently. Meta-transaction relayers like OpenGSN require relayers to post bonds and produce on-chain receipts that the relayed transaction executed; if a relayer claims to have relayed but didn't, the bond is forfeitable. The x402 facilitator offers neither cryptographic proofs nor economic bonds. It occupies a design space where the operator simply takes the facilitator's word for it.&lt;/p&gt;

&lt;p&gt;The protocol spec is explicitly facilitator-agnostic. The config takes any URL. But in practice, one implementation exists, operated by one entity, with no cryptographic accountability for its outputs.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Incentive Misalignment
&lt;/h2&gt;

&lt;p&gt;Beyond the trust gap, the current architecture puts the wrong party in control of payment verification.&lt;/p&gt;

&lt;p&gt;Consider who has skin in the game. The operator runs infrastructure, pays for compute, and needs to be paid for jobs. The client wants a service and is willing to pay. The facilitator? The facilitator is a third party with no economic stake in either side of the transaction.&lt;/p&gt;

&lt;p&gt;Today's operator config includes economics that are entirely honor-system:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;&lt;span class="nn"&gt;[[accepted_tokens]]&lt;/span&gt;
&lt;span class="py"&gt;network&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"eip155:8453"&lt;/span&gt;
&lt;span class="py"&gt;asset&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"0x833589fCD6eDb6E08f4c7C32D4f71b54bdA02913"&lt;/span&gt;
&lt;span class="py"&gt;symbol&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"USDC"&lt;/span&gt;
&lt;span class="py"&gt;decimals&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;6&lt;/span&gt;
&lt;span class="py"&gt;pay_to&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"0xYourOperatorAddressOnBase"&lt;/span&gt;
&lt;span class="py"&gt;rate_per_native_unit&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"3200.00"&lt;/span&gt;
&lt;span class="py"&gt;markup_bps&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;markup_bps&lt;/code&gt; field (200 basis points, a 2% markup) is set locally in TOML. Nothing enforces it on-chain. The quote registry that prices jobs is ephemeral and in-memory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="nn"&gt;QuoteRegistry&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nn"&gt;Duration&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;from_secs&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="py"&gt;.quote_ttl_secs&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;  &lt;span class="c1"&gt;// default 300s TTL&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Quotes live for five minutes, in RAM, on one machine. There's no audit trail. No way for a client to prove they were quoted a different price than what was settled. No way for an operator to prove the facilitator settled the correct amount.&lt;/p&gt;

&lt;p&gt;This is an architecture where the party verifying payment correctness is the party with the least reason to care about payment correctness.&lt;/p&gt;

&lt;h2&gt;
  
  
  The On-Chain Pieces Already Exist
&lt;/h2&gt;

&lt;p&gt;Most of the hard work is already done. The missing piece isn't cryptography or smart contracts. It's a different arrangement of components that already exist.&lt;/p&gt;

&lt;h3&gt;
  
  
  Verification is already a client-side operation
&lt;/h3&gt;

&lt;p&gt;EIP-3009's &lt;code&gt;transferWithAuthorization&lt;/code&gt; produces a signature over a structured payload: from, to, value, validAfter, validBefore, nonce. Anyone with access to the token contract can verify this signature. You don't need a facilitator to tell you a payment authorization is valid. You need &lt;code&gt;ecrecover&lt;/code&gt; and a balance check.&lt;/p&gt;

&lt;p&gt;The gateway already makes &lt;code&gt;eth_call&lt;/code&gt; for other purposes. Restricted job access control works exactly this way:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="c1"&gt;// eth_call dry-run against Tangle contract&lt;/span&gt;
&lt;span class="nf"&gt;isPermittedCaller&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;uint64&lt;/span&gt; &lt;span class="n"&gt;service_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;address&lt;/span&gt; &lt;span class="n"&gt;caller&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is an on-chain verification that the gateway performs directly, without trusting a third party. The same pattern applies to payment verification: call the token contract, check the authorization signature, confirm the balance covers the amount.&lt;/p&gt;

&lt;h3&gt;
  
  
  Settlement is a single transaction
&lt;/h3&gt;

&lt;p&gt;The facilitator's settle step submits a &lt;code&gt;transferWithAuthorization&lt;/code&gt; call to the USDC contract (or any other token that implements EIP-3009). This is one transaction. The authorization is already signed by the client. Any address with gas can submit it. There is nothing about this transaction that requires a privileged intermediary.&lt;/p&gt;

&lt;p&gt;The facilitator exists because someone needs to pay gas for settlement, and having the client do it would add friction. But operators already run Tangle nodes. They already have RPC access. They already have funded accounts for on-chain operations. Submitting one additional transaction per paid job is operationally trivial for infrastructure that's already making &lt;code&gt;eth_call&lt;/code&gt; for access control.&lt;/p&gt;

&lt;h3&gt;
  
  
  What a local facilitator looks like
&lt;/h3&gt;

&lt;p&gt;Replace the HTTP call to &lt;code&gt;facilitator.x402.rs&lt;/code&gt; with a local module that:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Validates the &lt;code&gt;transferWithAuthorization&lt;/code&gt; signature against the token contract using &lt;code&gt;eth_call&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Checks the payer's balance covers the payment amount&lt;/li&gt;
&lt;li&gt;Submits the &lt;code&gt;transferWithAuthorization&lt;/code&gt; transaction to the chain&lt;/li&gt;
&lt;li&gt;Waits for the transaction receipt&lt;/li&gt;
&lt;li&gt;Returns the receipt (with transaction hash) to the gateway&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Steps 1-2 replace the facilitator's verify endpoint. Steps 3-5 replace settle. The gateway already has RPC access for &lt;code&gt;isPermittedCaller&lt;/code&gt;. The same provider handles payment settlement.&lt;/p&gt;
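&lt;p&gt;The five steps can be sketched as one function over an abstract RPC provider. The trait and every name in it are invented for illustration; a real implementation would sit on an Ethereum client library such as alloy:&lt;/p&gt;

```rust
struct Receipt {
    tx_hash: [u8; 32],
    status: u64, // 1 = success
}

/// Abstract RPC surface; a real provider would back these with
/// eth_call, eth_getBalance, eth_sendRawTransaction, and so on.
trait Rpc {
    fn verify_authorization(&self, auth: &[u8]) -> bool; // step 1
    fn balance_of(&self, payer: &str) -> u128;           // step 2
    fn submit_transfer(&self, auth: &[u8]) -> [u8; 32];  // step 3
    fn wait_for_receipt(&self, tx: [u8; 32]) -> Receipt; // step 4
}

/// Verify-and-settle without a remote facilitator: the gateway gets
/// back a receipt with a transaction hash, not an unsigned claim.
fn settle_locally(
    rpc: &impl Rpc,
    auth: &[u8],
    payer: &str,
    amount: u128,
) -> Result<Receipt, &'static str> {
    if !rpc.verify_authorization(auth) {
        return Err("invalid transferWithAuthorization signature");
    }
    if rpc.balance_of(payer) < amount {
        return Err("insufficient balance");
    }
    let tx = rpc.submit_transfer(auth);
    let receipt = rpc.wait_for_receipt(tx);
    if receipt.status == 1 { Ok(receipt) } else { Err("settlement tx reverted") } // step 5
}
```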

&lt;p&gt;The config change:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;&lt;span class="c"&gt;# Before: trust a third party&lt;/span&gt;
&lt;span class="py"&gt;facilitator_url&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"https://facilitator.x402.rs"&lt;/span&gt;

&lt;span class="c"&gt;# After: verify and settle locally&lt;/span&gt;
&lt;span class="py"&gt;facilitator_mode&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"local"&lt;/span&gt;
&lt;span class="py"&gt;settlement_rpc&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"https://base-mainnet.g.alchemy.com/v2/..."&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No new cryptography. No new smart contracts. Just cutting out the middleman from operations the gateway can already perform.&lt;/p&gt;

&lt;h2&gt;
  
  
  Removing Trust Assumptions, One Layer at a Time
&lt;/h2&gt;

&lt;p&gt;Ripping out the facilitator in one shot would be reckless. The right approach removes trust assumptions incrementally. Each phase is independently deployable and backward-compatible, and each one closes a specific class of attack.&lt;/p&gt;

&lt;h3&gt;
  
  
  Phase 1: A compromised facilitator can no longer fabricate settlements
&lt;/h3&gt;

&lt;p&gt;Keep the facilitator in the loop, but stop trusting it blindly. After the gateway receives a &lt;code&gt;SettleResponse::Success&lt;/code&gt;, check the chain:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="c1"&gt;// After facilitator claims settlement succeeded&lt;/span&gt;
&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;receipt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;provider&lt;/span&gt;&lt;span class="nf"&gt;.get_transaction_receipt&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;tx_hash&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="k"&gt;.await&lt;/span&gt;&lt;span class="o"&gt;?&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;match&lt;/span&gt; &lt;span class="n"&gt;receipt&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nf"&gt;Some&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;r&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;r&lt;/span&gt;&lt;span class="py"&gt;.status&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="k"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="cm"&gt;/* confirmed on-chain, proceed */&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="n"&gt;_&lt;/span&gt; &lt;span class="k"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="cm"&gt;/* facilitator lied or tx failed, reject */&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is a read-only change. The facilitator still does the work. But the gateway independently confirms that work actually happened. After this phase, the fabrication attack vector is closed: a facilitator that claims "payment settled" when it didn't gets caught by the receipt check. The operator never runs a job without on-chain proof.&lt;/p&gt;

&lt;p&gt;This adds one &lt;code&gt;eth_getTransactionReceipt&lt;/code&gt; RPC call per payment, roughly 50-100ms on L2s. Negligible for job execution workloads where the job itself takes seconds to minutes.&lt;/p&gt;

&lt;h3&gt;
  
  
  Phase 2: The facilitator can no longer censor valid payments
&lt;/h3&gt;

&lt;p&gt;Move signature validation and balance checking into the gateway. The facilitator's verify endpoint becomes redundant. The gateway calls the facilitator only for settlement (the part that costs gas), and then confirms the settlement on-chain per Phase 1.&lt;/p&gt;

&lt;p&gt;The trust surface shrinks here. Before Phase 2, a censoring facilitator could silently refuse to verify valid payments, and neither the operator nor the client would know whether the refusal was legitimate. After Phase 2, the gateway knows independently whether a payment is valid. If the facilitator refuses to settle a payment the gateway has already verified, the operator can log the discrepancy, alert on it, or fall back to self-settlement.&lt;/p&gt;
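&lt;p&gt;The Phase 2 control flow amounts to a small decision table. The sketch below illustrates that logic with hypothetical type and variant names (the gateway crate does not expose these; they exist only to make the table concrete):&lt;/p&gt;

```rust
// Hypothetical Phase 2 decision logic: the gateway verifies locally first,
// then treats the facilitator's answer as advisory rather than authoritative.
// Type and variant names are illustrative, not the crate's API.
#[derive(Debug, PartialEq)]
enum Action {
    Settle,             // locally valid; proceed to settlement
    Reject,             // locally invalid; no facilitator call needed
    AlertAndSelfSettle, // locally valid but facilitator refused: possible censorship
}

fn decide(locally_valid: bool, facilitator_agreed: Option<bool>) -> Action {
    match (locally_valid, facilitator_agreed) {
        (false, _) => Action::Reject,
        (true, Some(true) | None) => Action::Settle,
        (true, Some(false)) => Action::AlertAndSelfSettle,
    }
}

fn main() {
    // A censoring facilitator no longer blocks a payment the gateway
    // has independently verified.
    assert_eq!(decide(true, Some(false)), Action::AlertAndSelfSettle);
    assert_eq!(decide(false, Some(true)), Action::Reject);
}
```

&lt;p&gt;The key row is the last one: before Phase 2, &lt;code&gt;(true, Some(false))&lt;/code&gt; was indistinguishable from &lt;code&gt;(false, _)&lt;/code&gt; because only the facilitator knew whether the payment was valid.&lt;/p&gt;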

&lt;h3&gt;
  
  
  Phase 3: The facilitator is no longer in the path at all
&lt;/h3&gt;

&lt;p&gt;The gateway submits &lt;code&gt;transferWithAuthorization&lt;/code&gt; transactions directly. The &lt;code&gt;facilitator_url&lt;/code&gt; field is removed from config entirely. The operator pays gas for settlement, fractions of a cent on L2s like Base, and recoups it through &lt;code&gt;markup_bps&lt;/code&gt;. That field, which was previously just a number in a TOML file with no enforcement, now reflects a real cost the operator bears and a real margin they control.&lt;/p&gt;

&lt;p&gt;After this phase, operator availability depends only on the operator's own infrastructure and the underlying chain. No third-party HTTP service can take down payment processing.&lt;/p&gt;

&lt;h3&gt;
  
  
  Phase 4: Misbehavior becomes economically irrational
&lt;/h3&gt;

&lt;p&gt;Move the quote registry on-chain so pricing is transparent and auditable. Enforce &lt;code&gt;markup_bps&lt;/code&gt; in a smart contract rather than TOML config. Add slashing conditions: if an operator accepts payment (verifiable on-chain via the &lt;code&gt;transferWithAuthorization&lt;/code&gt; receipt) but doesn't execute the job (verifiable via Tangle's job result reporting), their stake gets slashed.&lt;/p&gt;

&lt;p&gt;The Tangle staking and slashing infrastructure already exists for Blueprint operators. Extending it to cover payment-job atomicity is an extension of existing infrastructure, not a new primitive. The trust model shifts from "trust the facilitator, then trust the operator" to "misbehavior by anyone is economically punishable."&lt;/p&gt;

&lt;h2&gt;
  
  
  The Gas Question
&lt;/h2&gt;

&lt;p&gt;The obvious objection: if operators settle payments themselves, they pay gas. The facilitator abstracts this cost today.&lt;/p&gt;

&lt;p&gt;On Ethereum mainnet, this is a real concern. A &lt;code&gt;transferWithAuthorization&lt;/code&gt; call costs roughly 50,000-80,000 gas. At 30 gwei and $3,200 ETH, that's $5-8 per settlement. For a $0.50 API call, the economics don't work.&lt;/p&gt;

&lt;p&gt;But x402 Blueprint deployments primarily target L2s. On Base, the same transaction costs fractions of a cent. The operator config in the sources explicitly targets Base:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;&lt;span class="py"&gt;network&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"eip155:8453"&lt;/span&gt;  &lt;span class="c"&gt;# Base&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;At L2 gas prices, self-settlement adds negligible cost. The &lt;code&gt;markup_bps&lt;/code&gt; field exists precisely to let operators cover their costs. An operator running on Base who sets &lt;code&gt;markup_bps = 200&lt;/code&gt; (2%) on a $1.00 job call earns $0.02 in markup, more than enough to cover a sub-cent gas cost.&lt;/p&gt;

&lt;p&gt;For mainnet settlement, a batching approach works: accumulate multiple &lt;code&gt;transferWithAuthorization&lt;/code&gt; signatures and submit them in a single multicall transaction, amortizing gas across many payments. This adds latency (the operator extends credit for the batching window) but keeps gas costs manageable.&lt;/p&gt;

&lt;p&gt;When operators submit &lt;code&gt;transferWithAuthorization&lt;/code&gt; transactions through public RPCs, they enter the mempool where searchers can observe them. But the practical MEV risk is negligible: &lt;code&gt;transferWithAuthorization&lt;/code&gt; transfers tokens from a specific address to a specific address with a specific nonce, so there's no extractable value from front-running or redirecting the funds. On L2s where gas is already cheap, sandwich profits would be trivial. For operators who want additional protection, private RPC endpoints (Flashbots Protect on mainnet, equivalent services on L2s) solve this without architectural changes.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Blueprint-as-Facilitator Model
&lt;/h2&gt;

&lt;p&gt;There's a clear endgame. Facilitator logic (verify signature, check balance, submit transaction, return receipt) is a stateless, deterministic service. That's exactly what Blueprints are designed to run.&lt;/p&gt;

&lt;p&gt;A "Facilitator Blueprint" would be a Tangle service that operators opt into. Instead of each operator running their own settlement logic, a set of staked operators collectively provide facilitation as a service. Payment verification happens through the same job execution and result reporting infrastructure that all Blueprints use. Incorrect facilitation (claiming a payment settled when it didn't) is caught by Tangle's result verification and punished via slashing.&lt;/p&gt;

&lt;p&gt;This turns the facilitator from a trusted singleton into a decentralized service with cryptoeconomic guarantees. The trust model shifts from "Coinbase won't misbehave" to "misbehavior is economically irrational because the facilitator's stake exceeds the value of any individual payment."&lt;/p&gt;

&lt;p&gt;The config for an operator using this model:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;&lt;span class="py"&gt;facilitator_mode&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"blueprint"&lt;/span&gt;
&lt;span class="py"&gt;facilitator_service_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;42&lt;/span&gt;  &lt;span class="c"&gt;# Facilitator Blueprint service&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The gateway calls the Facilitator Blueprint through Tangle's job submission, receives a signed result with an on-chain receipt, and proceeds. Same flow, different trust model.&lt;/p&gt;

&lt;h2&gt;
  
  
  Remaining Open Questions
&lt;/h2&gt;

&lt;p&gt;Two areas deserve deeper treatment than this post can give them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Multi-chain complexity.&lt;/strong&gt; The current facilitator handles settlement across Base, Ethereum, Polygon, Arbitrum, and Optimism. Each chain has different gas markets, confirmation times, and token contract addresses. A self-settling operator needs RPC endpoints and funded accounts on every chain they accept payment on. This is manageable but not free, and the operational burden scales with the number of supported chains.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Permit2 path.&lt;/strong&gt; EIP-3009 (&lt;code&gt;transferWithAuthorization&lt;/code&gt;) is natively supported by USDC and DAI with clean, self-contained signatures. The Permit2 fallback, used for tokens without native meta-transaction support, involves a different approval flow. The decentralization path described here applies most cleanly to EIP-3009 tokens. Permit2 settlement through a decentralized facilitator is feasible but adds complexity that deserves its own analysis.&lt;/p&gt;

&lt;p&gt;Note that the &lt;a href="https://github.com/coinbase/x402" rel="noopener noreferrer"&gt;x402 protocol repository&lt;/a&gt; contains reference facilitator code, but Coinbase's production deployment at &lt;code&gt;facilitator.x402.rs&lt;/code&gt; may differ from the open-source reference. Operators building local facilitator logic should work from the protocol spec and reference implementation, not assume the production service behaves identically. For operators in regulated jurisdictions, note that the centralized facilitator may serve compliance functions (KYC/AML screening) that pure on-chain settlement would bypass. The decentralization path here targets permissionless contexts; compliance-sensitive operators can still enforce policy through transparent, auditable on-chain logic rather than opaque HTTP responses.&lt;/p&gt;

&lt;h2&gt;
  
  
  From Convenience to Correctness
&lt;/h2&gt;

&lt;p&gt;The x402 facilitator was the right design for bootstrapping the protocol. Coinbase hosting a reliable settlement service removed friction and let the ecosystem focus on building applications rather than payment infrastructure. That's how protocols grow: centralize the hard parts, prove the concept, then decentralize.&lt;/p&gt;

&lt;p&gt;But the protocol is past the bootstrap phase. Real operators are running real services with real money flowing through a single point of trust. The gateway code is architecturally ready for decentralization. It already makes &lt;code&gt;eth_call&lt;/code&gt; for on-chain verification. The EIP-3009 signatures it processes are publicly verifiable. The L2s it targets make self-settlement economically viable.&lt;/p&gt;

&lt;p&gt;The migration path is incremental: verify the facilitator's claims, then verify payments locally, then settle locally, then move the whole thing on-chain with cryptoeconomic guarantees. Each step is independently valuable. Each step removes a specific trust assumption. None of them require breaking changes to the x402 protocol itself.&lt;/p&gt;

&lt;p&gt;The protocol's original insight was right: HTTP 402 should mean "pay and retry." The next step is making sure "pay" doesn't require asking permission from a single server in Virginia.&lt;/p&gt;

&lt;h2&gt;
  
  
  FAQ
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Can operators run a local facilitator today without any protocol changes?&lt;/strong&gt;&lt;br&gt;
Yes. The &lt;code&gt;facilitator_url&lt;/code&gt; config field accepts any URL, so an operator can deploy their own facilitator service that implements the same three endpoints (&lt;code&gt;verify&lt;/code&gt;, &lt;code&gt;settle&lt;/code&gt;, &lt;code&gt;supported&lt;/code&gt;). The &lt;a href="https://github.com/coinbase/x402" rel="noopener noreferrer"&gt;x402 reference implementation&lt;/a&gt; provides a starting point. The phased approach described here goes further by eliminating the facilitator abstraction entirely, but self-hosting is possible right now.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What happens to in-flight payments during a migration from centralized to local facilitation?&lt;/strong&gt;&lt;br&gt;
Each phase is backward-compatible. In Phase 1 (receipt verification), the facilitator still handles settlement; the gateway just adds a check afterward. An operator can switch modes between jobs with no coordination required. Payments in progress complete through whichever facilitator mode was active when the job started.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How does self-settlement affect payment finality guarantees for the client?&lt;/strong&gt;&lt;br&gt;
It improves them. Today, the client trusts the facilitator to settle honestly, with no proof. With operator self-settlement, the operator has direct economic incentive to settle correctly (they don't get paid otherwise), and the on-chain receipt is publicly verifiable by both parties. The client can independently confirm their payment landed by checking the transaction hash.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why not skip straight to Phase 4 (on-chain enforcement)?&lt;/strong&gt;&lt;br&gt;
Phase 4 requires smart contract development, slashing parameter tuning, and governance decisions about stake requirements. Phases 1-3 are pure gateway logic changes that a single operator can deploy independently. The incremental approach lets operators capture most of the security benefit immediately while the protocol-level work proceeds in parallel.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Does this break compatibility with clients built against the current x402 spec?&lt;/strong&gt;&lt;br&gt;
No. The client-side protocol is unchanged across all four phases. Clients still send the same payment headers, receive the same 402 responses, and sign the same EIP-3009 authorizations. The decentralization happens entirely on the server side, in how the gateway processes and settles those payments.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Build with Tangle&lt;/strong&gt; | &lt;a href="https://tangle.tools" rel="noopener noreferrer"&gt;Website&lt;/a&gt; | &lt;a href="https://github.com/tangle-network" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; | &lt;a href="https://discord.gg/tangle" rel="noopener noreferrer"&gt;Discord&lt;/a&gt; | &lt;a href="https://t.me/tanglenet" rel="noopener noreferrer"&gt;Telegram&lt;/a&gt; | &lt;a href="https://x.com/taborgroup" rel="noopener noreferrer"&gt;X/Twitter&lt;/a&gt;&lt;/p&gt;

</description>
      <category>x402</category>
      <category>blockchain</category>
      <category>payments</category>
      <category>webdev</category>
    </item>
    <item>
      <title>How Blueprint SDK Turns x402 Payments into Runnable Jobs</title>
      <dc:creator>Tangle Network</dc:creator>
      <pubDate>Sat, 21 Mar 2026 01:13:47 +0000</pubDate>
      <link>https://dev.to/tangle_network/how-blueprint-sdk-turns-x402-payments-into-runnable-jobs-2hjm</link>
      <guid>https://dev.to/tangle_network/how-blueprint-sdk-turns-x402-payments-into-runnable-jobs-2hjm</guid>
      <description>&lt;h2&gt;
  
  
  HTTP Status 402 Finally Has a Job
&lt;/h2&gt;

&lt;p&gt;HTTP status code 402 has been "reserved for future use" since 1999. For twenty-six years it sat in the spec, a placeholder for a payments web that never materialized. In 2025, Coinbase and Cloudflare's &lt;a href="https://x402.org" rel="noopener noreferrer"&gt;x402 protocol&lt;/a&gt; gave it real work: a client hits an endpoint, receives a 402 response with pricing information, signs a stablecoin payment, and resends the request with proof of settlement attached as an HTTP header. No API keys, no billing dashboard, no monthly invoices. Just cryptographic proof that money moved before compute burned.&lt;/p&gt;

&lt;p&gt;Blueprint SDK integrates the x402 payment protocol through its &lt;code&gt;blueprint_x402&lt;/code&gt; crate, which runs an axum HTTP server that verifies x402 payment headers via an external facilitator, settles payments on-chain before execution, and converts each verified payment into a standard &lt;code&gt;JobCall&lt;/code&gt;. If you're new to the blueprint model, &lt;a href="https://dev.to/post/how-blueprints-work"&gt;How Blueprints Work&lt;/a&gt; covers the architecture. Job handlers work identically whether triggered by x402 payments on Base, on-chain Tangle events, or cron. You write a function that takes bytes and returns bytes, set prices in a TOML config, and the gateway handles payment verification, multi-chain exchange rate conversion, access control, and replay protection. Integration requires adding the &lt;code&gt;blueprint_x402&lt;/code&gt; dependency, configuring accepted tokens and job prices in two TOML files, and wiring the &lt;code&gt;X402Gateway&lt;/code&gt; and &lt;code&gt;X402Producer&lt;/code&gt; into &lt;code&gt;BlueprintRunner&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;This post walks through how that works: the payment verification flow, the price conversion pipeline, the producer abstraction that makes it composable, and the configuration that ties it together.&lt;/p&gt;

&lt;h3&gt;
  
  
  Integration Checklist
&lt;/h3&gt;

&lt;p&gt;Before the deep dive, here's the end-to-end wiring at a glance:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Add &lt;code&gt;blueprint_x402&lt;/code&gt; as a dependency (requires Blueprint SDK and Rust 2021 edition)&lt;/li&gt;
&lt;li&gt;Create &lt;code&gt;x402.toml&lt;/code&gt; with facilitator URL, accepted tokens, and per-job access policies&lt;/li&gt;
&lt;li&gt;Create &lt;code&gt;job_pricing.toml&lt;/code&gt; with base prices in wei for each job&lt;/li&gt;
&lt;li&gt;Initialize &lt;code&gt;X402Gateway&lt;/code&gt; and &lt;code&gt;X402Producer&lt;/code&gt; from the config&lt;/li&gt;
&lt;li&gt;Wire the producer and gateway into &lt;code&gt;BlueprintRunner&lt;/code&gt; alongside your job router&lt;/li&gt;
&lt;li&gt;(Optional) Configure a &lt;code&gt;CachedRateProvider&lt;/code&gt; for live exchange rates&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Each step is covered in detail below.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem with Credit-Based APIs
&lt;/h2&gt;

&lt;p&gt;If you've built a paid API before, you know the pattern. Users sign up, get an API key, load credits into an account, and you decrement their balance on each call. It works, but it comes with a billing system, an accounts database, dispute resolution, fraud detection, and a support queue for "why was I charged twice."&lt;/p&gt;

&lt;p&gt;For infrastructure operators &lt;a href="https://dev.to/post/building-ai-services-on-tangle"&gt;building AI services on Tangle&lt;/a&gt;, this overhead is significant. You want to expose a compute job over HTTP and get paid for it. You don't want to run a SaaS billing platform on the side.&lt;/p&gt;

&lt;p&gt;x402 removes the account layer entirely. The client proves payment in the HTTP request itself. The server verifies that proof before executing anything. There's no balance to track, no credits to reconcile, no API key to rotate. Each request is a self-contained economic transaction.&lt;/p&gt;

&lt;h2&gt;
  
  
  x402 Payment Protocol Integration with Blueprint SDK
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Settlement Before Execution
&lt;/h3&gt;

&lt;p&gt;Blueprint SDK makes an opinionated design choice: the operator gets paid before doing any work.&lt;/p&gt;

&lt;p&gt;The gateway uses &lt;code&gt;X402Middleware&lt;/code&gt; (in &lt;code&gt;blueprint_x402::gateway&lt;/code&gt;) configured with &lt;code&gt;.settle_before_execution()&lt;/code&gt;. When a request arrives at &lt;code&gt;/x402/jobs/{service_id}/{job_index}&lt;/code&gt;, the middleware extracts the &lt;code&gt;X-PAYMENT&lt;/code&gt; header, sends it to an external &lt;strong&gt;facilitator&lt;/strong&gt; (by default, &lt;code&gt;https://facilitator.x402.org&lt;/code&gt;), and the facilitator verifies the payment signature and settles it on-chain. Only after settlement confirms does the request proceed to the job handler.&lt;/p&gt;

&lt;p&gt;This is the opposite of the typical API pattern where you serve the response and bill afterward. The tradeoff is latency: settlement adds a round-trip to the facilitator plus on-chain confirmation time. The benefit is that operators never perform free work and never chase unpaid invoices.&lt;/p&gt;

&lt;p&gt;The facilitator is a trust boundary worth understanding. The gateway itself does not verify on-chain transactions. It delegates that to the facilitator and trusts the response. For production deployments, you'll want to evaluate whether the public facilitator at &lt;code&gt;x402.org&lt;/code&gt; meets your trust requirements, or whether you need to run your own.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Translation Layer: From Payment to JobCall
&lt;/h3&gt;

&lt;p&gt;The central design decision in &lt;code&gt;blueprint_x402&lt;/code&gt; is how it connects payments to the existing job system. Rather than building a separate execution path for paid requests, it converts every verified payment into a &lt;code&gt;JobCall&lt;/code&gt; (defined in &lt;code&gt;blueprint_sdk::core::job::call&lt;/code&gt;), the same type that every other producer in the system emits.&lt;/p&gt;

&lt;p&gt;A &lt;code&gt;JobCall&lt;/code&gt; is a pair: &lt;code&gt;Parts&lt;/code&gt; (containing a &lt;code&gt;JobId&lt;/code&gt;, a &lt;code&gt;MetadataMap&lt;/code&gt; of headers, and an &lt;code&gt;Extensions&lt;/code&gt; map) plus a body as raw &lt;code&gt;Bytes&lt;/code&gt;. When the x402 middleware verifies a payment, it constructs a &lt;code&gt;VerifiedPayment&lt;/code&gt; and converts it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="n"&gt;VerifiedPayment&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="n"&gt;service_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;u64&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="n"&gt;job_index&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;u32&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="n"&gt;job_args&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Bytes&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="n"&gt;quote_digest&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;u8&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="mi"&gt;32&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="n"&gt;payment_network&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="c1"&gt;// "eip155:8453"&lt;/span&gt;
    &lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="n"&gt;payment_token&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;    &lt;span class="c1"&gt;// "USDC"&lt;/span&gt;
    &lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="n"&gt;call_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;u64&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="n"&gt;caller&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;Option&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;Address&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The conversion injects x402-specific metadata into the &lt;code&gt;MetadataMap&lt;/code&gt; (payment network, token, quote digest, synthetic call ID) and passes the original request body through as job arguments. Downstream, the job handler receives this as a normal &lt;code&gt;JobCall&lt;/code&gt;. If it cares about payment metadata, it can inspect the headers. If it doesn't, it just reads the body.&lt;/p&gt;

&lt;p&gt;This means a job handler like this works identically whether triggered by x402 or any other source:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;echo&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;body&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Bytes&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;Bytes&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;body&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;hash&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;body&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Bytes&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;Bytes&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nn"&gt;Bytes&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;copy_from_slice&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nn"&gt;alloy_primitives&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;keccak256&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;body&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="nf"&gt;.as_slice&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;router&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="k"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;Router&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nn"&gt;Router&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="nf"&gt;.route&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;echo&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="nf"&gt;.route&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;hash&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;Job&lt;/code&gt; trait uses an extractor pattern borrowed from axum. Any async function with the right signature auto-implements &lt;code&gt;Job&lt;/code&gt;. You can also use &lt;code&gt;FromJobCall&lt;/code&gt; and &lt;code&gt;FromJobCallParts&lt;/code&gt; extractors to destructure arguments ergonomically, and apply Tower middleware like &lt;code&gt;ConcurrencyLimitLayer&lt;/code&gt; via &lt;code&gt;Job::layer()&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Producer Model
&lt;/h3&gt;

&lt;p&gt;The mechanism connecting the HTTP gateway to the job router is &lt;code&gt;X402Producer&lt;/code&gt;, which implements &lt;code&gt;Stream&amp;lt;Item = Result&amp;lt;JobCall, BoxError&amp;gt;&amp;gt;&lt;/code&gt;. It's backed by an unbounded &lt;code&gt;mpsc&lt;/code&gt; channel:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="n"&gt;X402Producer&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;rx&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nn"&gt;mpsc&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;UnboundedReceiver&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;VerifiedPayment&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;impl&lt;/span&gt; &lt;span class="n"&gt;X402Producer&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;channel&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="k"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;Self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nn"&gt;mpsc&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;UnboundedSender&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;VerifiedPayment&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;tx&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;rx&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nn"&gt;mpsc&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;unbounded_channel&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
        &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;Self&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;rx&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="n"&gt;tx&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The gateway sends verified payments into the channel. &lt;code&gt;BlueprintRunner&lt;/code&gt; polls the producer alongside its other producers (on-chain events, cron, webhooks) and dispatches each &lt;code&gt;JobCall&lt;/code&gt; through the same router. This is why the handler doesn't know its trigger source: all producers converge into one stream.&lt;/p&gt;

&lt;p&gt;A note on the unbounded channel: under a payment flood, this queue grows without limit. That's a deliberate simplicity-over-backpressure tradeoff. For production deployments handling high throughput, monitor queue depth via the &lt;code&gt;/x402/stats&lt;/code&gt; endpoint, which exposes counters for accepted, rejected, and enqueued calls.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pricing: Wei, Exchange Rates, and Oracle Composition
&lt;/h2&gt;

&lt;p&gt;Job prices are denominated in &lt;strong&gt;wei&lt;/strong&gt;, the smallest unit of the native chain currency. This might seem odd for services priced in stablecoins, but it solves a real problem: operators accept multiple tokens across multiple chains, and wei provides a denomination-neutral base unit. The gateway converts to token amounts at request time.&lt;/p&gt;

&lt;p&gt;The conversion formula (from &lt;code&gt;blueprint_x402::config&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;token_amount = (wei / 10^18) × rate_per_native_unit × (1 + markup_bps / 10000) × 10^decimals
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A concrete example: you price your echo job at &lt;code&gt;1000000000000000&lt;/code&gt; wei (0.001 ETH). With a rate of 3200 USDC per ETH and a 2% markup (200 basis points): &lt;code&gt;0.001 × 3200 × 1.02 = 3.264 USDC&lt;/code&gt;, represented as &lt;code&gt;3264000&lt;/code&gt; in USDC's 6-decimal smallest units.&lt;/p&gt;

&lt;p&gt;Pricing configuration lives in two TOML files. &lt;code&gt;job_pricing.toml&lt;/code&gt; sets base prices per job:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;&lt;span class="nn"&gt;[1]&lt;/span&gt;
&lt;span class="py"&gt;0&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"1000000000000000"&lt;/span&gt;       &lt;span class="c"&gt;# echo: 0.001 ETH&lt;/span&gt;
&lt;span class="py"&gt;1&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"10000000000000000"&lt;/span&gt;      &lt;span class="c"&gt;# hash: 0.01 ETH&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And &lt;code&gt;x402.toml&lt;/code&gt; configures accepted tokens with their exchange rates and markup:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;&lt;span class="nn"&gt;[[accepted_tokens]]&lt;/span&gt;
&lt;span class="py"&gt;network&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"eip155:8453"&lt;/span&gt;
&lt;span class="py"&gt;asset&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"0x833589fCD6eDb6E08f4c7C32D4f71b54bdA02913"&lt;/span&gt;
&lt;span class="py"&gt;symbol&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"USDC"&lt;/span&gt;
&lt;span class="py"&gt;decimals&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;6&lt;/span&gt;
&lt;span class="py"&gt;pay_to&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"0xYourOperatorAddressOnBase"&lt;/span&gt;
&lt;span class="py"&gt;rate_per_native_unit&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"3200.00"&lt;/span&gt;
&lt;span class="py"&gt;markup_bps&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When a client calls the price endpoint, it gets back a response with settlement options:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"service_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"job_index"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"options"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"network"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"eip155:8453"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"asset"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"0x833589fCD6eDb6E08f4c7C32D4f71b54bdA02913"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"symbol"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"USDC"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"amount"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"3264000"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"pay_to"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"0xYourOperatorAddressOnBase"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"quote_digest"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"0xabc..."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"ttl_secs"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;300&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;quote_digest&lt;/code&gt; ties this price to a specific calculation. The client includes it in the payment, and the gateway matches it against the &lt;code&gt;QuoteRegistry&lt;/code&gt; to ensure the payment covers the quoted amount.&lt;/p&gt;

&lt;p&gt;Static rates in a config file are fine for getting started, but production operators need live pricing. The &lt;code&gt;PriceOracle&lt;/code&gt; trait and its implementations handle this. You can compose oracles: load base prices from TOML, apply a &lt;code&gt;ScaledPriceOracle&lt;/code&gt; for surge pricing (say, 1.5x during peak hours), and keep exchange rates fresh via &lt;code&gt;CachedRateProvider&amp;lt;CoinbaseOracle&amp;gt;&lt;/code&gt; with a 60-second TTL. Chainlink and Uniswap V3 TWAP oracles are also available behind feature flags. None of this touches your job handler code.&lt;/p&gt;

&lt;h2&gt;
  
  
  Three Access Modes
&lt;/h2&gt;

&lt;p&gt;Not every job should be publicly callable. The gateway supports three invocation modes per job, configured in &lt;code&gt;x402.toml&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;Disabled&lt;/code&gt;&lt;/strong&gt; (the default): the job isn't exposed via x402 at all.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;PublicPaid&lt;/code&gt;&lt;/strong&gt;: anyone can call, as long as they pay. No identity checks beyond the payment itself.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;RestrictedPaid&lt;/code&gt;&lt;/strong&gt;: payment required, AND the caller must pass an on-chain &lt;code&gt;isPermittedCaller&lt;/code&gt; check. The gateway makes an &lt;code&gt;eth_call&lt;/code&gt; dry-run against the Tangle contract to verify the caller is authorized for this service.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;&lt;span class="nn"&gt;[[job_policies]]&lt;/span&gt;
&lt;span class="py"&gt;service_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
&lt;span class="py"&gt;job_index&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
&lt;span class="py"&gt;invocation_mode&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"public_paid"&lt;/span&gt;

&lt;span class="nn"&gt;[[job_policies]]&lt;/span&gt;
&lt;span class="py"&gt;service_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
&lt;span class="py"&gt;job_index&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
&lt;span class="py"&gt;invocation_mode&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"restricted_paid"&lt;/span&gt;
&lt;span class="py"&gt;caller_auth&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"delegated_caller_signature"&lt;/span&gt;
&lt;span class="py"&gt;tangle_rpc_url&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"https://rpc.tangle.tools"&lt;/span&gt;
&lt;span class="py"&gt;tangle_contract&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"0x..."&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For &lt;code&gt;RestrictedPaid&lt;/code&gt;, the gateway needs to know who the caller is. Two options: &lt;strong&gt;&lt;code&gt;PayerIsCaller&lt;/code&gt;&lt;/strong&gt; infers identity from the payment's payer address, and &lt;strong&gt;&lt;code&gt;DelegatedCallerSignature&lt;/code&gt;&lt;/strong&gt; lets a third party pay on behalf of a caller. In the delegated case, the caller signs a set of headers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight http"&gt;&lt;code&gt;&lt;span class="err"&gt;X-TANGLE-CALLER: 0x&amp;lt;address&amp;gt;
X-TANGLE-CALLER-SIG: &amp;lt;signature&amp;gt;
X-TANGLE-CALLER-NONCE: &amp;lt;unique-nonce&amp;gt;
X-TANGLE-CALLER-EXPIRY: &amp;lt;unix-timestamp&amp;gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The signature payload is &lt;code&gt;x402-authorize:{service_id}:{job_index}:{keccak(body)_hex}:{nonce}:{expiry}&lt;/code&gt;. A &lt;code&gt;DelegatedReplayGuard&lt;/code&gt; tracks used nonces to prevent replay attacks.&lt;/p&gt;
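&lt;p&gt;A quick sketch of assembling that payload string. The keccak digest of the request body is taken as an input here; producing it would need a keccak256 implementation, which isn't shown.&lt;/p&gt;

```rust
// Assembling the delegated-caller signature payload from the
// X-TANGLE-CALLER-* header fields. Computing keccak256 of the request
// body is out of scope for this sketch; the hex digest is passed in.
fn auth_payload(service_id: u64, job_index: u8, body_keccak_hex: &str, nonce: &str, expiry: u64) -> String {
    format!("x402-authorize:{service_id}:{job_index}:{body_keccak_hex}:{nonce}:{expiry}")
}

fn main() {
    let payload = auth_payload(1, 0, "0xabc123", "nonce-42", 1767225600);
    assert_eq!(payload, "x402-authorize:1:0:0xabc123:nonce-42:1767225600");
    println!("{payload}");
}
```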

&lt;p&gt;This matters for agent-to-agent scenarios where a coordinator agent pays for compute on behalf of worker agents that have their own on-chain identities.&lt;/p&gt;

&lt;h2&gt;
  
  
  Double-Spend and Replay Protection
&lt;/h2&gt;

&lt;p&gt;Two mechanisms prevent payment reuse. The &lt;code&gt;QuoteRegistry&lt;/code&gt; (in &lt;code&gt;blueprint_x402::quote_registry&lt;/code&gt;) tracks outstanding price quotes and atomically consumes them when a payment arrives. Each quote has a TTL (configurable, default 300 seconds). Attempting to reuse a consumed quote returns &lt;code&gt;None&lt;/code&gt;, and the request is rejected. The registry uses &lt;code&gt;DashMap&lt;/code&gt; for lock-free concurrent reads.&lt;/p&gt;

&lt;p&gt;One production consideration: the &lt;code&gt;QuoteRegistry&lt;/code&gt; is in-memory. If the gateway restarts, all outstanding quotes are lost. Clients will need to re-request price quotes after a restart. For most deployments this is fine (quotes are short-lived), but it's worth knowing if you're running behind a load balancer with rolling restarts.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;DelegatedReplayGuard&lt;/code&gt; provides separate replay protection for delegated caller signatures, tracking nonces and rejecting expired or reused ones.&lt;/p&gt;
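&lt;p&gt;The consume-once-with-TTL semantics can be illustrated with a minimal registry. This sketch uses a &lt;code&gt;Mutex&lt;/code&gt;-wrapped map to stay self-contained; the real &lt;code&gt;QuoteRegistry&lt;/code&gt; uses &lt;code&gt;DashMap&lt;/code&gt; and its own types.&lt;/p&gt;

```rust
use std::collections::HashMap;
use std::sync::Mutex;
use std::time::{Duration, Instant};

// Minimal consume-once quote store. The real QuoteRegistry uses DashMap
// for lock-free reads; a Mutex<HashMap> keeps this sketch self-contained.
struct Quotes {
    inner: Mutex<HashMap<String, (u128, Instant)>>, // digest -> (amount, expiry)
    ttl: Duration,
}

impl Quotes {
    fn new(ttl: Duration) -> Self {
        Quotes { inner: Mutex::new(HashMap::new()), ttl }
    }

    fn insert(&self, digest: &str, amount: u128) {
        let expiry = Instant::now() + self.ttl;
        self.inner.lock().unwrap().insert(digest.to_owned(), (amount, expiry));
    }

    // Removing the entry while holding the lock makes consumption atomic:
    // a replayed digest, or an expired quote, yields None.
    fn consume(&self, digest: &str) -> Option<u128> {
        let (amount, expiry) = self.inner.lock().unwrap().remove(digest)?;
        (Instant::now() <= expiry).then_some(amount)
    }
}

fn main() {
    let quotes = Quotes::new(Duration::from_secs(300));
    quotes.insert("0xabc", 3_264_000);
    assert_eq!(quotes.consume("0xabc"), Some(3_264_000)); // first use settles
    assert_eq!(quotes.consume("0xabc"), None);            // replay rejected
    println!("ok");
}
```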

&lt;h2&gt;
  
  
  Wiring It All Together
&lt;/h2&gt;

&lt;p&gt;Here's what a complete Blueprint setup looks like with x402 enabled:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;use&lt;/span&gt; &lt;span class="nn"&gt;blueprint_runner&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;BlueprintRunner&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;use&lt;/span&gt; &lt;span class="nn"&gt;blueprint_x402&lt;/span&gt;&lt;span class="p"&gt;::{&lt;/span&gt;&lt;span class="n"&gt;X402Gateway&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;X402Config&lt;/span&gt;&lt;span class="p"&gt;};&lt;/span&gt;
&lt;span class="k"&gt;use&lt;/span&gt; &lt;span class="nn"&gt;x402_blueprint&lt;/span&gt;&lt;span class="p"&gt;::{&lt;/span&gt;&lt;span class="n"&gt;router&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;load_job_pricing&lt;/span&gt;&lt;span class="p"&gt;};&lt;/span&gt;
&lt;span class="k"&gt;use&lt;/span&gt; &lt;span class="nn"&gt;x402_blueprint&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nn"&gt;oracle&lt;/span&gt;&lt;span class="p"&gt;::{&lt;/span&gt;&lt;span class="n"&gt;CoinbaseOracle&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;CachedRateProvider&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;refresh_rates&lt;/span&gt;&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="c1"&gt;// Load gateway config and per-job pricing&lt;/span&gt;
&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="k"&gt;mut&lt;/span&gt; &lt;span class="n"&gt;config&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nn"&gt;X402Config&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;from_toml&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"x402.toml"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;?&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;pricing&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;load_job_pricing&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="nn"&gt;std&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nn"&gt;fs&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;read_to_string&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"job_pricing.toml"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;?&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;?&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;// Initialize a cached exchange rate oracle&lt;/span&gt;
&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;oracle&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nn"&gt;CachedRateProvider&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nn"&gt;CoinbaseOracle&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="nn"&gt;Duration&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;from_secs&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;60&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
&lt;span class="nf"&gt;refresh_rates&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="k"&gt;mut&lt;/span&gt; &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;oracle&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"ETH"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="k"&gt;.await&lt;/span&gt;&lt;span class="o"&gt;?&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;// Create the gateway and its producer&lt;/span&gt;
&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;gateway&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;x402_producer&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nn"&gt;X402Gateway&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;pricing&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;?&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;// Wire into the runner&lt;/span&gt;
&lt;span class="nn"&gt;BlueprintRunner&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;builder&lt;/span&gt;&lt;span class="p"&gt;((),&lt;/span&gt; &lt;span class="nn"&gt;BlueprintEnvironment&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;default&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
    &lt;span class="nf"&gt;.router&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;router&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
    &lt;span class="nf"&gt;.producer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;x402_producer&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;.background_service&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;gateway&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;.run&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="k"&gt;.await&lt;/span&gt;&lt;span class="o"&gt;?&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The gateway runs as a background service. The producer feeds &lt;code&gt;JobCall&lt;/code&gt;s into the runner. The router dispatches to handlers. Each piece is independent: you can swap the oracle, add jobs to the router, or change access policies without touching the others.&lt;/p&gt;

&lt;h2&gt;
  
  
  Observability
&lt;/h2&gt;

&lt;p&gt;The gateway exposes &lt;code&gt;GatewayCounters&lt;/code&gt; at &lt;code&gt;GET /x402/stats&lt;/code&gt;, tracking: accepted requests, policy denials, policy errors, replay rejections, enqueue failures, job-not-found errors, quote conflicts, and auth dry-run results (allowed, denied, error). These are the numbers you'll want in your monitoring stack.&lt;/p&gt;
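&lt;p&gt;Counters like these are typically plain atomics bumped on the hot path. An illustrative sketch with simplified field names (not the SDK's actual struct):&lt;/p&gt;

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Simplified counter set; the real GatewayCounters tracks more fields
// (policy errors, quote conflicts, auth dry-run outcomes, ...).
#[derive(Default)]
struct Counters {
    accepted: AtomicU64,
    policy_denied: AtomicU64,
    replay_rejected: AtomicU64,
}

fn main() {
    let c = Counters::default();
    // Relaxed ordering suffices: each counter is an independent monotonic tally.
    c.accepted.fetch_add(1, Ordering::Relaxed);
    c.accepted.fetch_add(1, Ordering::Relaxed);
    c.replay_rejected.fetch_add(1, Ordering::Relaxed);
    assert_eq!(c.accepted.load(Ordering::Relaxed), 2);
    assert_eq!(c.policy_denied.load(Ordering::Relaxed), 0);
    println!("ok");
}
```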

&lt;p&gt;The full endpoint surface:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Endpoint&lt;/th&gt;
&lt;th&gt;Method&lt;/th&gt;
&lt;th&gt;Purpose&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;/x402/health&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;GET&lt;/td&gt;
&lt;td&gt;Health check&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;/x402/stats&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;GET&lt;/td&gt;
&lt;td&gt;Operator diagnostics&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;/x402/jobs/{sid}/{jid}/price&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;GET&lt;/td&gt;
&lt;td&gt;Price discovery for clients&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;/x402/jobs/{sid}/{jid}&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;POST&lt;/td&gt;
&lt;td&gt;Execute job (x402 payment required)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;/x402/jobs/{sid}/{jid}/auth-dry-run&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;POST&lt;/td&gt;
&lt;td&gt;Test caller authorization without paying&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The &lt;code&gt;auth-dry-run&lt;/code&gt; endpoint is useful during integration: clients can verify their caller signature and on-chain permissions before committing real money.&lt;/p&gt;

&lt;h2&gt;
  
  
  What This Gets You
&lt;/h2&gt;

&lt;p&gt;x402 collapses the billing stack into the transport layer. Instead of maintaining user accounts, API keys, credit balances, and a reconciliation pipeline, you get a per-request payment proof verified before execution. Blueprint SDK's contribution is making that payment proof look identical to every other job trigger, so your handler code stays clean and your infrastructure stays composable.&lt;/p&gt;

&lt;p&gt;You write a function that takes bytes and returns bytes. You set a price in a TOML file. The gateway handles payment verification, exchange rate conversion, access control, replay protection, and dispatch. When a payment clears, your function runs. When it doesn't, nothing happens and you burned zero compute.&lt;/p&gt;

&lt;p&gt;For operators running AI agent services, the billing system is the blockchain, the API key is a wallet, and the invoice is a transaction receipt. For how Tangle &lt;a href="https://dev.to/post/how-tangle-verifies-work"&gt;verifies work&lt;/a&gt; beyond payment settlement, the linked post covers the full verification stack.&lt;/p&gt;

&lt;h2&gt;
  
  
  FAQ
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;How does the client know what to pay?&lt;/strong&gt;&lt;br&gt;
The client calls &lt;code&gt;GET /x402/jobs/{service_id}/{job_index}/price&lt;/code&gt;, which returns available settlement options: accepted tokens, networks, amounts, and a quote with a TTL. The client picks an option, signs the payment, and includes it as an &lt;code&gt;X-PAYMENT&lt;/code&gt; header on the next request. Client libraries like &lt;code&gt;@x402/fetch&lt;/code&gt; handle this negotiation automatically.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What happens if the gateway restarts mid-operation?&lt;/strong&gt;&lt;br&gt;
Outstanding price quotes are lost (the &lt;code&gt;QuoteRegistry&lt;/code&gt; is in-memory), so clients will get a stale-quote rejection and need to re-request pricing. Payments that have already settled on-chain are unaffected. Jobs already dispatched to the runner continue executing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Can I accept payments on multiple chains simultaneously?&lt;/strong&gt;&lt;br&gt;
Yes. The &lt;code&gt;[[accepted_tokens]]&lt;/code&gt; array in &lt;code&gt;x402.toml&lt;/code&gt; supports multiple entries across different networks (Base, Ethereum, Arbitrum, etc.) using CAIP-2 identifiers like &lt;code&gt;eip155:8453&lt;/code&gt;. Each entry specifies its own &lt;code&gt;pay_to&lt;/code&gt; address, exchange rate, and markup. The client chooses which option to use from the price endpoint response.&lt;/p&gt;
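&lt;p&gt;For reference, a CAIP-2 identifier is just a &lt;code&gt;namespace:reference&lt;/code&gt; pair, which splits trivially:&lt;/p&gt;

```rust
// CAIP-2 network identifiers ("eip155:8453") are a namespace:reference pair.
fn parse_caip2(id: &str) -> Option<(&str, &str)> {
    let (namespace, reference) = id.split_once(':')?;
    if namespace.is_empty() || reference.is_empty() {
        return None;
    }
    Some((namespace, reference))
}

fn main() {
    assert_eq!(parse_caip2("eip155:8453"), Some(("eip155", "8453"))); // Base
    assert_eq!(parse_caip2("eip155:1"), Some(("eip155", "1")));       // Ethereum mainnet
    assert_eq!(parse_caip2("nocolon"), None);
    println!("ok");
}
```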

&lt;p&gt;&lt;strong&gt;What's the latency overhead of x402 compared to a normal API call?&lt;/strong&gt;&lt;br&gt;
The main addition is the facilitator round-trip for payment verification and settlement. On Base (which has fast block times), this typically adds 1-3 seconds depending on the facilitator's confirmation requirements. The gateway processing itself (quote lookup, metadata injection, channel dispatch) adds negligible latency.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Can I use this without Tangle's on-chain job system?&lt;/strong&gt;&lt;br&gt;
The x402 gateway and producer are designed to plug into &lt;code&gt;BlueprintRunner&lt;/code&gt;, which is Tangle's job execution framework. &lt;code&gt;RestrictedPaid&lt;/code&gt; mode specifically depends on Tangle's on-chain permission contracts. &lt;code&gt;PublicPaid&lt;/code&gt; mode has a lighter dependency since it only needs the facilitator for payment verification, but the job dispatch still runs through the Blueprint runner infrastructure.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Build with Tangle&lt;/strong&gt; | &lt;a href="https://tangle.tools" rel="noopener noreferrer"&gt;Website&lt;/a&gt; | &lt;a href="https://github.com/tangle-network" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; | &lt;a href="https://discord.gg/tangle" rel="noopener noreferrer"&gt;Discord&lt;/a&gt; | &lt;a href="https://t.me/tanglenet" rel="noopener noreferrer"&gt;Telegram&lt;/a&gt; | &lt;a href="https://x.com/taborgroup" rel="noopener noreferrer"&gt;X/Twitter&lt;/a&gt;&lt;/p&gt;

</description>
      <category>x402</category>
      <category>blockchain</category>
      <category>payments</category>
      <category>ai</category>
    </item>
    <item>
      <title>Trusted Execution on Tangle: How TEE Works in the Blueprint SDK</title>
      <dc:creator>Tangle Network</dc:creator>
      <pubDate>Tue, 03 Mar 2026 06:52:21 +0000</pubDate>
      <link>https://dev.to/tangle_network/trusted-execution-on-tangle-how-tee-works-in-the-blueprint-sdk-4ad7</link>
      <guid>https://dev.to/tangle_network/trusted-execution-on-tangle-how-tee-works-in-the-blueprint-sdk-4ad7</guid>
      <description>&lt;p&gt;Previous posts covered &lt;a href="https://dev.to/why-ai-infrastructure-needs-decentralization"&gt;why decentralized AI infrastructure matters&lt;/a&gt;, &lt;a href="https://dev.to/how-blueprints-work"&gt;how blueprints work&lt;/a&gt;, &lt;a href="https://dev.to/how-tangle-verifies-work"&gt;verification mechanisms&lt;/a&gt;, &lt;a href="https://dev.to/building-on-tangle-from-idea-to-production"&gt;building from idea to production&lt;/a&gt;, and &lt;a href="https://dev.to/building-ai-services-on-tangle"&gt;AI services with inference and sandboxes&lt;/a&gt;. This post covers what just shipped: first-class TEE support in the Blueprint SDK.&lt;/p&gt;

&lt;h2&gt;
  
  
  The problem TEE solves for blueprints
&lt;/h2&gt;

&lt;p&gt;The Day 3 post on verification covered how Tangle proves work was done correctly. But verification happens &lt;strong&gt;after&lt;/strong&gt; execution. TEE flips this: it proves the execution environment itself is trustworthy &lt;em&gt;before and during&lt;/em&gt; computation.&lt;/p&gt;

&lt;p&gt;For AI inference, this matters. A model running inside an AWS Nitro Enclave or an Azure Confidential VM can prove it's running the exact binary you expect, on hardware that isolates it from the operator. The operator can't read the model weights. They can't tamper with the inference. The hardware enforces this, not a smart contract.&lt;/p&gt;

&lt;p&gt;Tangle's TEE integration lets blueprint developers declare TEE requirements at the SDK level, and the runtime handles provisioning across AWS Nitro, GCP Confidential Space, Azure CVM, or direct hardware (Intel TDX, AMD SEV-SNP).&lt;/p&gt;

&lt;h2&gt;
  
  
  Three deployment modes
&lt;/h2&gt;

&lt;p&gt;The SDK supports three modes, each for a different operational model:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Direct mode.&lt;/strong&gt; The blueprint runner itself executes inside a TEE. Device passthrough gives it access to &lt;code&gt;/dev/tdx_guest&lt;/code&gt; or &lt;code&gt;/dev/sev-guest&lt;/code&gt;. The runner produces attestation by hashing its own binary. This is the highest-integrity path with the fewest network trust links.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;let tee = TeeConfig::builder()
    .requirement(TeeRequirement::Required)
    .mode(TeeMode::Direct)
    .allow_providers([TeeProvider::IntelTdx])
    .build()?;

BlueprintRunner::builder(config, env)
    .tee(tee)
    .router(router)
    .run()
    .await?;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Remote mode.&lt;/strong&gt; The runner provisions workloads into cloud TEE instances. It calls the AWS EC2 API to launch Nitro Enclave instances, the GCP Compute API for Confidential Space VMs, or the Azure ARM API for DCasv5 CVMs. The runner manages the lifecycle; the workload runs in hardware isolation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Hybrid mode.&lt;/strong&gt; Some jobs route to TEE, some don't. A pricing engine might run in a standard container while the model inference runs in a Nitro Enclave. Routing is controlled by a policy file that maps job types to execution environments.&lt;/p&gt;

&lt;h2&gt;
  
  
  How attestation flows through the system
&lt;/h2&gt;

&lt;p&gt;When a TEE deployment starts, attestation follows this path:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The backend provisions the workload (EC2 instance with enclave, Confidential Space VM, etc.)&lt;/li&gt;
&lt;li&gt;The sidecar inside the TEE reads hardware attestation (NSM document on Nitro, OIDC token from &lt;code&gt;teeserver.sock&lt;/code&gt; on GCP, vTPM report on Azure)&lt;/li&gt;
&lt;li&gt;The attestation report includes a measurement (hash of the running binary) and a timestamp&lt;/li&gt;
&lt;li&gt;This report is cached in the deployment handle for idempotent re-submission&lt;/li&gt;
&lt;li&gt;The on-chain contract stores &lt;code&gt;keccak256(attestationJsonBytes)&lt;/code&gt; so anyone can verify&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The &lt;code&gt;AttestationVerifier&lt;/code&gt; trait lets you plug in verification logic per provider. Each provider has different evidence formats, but they all answer the same question: is this binary running on genuine TEE hardware?&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;pub trait AttestationVerifier: Send + Sync {
    fn verify(
        &amp;amp;self,
        report: &amp;amp;AttestationReport,
        config: &amp;amp;TeeConfig,
    ) -&amp;gt; Result&amp;lt;VerifiedAttestation, TeeError&amp;gt;;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Built-in verifiers check provider type, debug mode flags, measurement hashes, and attestation freshness. The GCP verifier, for example, rejects debug-mode attestations unless explicitly configured to allow them (a security fix that shipped with this PR).&lt;/p&gt;
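&lt;p&gt;As an illustration of those checks, here is a hedged sketch with simplified, assumed field names (not the SDK's actual &lt;code&gt;AttestationReport&lt;/code&gt; type):&lt;/p&gt;

```rust
use std::time::{Duration, SystemTime, UNIX_EPOCH};

// Simplified report shape; the SDK's real AttestationReport differs.
struct Report {
    measurement: [u8; 32], // hash of the running binary
    timestamp_secs: u64,
    debug_mode: bool,
}

// Checks in the spirit of the built-in verifiers: debug-mode policy,
// measurement match, and attestation freshness.
fn check(report: &Report, expected: &[u8; 32], max_age: Duration, allow_debug: bool) -> Result<(), &'static str> {
    if report.debug_mode && !allow_debug {
        return Err("debug-mode attestation rejected");
    }
    if &report.measurement != expected {
        return Err("measurement mismatch");
    }
    let now = SystemTime::now().duration_since(UNIX_EPOCH).unwrap().as_secs();
    if now.saturating_sub(report.timestamp_secs) > max_age.as_secs() {
        return Err("attestation too old");
    }
    Ok(())
}

fn main() {
    let now = SystemTime::now().duration_since(UNIX_EPOCH).unwrap().as_secs();
    let report = Report { measurement: [0u8; 32], timestamp_secs: now, debug_mode: true };
    // Debug-mode evidence is rejected unless explicitly allowed.
    assert!(check(&report, &[0u8; 32], Duration::from_secs(3600), false).is_err());
    assert!(check(&report, &[0u8; 32], Duration::from_secs(3600), true).is_ok());
    println!("ok");
}
```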
&lt;h2&gt;
  
  
  Sealed secrets: why container recreation is forbidden
&lt;/h2&gt;

&lt;p&gt;Standard Docker deployments inject secrets via environment variables. When a config changes, you recreate the container with new env vars.&lt;/p&gt;

&lt;p&gt;TEE deployments can't do this. Recreating the container invalidates the attestation, breaks sealed secrets, and loses the on-chain deployment ID. The SDK enforces this at the type level: any TEE-enabled config automatically sets &lt;code&gt;SecretInjectionPolicy::SealedOnly&lt;/code&gt;.&lt;/p&gt;


&lt;p&gt;Instead, secrets flow through a key exchange protocol:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The TEE generates an X25519 key pair&lt;/li&gt;
&lt;li&gt;The public key is embedded in the attestation report&lt;/li&gt;
&lt;li&gt;Clients encrypt secrets to this key using ChaCha20-Poly1305&lt;/li&gt;
&lt;li&gt;Only the TEE holding the private key can decrypt&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The &lt;code&gt;TeeAuthService&lt;/code&gt; manages ephemeral key exchange sessions with configurable TTL and automatic cleanup. It runs as a background service alongside the blueprint runner.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cloud backends: what each provider looks like
&lt;/h2&gt;

&lt;p&gt;The SDK includes real implementations for three cloud providers, not stubs or mocks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS Nitro&lt;/strong&gt; launches EC2 instances with &lt;code&gt;EnclaveOptions: true&lt;/code&gt;, generates user-data scripts that configure &lt;code&gt;nitro-cli&lt;/code&gt;, sets up vsock proxies for communication between the parent instance and the enclave, and polls &lt;code&gt;DescribeInstances&lt;/code&gt; until the enclave is healthy.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GCP Confidential Space&lt;/strong&gt; creates Compute Engine VMs with &lt;code&gt;confidentialInstanceConfig&lt;/code&gt; and &lt;code&gt;tee-image-reference&lt;/code&gt; metadata. The Confidential Space launcher auto-pulls the container image, starts it inside the TEE, and exposes OIDC attestation tokens via a Unix socket. Supports both AMD SEV-SNP (N2D machines) and Intel TDX (C3 machines).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Azure CVM&lt;/strong&gt; provisions Confidential VMs (DCasv5/ECasv5 series) through the ARM REST API, retrieves attestation from Microsoft Azure Attestation (MAA), and supports Secure Key Release from Key Vault. The HCL generates ephemeral RSA key pairs sealed to the vTPM.&lt;/p&gt;

&lt;p&gt;All three are feature-gated so the default build stays lightweight:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;[features]
aws-nitro = ["dep:aws-sdk-ec2", "dep:aws-config"]
gcp-confidential = ["dep:gcp-auth", "dep:reqwest"]
azure-snp = ["dep:reqwest"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Blueprint requirements: telling the manager you need TEE
&lt;/h2&gt;

&lt;p&gt;A blueprint declares its TEE needs through &lt;code&gt;TeeRequirements&lt;/code&gt;, which the blueprint manager inspects at deploy time to route the workload to an appropriate host:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;let requirements = TeeRequirements {
    requirement: TeeRequirement::Required,
    providers: TeeProviderSelector::AllowList(vec![
        TeeProvider::AwsNitro,
        TeeProvider::GcpConfidential,
    ]),
    min_attestation_age_secs: Some(3600),
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This is how the manager knows to deploy on TEE-capable infrastructure rather than a standard Docker host. The &lt;code&gt;requirement&lt;/code&gt; field controls whether TEE is mandatory (fail if unavailable) or preferred (degrade gracefully). The &lt;code&gt;providers&lt;/code&gt; field narrows which cloud backends are acceptable.&lt;/p&gt;
&lt;h2&gt;
  
  
  What's real and what's next
&lt;/h2&gt;

&lt;p&gt;Everything described above is shipped and tested (162 tests across attestation, config, exchange, middleware, and runtime). The cloud backends make real API calls to provision real VMs.&lt;/p&gt;

&lt;p&gt;What's coming next:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Periodic attestation refresh.&lt;/strong&gt; Re-attest on a schedule and update the on-chain hash, catching enclave reboots and measurement drift.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Contract-driven hybrid routing.&lt;/strong&gt; Read the &lt;code&gt;teeRequired&lt;/code&gt; flag from the on-chain contract instead of a local policy file.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Hardware-specific attestation.&lt;/strong&gt; TDX TDREPORT via &lt;code&gt;/dev/tdx_guest&lt;/code&gt; ioctl, SEV-SNP report via &lt;code&gt;/dev/sev-guest&lt;/code&gt;. Currently the direct backend uses software measurement (binary hash); hardware attestation integration requires platform SDK work.&lt;/li&gt;
&lt;/ul&gt;
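&lt;p&gt;Since periodic refresh is roadmap work, the only thing worth sketching today is the decision it would make. &lt;code&gt;should_reattest&lt;/code&gt; and its parameters are hypothetical, loosely mirroring the &lt;code&gt;min_attestation_age_secs&lt;/code&gt; knob shown earlier:&lt;/p&gt;

```rust
// Hypothetical re-attestation policy check (a roadmap item, not shipped SDK
// API). `max_age_secs` loosely mirrors the `min_attestation_age_secs` field.
fn should_reattest(age_secs: u64, max_age_secs: u64, measurement_changed: bool) -> bool {
    // Refresh when the last quote is stale, or immediately on measurement
    // drift (e.g. an enclave reboot loaded a different binary).
    measurement_changed || age_secs >= max_age_secs
}
```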

&lt;p&gt;If you're building a blueprint that handles sensitive data, model weights, or private inference, TEE is how you prove to users that their data stays confidential. The SDK handles multi-cloud provisioning, attestation verification, and sealed secret management so your blueprint code stays focused on the service logic.&lt;/p&gt;




&lt;p&gt;&lt;a href="https://github.com/tangle-network/blueprint" rel="noopener noreferrer"&gt;Blueprint SDK on GitHub&lt;/a&gt; · &lt;a href="https://github.com/tangle-network/blueprint/tree/main/crates/tee" rel="noopener noreferrer"&gt;TEE crate source&lt;/a&gt; · &lt;a href="https://dev.to/building-ai-services-on-tangle"&gt;Previous: Building AI Services on Tangle&lt;/a&gt;&lt;/p&gt;

</description>
      <category>tee</category>
      <category>security</category>
      <category>sdk</category>
    </item>
    <item>
      <title>Building AI Services on Tangle: Inference and Sandboxes</title>
      <dc:creator>Tangle Network</dc:creator>
      <pubDate>Sun, 01 Mar 2026 03:36:54 +0000</pubDate>
      <link>https://dev.to/tangle_network/building-ai-services-on-tangle-inference-and-sandboxes-25hf</link>
      <guid>https://dev.to/tangle_network/building-ai-services-on-tangle-inference-and-sandboxes-25hf</guid>
      <description>&lt;h1&gt;
  
  
  Building AI Services on Tangle: Inference and Sandboxes
&lt;/h1&gt;

&lt;p&gt;&lt;em&gt;Day 5 of the Tangle Re-Introduction Series&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;An AI agent just analyzed your private financial data, generated a Python script, executed it, and returned investment recommendations. How do you know the model was real? That your data wasn't logged? That the code ran correctly? You don't. That's the problem Tangle solves.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Full Picture: Inference + Sandbox, Chained
&lt;/h2&gt;

&lt;p&gt;Tangle chains verified inference and sandboxed execution into one accountable pipeline for AI agents.&lt;/p&gt;

&lt;p&gt;Most posts about verifiable AI focus on one piece. We're starting with the combined pattern because that's what agents actually do in practice: reason, then act.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Agent receives task
  → Inference service generates code (verified model, TEE-attested)
  → Sandbox service executes code (isolated, resource-tracked)
  → Agent returns verified result with full accountability chain
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here's what this looks like as a Tangle blueprint:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;use&lt;/span&gt; &lt;span class="nn"&gt;blueprint_sdk&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;Router&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;use&lt;/span&gt; &lt;span class="nn"&gt;blueprint_sdk&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nn"&gt;tangle&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;TangleLayer&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;use&lt;/span&gt; &lt;span class="nn"&gt;blueprint_sdk&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nn"&gt;tangle&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nn"&gt;extract&lt;/span&gt;&lt;span class="p"&gt;::{&lt;/span&gt;&lt;span class="n"&gt;TangleArg&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;TangleResult&lt;/span&gt;&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="cd"&gt;/// Agent that analyzes data using generated code&lt;/span&gt;
&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;analyze&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="nf"&gt;TangleArg&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="n"&gt;TangleArg&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;AnalysisRequest&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;TangleResult&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;AnalysisResult&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// Step 1: Generate analysis code via inference&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;code_response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;inference&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;TangleArg&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;
        &lt;span class="s"&gt;"gpt-4"&lt;/span&gt;&lt;span class="nf"&gt;.to_string&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
        &lt;span class="nd"&gt;format!&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Write Python to analyze this data: {:?}"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="py"&gt;.schema&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="nn"&gt;InferenceConfig&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;default&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
    &lt;span class="p"&gt;)))&lt;/span&gt;&lt;span class="k"&gt;.await&lt;/span&gt;&lt;span class="o"&gt;?&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="c1"&gt;// Step 2: Execute the generated code in sandbox&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;exec_result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;execute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;TangleArg&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;
        &lt;span class="n"&gt;code_response&lt;/span&gt;&lt;span class="py"&gt;.text&lt;/span&gt;&lt;span class="nf"&gt;.clone&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
        &lt;span class="s"&gt;"python3"&lt;/span&gt;&lt;span class="nf"&gt;.to_string&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
        &lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="py"&gt;.data&lt;/span&gt;&lt;span class="nf"&gt;.clone&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
        &lt;span class="nn"&gt;SandboxConfig&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;default&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
    &lt;span class="p"&gt;)))&lt;/span&gt;&lt;span class="k"&gt;.await&lt;/span&gt;&lt;span class="o"&gt;?&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="nf"&gt;TangleResult&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;AnalysisResult&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;output&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;exec_result&lt;/span&gt;&lt;span class="py"&gt;.stdout&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;code_used&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;code_response&lt;/span&gt;&lt;span class="py"&gt;.text&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;model_hash&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;code_response&lt;/span&gt;&lt;span class="py"&gt;.model_hash&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The customer gets the analysis result, the code that produced it, and the model hash proving which model generated the code: a full accountability chain across two services in one request. Tangle is purpose-built infrastructure where AI inference and code execution both carry cryptoeconomic guarantees.&lt;/p&gt;
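&lt;p&gt;Client-side, the cheapest check in that chain is comparing the returned &lt;code&gt;model_hash&lt;/code&gt; against the hash you expect for the model. A minimal sketch, using the field names from the blueprint above and a hypothetical &lt;code&gt;verify_model_identity&lt;/code&gt; helper:&lt;/p&gt;

```rust
// Sketch only: the struct mirrors the AnalysisResult fields used above;
// `verify_model_identity` is a hypothetical client-side helper, not SDK API.
#[allow(dead_code)]
struct AnalysisResult {
    output: String,
    code_used: String,
    model_hash: [u8; 32],
}

fn verify_model_identity(result: &AnalysisResult, expected_hash: &[u8; 32]) -> bool {
    // In production you would source `expected_hash` from a trusted place
    // (e.g. the on-chain model registry once it ships).
    result.model_hash == *expected_hash
}
```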

&lt;p&gt;Now let's look at how each piece works.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Trust Problem
&lt;/h2&gt;

&lt;p&gt;Current AI inference APIs require blind trust in the provider.&lt;/p&gt;

&lt;p&gt;When you call an inference API, you're trusting:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;They're running the model they claim (not a cheaper substitute)&lt;/li&gt;
&lt;li&gt;They're not logging or selling your prompts&lt;/li&gt;
&lt;li&gt;They're not modifying outputs (filtering, biasing, watermarking)&lt;/li&gt;
&lt;li&gt;They're actually running inference (not returning cached or fabricated responses)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Most inference APIs ask you to trust their reputation. Tangle makes these properties verifiable. And the same class of problems applies to code execution: operators could observe your data, modify results, or lie about resource usage.&lt;/p&gt;




&lt;h2&gt;
  
  
  How Payment Works: x402
&lt;/h2&gt;

&lt;p&gt;Agents pay per-request using x402 HTTP payment headers, no accounts or API keys required.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;x402&lt;/strong&gt; is an HTTP-native micropayment protocol, built on the HTTP 402 status code, that lets AI agents pay for compute with a signed payment header and settle on-chain in seconds. This is where Tangle diverges from other "verifiable compute" projects: agents don't sign up for accounts or manage API keys; they attach a signed payment to each call.&lt;/p&gt;

&lt;p&gt;The flow:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Agent sends request with x402 payment header (signed token amount)
  → Operator verifies payment is sufficient for the job
  → Executes job inside TEE
  → Returns result + attestation
  → Payment settles automatically on-chain
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In practice, the agent's HTTP request includes a payment header alongside the normal job payload. The following is pseudocode illustrating the x402 flow (no official Python SDK exists yet):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Pseudocode -- conceptual x402 flow
&lt;/span&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;requests&lt;/span&gt;

&lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;requests&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://operator.example/inference&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;headers&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;X-402-Payment&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;sign_payment&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;amount&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;0.003&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;asset&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;TNT&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Content-Type&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;application/json&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;model&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gpt-4&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;prompt&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Analyze this portfolio...&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;config&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;max_tokens&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;2048&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Response includes result + attestation
&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="k"&gt;assert&lt;/span&gt; &lt;span class="nf"&gt;verify_attestation&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;attestation&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This means agents can autonomously discover operators, compare prices, pay for compute, and verify results. No human in the loop. The economic layer and the verification layer are the same system: if an operator cheats, they lose their stake. If they deliver, they get paid. x402 makes this settlement automatic.&lt;/p&gt;
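&lt;p&gt;As a rough illustration of the operator-side half of that flow, here is a sketch with a made-up header layout. The real x402 wire format is defined by the protocol spec; &lt;code&gt;PaymentHeader&lt;/code&gt;, &lt;code&gt;encode&lt;/code&gt;, and &lt;code&gt;covers&lt;/code&gt; are hypothetical:&lt;/p&gt;

```rust
// Hypothetical x402-style header for illustration only; the actual wire
// format and signature scheme come from the x402 spec, not this sketch.
struct PaymentHeader {
    amount_micro: u64, // e.g. 0.003 TNT expressed in millionths
    asset: String,
    signature: String,
}

fn encode(h: &PaymentHeader) -> String {
    // Made-up colon-delimited layout, purely for the example.
    format!("{}:{}:{}", h.amount_micro, h.asset, h.signature)
}

fn covers(h: &PaymentHeader, quoted_price_micro: u64) -> bool {
    // Operator-side check before running the job: is the payment sufficient?
    h.amount_micro >= quoted_price_micro
}
```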




&lt;h2&gt;
  
  
  Inference Service
&lt;/h2&gt;

&lt;p&gt;Verifiable AI inference runs a model inside a TEE with cryptographic proof of execution.&lt;/p&gt;

&lt;p&gt;An inference service runs AI models on behalf of customers. The customer sends a prompt, the operator runs it through the model, and returns the response with proof.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Blueprint
&lt;/h3&gt;

&lt;p&gt;The inference blueprint defines request types, runs the model, and returns an attested response.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The inference logic below is application code. The SDK provides the service framework (Router, TangleArg, TangleResult, TangleLayer) while you bring the AI logic. Model loading and attestation are your responsibility, not SDK types.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;use&lt;/span&gt; &lt;span class="nn"&gt;blueprint_sdk&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;Router&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;use&lt;/span&gt; &lt;span class="nn"&gt;blueprint_sdk&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nn"&gt;tangle&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;TangleLayer&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;use&lt;/span&gt; &lt;span class="nn"&gt;blueprint_sdk&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nn"&gt;tangle&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nn"&gt;extract&lt;/span&gt;&lt;span class="p"&gt;::{&lt;/span&gt;&lt;span class="n"&gt;TangleArg&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;TangleResult&lt;/span&gt;&lt;span class="p"&gt;};&lt;/span&gt;
&lt;span class="k"&gt;use&lt;/span&gt; &lt;span class="nn"&gt;serde&lt;/span&gt;&lt;span class="p"&gt;::{&lt;/span&gt;&lt;span class="n"&gt;Deserialize&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Serialize&lt;/span&gt;&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="c1"&gt;// --- Application types (your code, not SDK types) ---&lt;/span&gt;

&lt;span class="nd"&gt;#[derive(Serialize,&lt;/span&gt; &lt;span class="nd"&gt;Deserialize)]&lt;/span&gt;
&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="n"&gt;InferenceConfig&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="n"&gt;max_tokens&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;u32&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="n"&gt;temperature&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;f32&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="n"&gt;top_p&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;f32&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nd"&gt;#[derive(Serialize,&lt;/span&gt; &lt;span class="nd"&gt;Deserialize)]&lt;/span&gt;
&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="n"&gt;InferenceResponse&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="n"&gt;text&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="n"&gt;tokens_used&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;u32&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="n"&gt;model_hash&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;u8&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="mi"&gt;32&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="cd"&gt;/// Your model loading logic -- bring your own inference runtime&lt;/span&gt;
&lt;span class="cd"&gt;/// (e.g., candle, llama.cpp bindings, or an external API client)&lt;/span&gt;
&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;load_model&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;model_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nn"&gt;anyhow&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nb"&gt;Result&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;MyModel&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// Application code: load weights, verify hash, etc.&lt;/span&gt;
    &lt;span class="nd"&gt;todo!&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Implement model loading for your runtime"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="cd"&gt;/// Job 0: Run inference on a prompt&lt;/span&gt;
&lt;span class="cd"&gt;///&lt;/span&gt;
&lt;span class="cd"&gt;/// The SDK handles job routing and result submission.&lt;/span&gt;
&lt;span class="cd"&gt;/// You handle model loading and inference execution.&lt;/span&gt;
&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;inference&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="nf"&gt;TangleArg&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="n"&gt;model_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;)):&lt;/span&gt; &lt;span class="n"&gt;TangleArg&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;InferenceConfig&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;TangleResult&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;InferenceResponse&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;model&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;load_model&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;model_id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="k"&gt;.await&lt;/span&gt;
        &lt;span class="nf"&gt;.expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"model loading failed"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;output&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="nf"&gt;.generate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="py"&gt;.max_tokens&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="py"&gt;.temperature&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;hash&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="nf"&gt;.weight_hash&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

    &lt;span class="nf"&gt;TangleResult&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;InferenceResponse&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;text&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;output&lt;/span&gt;&lt;span class="py"&gt;.text&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;tokens_used&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;output&lt;/span&gt;&lt;span class="py"&gt;.token_count&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;model_hash&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;hash&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;router&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="k"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;Router&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nn"&gt;Router&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="nf"&gt;.route&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;inference&lt;/span&gt;&lt;span class="nf"&gt;.layer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;TangleLayer&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Verification
&lt;/h3&gt;

&lt;p&gt;Tangle combines TEE attestation, multi-operator consensus, and on-chain model hashes.&lt;/p&gt;

&lt;p&gt;When TEE hardware is available, every response can include an attestation signed by the enclave. This proves three things: the code ran inside a genuine enclave, a specific model binary was loaded, and the hardware itself is authentic. Customers verify attestations client-side. An on-chain &lt;strong&gt;model registry&lt;/strong&gt; that maps model identifiers to their verified weight hashes is currently in development. Combined with multi-operator verification and x402 settlement, a single request both pays for the compute and verifies the result.&lt;/p&gt;
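&lt;p&gt;The client-side check boils down to two comparisons: does the quote report the enclave measurement you expect, and does it bind to the response you actually received? A stdlib-only sketch with hypothetical types (real SEV-SNP, TDX, and Nitro quotes carry hardware signatures verified against vendor root certificates; &lt;code&gt;DefaultHasher&lt;/code&gt; stands in for a cryptographic digest purely to keep the example self-contained):&lt;/p&gt;

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hypothetical attestation shape for illustration. Real quotes are signed by
// the TEE hardware; DefaultHasher is NOT cryptographic and is used here only
// to keep the sketch stdlib-only and runnable.
struct Attestation {
    measurement: u64,   // hash of the enclave/model binary
    result_digest: u64, // digest of the response body, binding result to quote
}

fn digest(bytes: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    bytes.hash(&mut h);
    h.finish()
}

fn verify(att: &Attestation, expected_measurement: u64, response_body: &[u8]) -> bool {
    // Reject if the wrong binary ran, or if the quote doesn't cover this body.
    att.measurement == expected_measurement && att.result_digest == digest(response_body)
}
```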

&lt;p&gt;&lt;strong&gt;Multi-operator consensus.&lt;/strong&gt; For additional security, configure inference to require multiple operators. If three operators independently run the same prompt and two must agree, an operator running a cheaper substitute model gets caught. This works today, even without TEE hardware.&lt;/p&gt;
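&lt;p&gt;The 2-of-3 agreement described above reduces to a quorum count over output digests. A stdlib-only sketch (the &lt;code&gt;consensus&lt;/code&gt; helper is hypothetical; with a quorum that is a strict majority of submissions, at most one digest can qualify):&lt;/p&gt;

```rust
use std::collections::HashMap;

// Hypothetical quorum check: each operator submits a digest of its output,
// and a result is accepted only if at least `quorum` digests match. Keep
// `quorum` a strict majority so at most one digest can reach it.
fn consensus(digests: &[u64], quorum: usize) -> Option<u64> {
    let mut counts: HashMap<u64, usize> = HashMap::new();
    for d in digests {
        *counts.entry(*d).or_insert(0) += 1;
    }
    counts.into_iter().find(|&(_, n)| n >= quorum).map(|(d, _)| d)
}
```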

&lt;h3&gt;
  
  
  Hardware Reality
&lt;/h3&gt;

&lt;p&gt;TEE-based inference works but has memory and GPU constraints today.&lt;/p&gt;

&lt;p&gt;SGX memory is limited (~256MB), AMD SEV-SNP is more generous but still bounded, and GPU TEE support (NVIDIA H100 Confidential Computing) exists but isn't widely deployed. For now, TEE attestation covers model loading and result signing, with GPU computation verified through multi-operator consensus. This is a real tradeoff, not a solved problem.&lt;/p&gt;

&lt;h3&gt;
  
  
  What This Doesn't Solve
&lt;/h3&gt;

&lt;p&gt;Verification proves fidelity and execution integrity, not output quality or model safety.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Output quality.&lt;/strong&gt; We verify the right model ran. We don't verify the output is "good." Quality is subjective.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prompt injection.&lt;/strong&gt; If the model itself has been fine-tuned maliciously, the TEE faithfully runs the malicious model. Verification proves fidelity, not safety.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Side-channel leakage.&lt;/strong&gt; TEEs have known side-channel vulnerabilities. For most use cases, this risk is acceptable. For state-level adversaries, it isn't.&lt;/p&gt;




&lt;h2&gt;
  
  
  Sandbox Service
&lt;/h2&gt;

&lt;p&gt;Sandboxed execution runs untrusted code in an isolated container with strict resource limits.&lt;/p&gt;

&lt;p&gt;A sandbox service executes arbitrary code in an isolated environment. The customer sends code, the operator runs it, and returns the result.&lt;/p&gt;

&lt;p&gt;Sandboxes need isolation in both directions: protecting operators from malicious customer code, and protecting customers from operators who might observe data or tamper with results.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Blueprint
&lt;/h3&gt;

&lt;p&gt;The sandbox blueprint accepts code, a language, inputs, and a config, then executes in isolation.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; VM-level sandboxing is infrastructure provided by the Blueprint Manager (using the &lt;code&gt;vm-sandbox&lt;/code&gt; feature with cloud-hypervisor), not an application-level API. The types below are application code; the SDK provides the service framework.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;use&lt;/span&gt; &lt;span class="nn"&gt;blueprint_sdk&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;Router&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;use&lt;/span&gt; &lt;span class="nn"&gt;blueprint_sdk&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nn"&gt;tangle&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;TangleLayer&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;use&lt;/span&gt; &lt;span class="nn"&gt;blueprint_sdk&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nn"&gt;tangle&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nn"&gt;extract&lt;/span&gt;&lt;span class="p"&gt;::{&lt;/span&gt;&lt;span class="n"&gt;TangleArg&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;TangleResult&lt;/span&gt;&lt;span class="p"&gt;};&lt;/span&gt;
&lt;span class="k"&gt;use&lt;/span&gt; &lt;span class="nn"&gt;serde&lt;/span&gt;&lt;span class="p"&gt;::{&lt;/span&gt;&lt;span class="n"&gt;Deserialize&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Serialize&lt;/span&gt;&lt;span class="p"&gt;};&lt;/span&gt;
&lt;span class="k"&gt;use&lt;/span&gt; &lt;span class="nn"&gt;std&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nn"&gt;process&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;Command&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;// --- Application types (your code, not SDK types) ---&lt;/span&gt;

&lt;span class="nd"&gt;#[derive(Serialize,&lt;/span&gt; &lt;span class="nd"&gt;Deserialize)]&lt;/span&gt;
&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="n"&gt;SandboxConfig&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="n"&gt;max_memory_mb&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;u32&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="n"&gt;max_cpu_seconds&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;u32&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="n"&gt;allow_network&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;bool&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nd"&gt;#[derive(Serialize,&lt;/span&gt; &lt;span class="nd"&gt;Deserialize)]&lt;/span&gt;
&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="n"&gt;ExecutionResult&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="n"&gt;stdout&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="n"&gt;stderr&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="n"&gt;exit_code&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;i32&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="cd"&gt;/// Job 0: Execute code in isolated environment&lt;/span&gt;
&lt;span class="cd"&gt;///&lt;/span&gt;
&lt;span class="cd"&gt;/// The operator's infrastructure handles VM-level isolation.&lt;/span&gt;
&lt;span class="cd"&gt;/// This function implements the execution logic within that sandbox.&lt;/span&gt;
&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;execute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="nf"&gt;TangleArg&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="n"&gt;code&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;language&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;inputs&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;)):&lt;/span&gt; &lt;span class="n"&gt;TangleArg&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;Vec&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nb"&gt;u8&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;SandboxConfig&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;TangleResult&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;ExecutionResult&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// Application code: spawn a subprocess with resource limits&lt;/span&gt;
    &lt;span class="c1"&gt;// VM-level isolation is handled by the Blueprint Manager's&lt;/span&gt;
    &lt;span class="c1"&gt;// vm-sandbox feature, not by application code&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;output&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nn"&gt;Command&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;language&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="nf"&gt;.arg&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"-c"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="nf"&gt;.arg&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;code&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="nf"&gt;.output&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="nf"&gt;.expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"execution failed"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="nf"&gt;TangleResult&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ExecutionResult&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;stdout&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nn"&gt;String&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;from_utf8_lossy&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;output&lt;/span&gt;&lt;span class="py"&gt;.stdout&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="nf"&gt;.to_string&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
        &lt;span class="n"&gt;stderr&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nn"&gt;String&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;from_utf8_lossy&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;output&lt;/span&gt;&lt;span class="py"&gt;.stderr&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="nf"&gt;.to_string&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
        &lt;span class="n"&gt;exit_code&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;output&lt;/span&gt;&lt;span class="py"&gt;.status&lt;/span&gt;&lt;span class="nf"&gt;.code&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="nf"&gt;.unwrap_or&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;router&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="k"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;Router&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nn"&gt;Router&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="nf"&gt;.route&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;execute&lt;/span&gt;&lt;span class="nf"&gt;.layer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;TangleLayer&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When the Blueprint Manager runs with the &lt;code&gt;vm-sandbox&lt;/code&gt; feature enabled, each execution runs in a fresh VM (powered by cloud-hypervisor) with configurable memory limits, CPU quotas, and network isolation. VMs are destroyed after execution. No state persists.&lt;/p&gt;

&lt;h3&gt;
  
  
  Verification Approaches
&lt;/h3&gt;

&lt;p&gt;Verification strategy depends on whether the workload is TEE-attested, deterministic, or general.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TEE-enabled execution.&lt;/strong&gt; Hardware attestation proves the sandbox ran the code correctly. Strongest guarantee, but requires TEE-capable operators.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deterministic code (WASM, seeded execution).&lt;/strong&gt; Replay verification. Run the same inputs through multiple operators and compare outputs. Exact match required.&lt;/p&gt;
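&lt;p&gt;The exact-match rule fits in a few lines. A minimal sketch of the idea (illustrative, not SDK code), assuming each operator submits its raw output bytes:&lt;/p&gt;

```rust
/// Replay verification sketch: accept a result only if every operator
/// produced bit-identical output for the same inputs.
fn replay_verified(outputs: &[Vec<u8>]) -> bool {
    match outputs.split_first() {
        // Compare every remaining output against the first one.
        Some((first, rest)) => rest.iter().all(|o| o == first),
        // No results submitted: nothing to verify.
        None => false,
    }
}

fn main() {
    let agree = [b"42".to_vec(), b"42".to_vec(), b"42".to_vec()];
    let split = [b"42".to_vec(), b"41".to_vec()];
    assert!(replay_verified(&agree));
    assert!(!replay_verified(&split));
}
```

&lt;p&gt;Anything short of a byte-for-byte match is rejected, which is why this strategy only works for deterministic workloads.&lt;/p&gt;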

&lt;p&gt;&lt;strong&gt;General code (Python, Node).&lt;/strong&gt; Most real code isn't deterministic. Hash randomization, floating-point accumulation order, and timing-dependent behavior all vary across runs. For these workloads, multi-operator consensus with semantic similarity checking is the practical approach.&lt;/p&gt;
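&lt;p&gt;For intuition, here is one crude similarity metric, token-level Jaccard overlap, standing in for semantic comparison. This is illustrative only; a real consensus check might use embeddings or structured field comparison instead:&lt;/p&gt;

```rust
use std::collections::HashSet;

/// Jaccard similarity over whitespace tokens: |A intersect B| / |A union B|.
fn jaccard(a: &str, b: &str) -> f64 {
    let sa: HashSet<&str> = a.split_whitespace().collect();
    let sb: HashSet<&str> = b.split_whitespace().collect();
    if sa.is_empty() && sb.is_empty() {
        return 1.0; // two empty outputs trivially agree
    }
    let inter = sa.intersection(&sb).count() as f64;
    let union = sa.union(&sb).count() as f64;
    inter / union
}

/// Two operator outputs "agree" when their similarity clears a threshold,
/// tolerating benign run-to-run variation that exact matching would reject.
fn outputs_agree(a: &str, b: &str, threshold: f64) -> bool {
    jaccard(a, b) >= threshold
}

fn main() {
    assert!(outputs_agree("mean 3.2 std 0.41", "mean 3.2 std 0.41", 0.9));
    assert!(!outputs_agree("mean 3.2", "error: missing column", 0.5));
}
```

&lt;p&gt;The threshold is the tuning knob: too strict and benign nondeterminism causes false disputes, too loose and a cheating operator slips through.&lt;/p&gt;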

&lt;p&gt;&lt;strong&gt;Honest limitation:&lt;/strong&gt; For workloads that are neither TEE-attested nor consensus-verifiable, economic security (operator stake at risk) is the primary deterrent. This is weaker than cryptographic verification, but often sufficient for lower-stakes computation.&lt;/p&gt;

&lt;h3&gt;
  
  
  How Tangle AI Services Compare
&lt;/h3&gt;

&lt;p&gt;Tangle is, to our knowledge, the only platform combining TEE-attested inference, sandboxed execution, and x402 payments in a single stack.&lt;/p&gt;

&lt;p&gt;Here's how the full stack compares to existing inference and execution platforms:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;Tangle&lt;/th&gt;
&lt;th&gt;Together AI&lt;/th&gt;
&lt;th&gt;Replicate&lt;/th&gt;
&lt;th&gt;Ritual&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Inference verification&lt;/td&gt;
&lt;td&gt;TEE + multi-operator&lt;/td&gt;
&lt;td&gt;None (trust-based)&lt;/td&gt;
&lt;td&gt;None&lt;/td&gt;
&lt;td&gt;On-chain proof&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Code execution&lt;/td&gt;
&lt;td&gt;Sandboxed + verified&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;Container-based&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Payment model&lt;/td&gt;
&lt;td&gt;x402 micropayments&lt;/td&gt;
&lt;td&gt;API key + billing&lt;/td&gt;
&lt;td&gt;Per-prediction&lt;/td&gt;
&lt;td&gt;Token-based&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Model substitution protection&lt;/td&gt;
&lt;td&gt;Multi-operator consensus (canary prompts on roadmap)&lt;/td&gt;
&lt;td&gt;None&lt;/td&gt;
&lt;td&gt;None&lt;/td&gt;
&lt;td&gt;ZK attestation&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Agent-native&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Partial&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Real Use Case: AI Agent Tool Execution
&lt;/h3&gt;

&lt;p&gt;Agents generate code via inference and execute it in a sandbox, getting verified results in one pipeline.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Agent generates this code via inference service
&lt;/span&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;analyze_data&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;pandas&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;pd&lt;/span&gt;
    &lt;span class="n"&gt;df&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;pd&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;DataFrame&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;mean&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;value&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;mean&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;std&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;value&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;std&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;outliers&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;value&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;value&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;mean&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;value&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;std&lt;/span&gt;&lt;span class="p"&gt;()].&lt;/span&gt;&lt;span class="nf"&gt;to_dict&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The sandbox executes it safely, the agent gets results, and with TEE the operator can't see the data. Chain this with the inference service and you get the full pattern from the top of this post.&lt;/p&gt;




&lt;h2&gt;
  
  
  What's Shipped vs. What's Coming
&lt;/h2&gt;

&lt;p&gt;The Blueprint SDK, multi-operator verification, and x402 settlement are live; TEE integration is next.&lt;/p&gt;

&lt;p&gt;Developers deserve to know what works today and what's still being built. &lt;strong&gt;Blueprint SDK v0.1.0-alpha.22 (Rust 2024 edition, minimum Rust 1.88) enables building verifiable AI inference services in under 200 lines of Rust&lt;/strong&gt;, using the same Router and extractor patterns shown throughout this series.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Shipped.&lt;/strong&gt; The Blueprint SDK (Router, TangleArg, TangleResult, TangleLayer), multi-operator verification, container isolation, and x402 payment settlement. You can build and deploy inference and sandbox services today using these primitives.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Design complete, building now.&lt;/strong&gt; TEE attestation integration with the Blueprint SDK, and the on-chain model registry for hash-verified model loading.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Roadmap.&lt;/strong&gt; A &lt;strong&gt;canary prompt&lt;/strong&gt; is a challenge input with a known expected output used to detect model substitution without the operator's knowledge. Canary prompts would run on a configurable interval, providing ongoing model verification without degrading throughput. Full GPU-in-TEE support will arrive as NVIDIA hardware matures. WASM deterministic replay will enable bit-exact verification across operators.&lt;/p&gt;
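&lt;p&gt;To make the canary idea concrete, here is a minimal sketch: a table of challenge prompts with known expected outputs, checked against operator responses. This is hypothetical code, since canary prompts are a roadmap item, and a production check on LLM output would likely need fuzzy rather than exact matching:&lt;/p&gt;

```rust
use std::collections::HashMap;

/// Returns None for ordinary prompts, Some(passed) for canary prompts.
fn canary_check(canaries: &HashMap<&str, &str>, prompt: &str, response: &str) -> Option<bool> {
    canaries.get(prompt).map(|expected| *expected == response)
}

fn main() {
    let mut canaries = HashMap::new();
    canaries.insert("What is 2 + 2?", "4");

    // Ordinary traffic is untouched by the check.
    assert_eq!(canary_check(&canaries, "Summarize this text", "a summary"), None);
    // A correct canary response passes; a wrong one suggests substitution.
    assert_eq!(canary_check(&canaries, "What is 2 + 2?", "4"), Some(true));
    assert_eq!(canary_check(&canaries, "What is 2 + 2?", "four"), Some(false));
}
```

&lt;p&gt;Because canaries are indistinguishable from normal jobs, an operator can't selectively serve the real model only when being tested.&lt;/p&gt;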

&lt;p&gt;The gap between "shipped" and "roadmap" is real. Multi-operator consensus and economic security work today. Hardware-level attestation is close. Full deterministic replay for arbitrary code is a harder research problem. We're building in that order because each layer adds value independently.&lt;/p&gt;




&lt;h2&gt;
  
  
  Frequently Asked Questions
&lt;/h2&gt;

&lt;p&gt;Common questions about Tangle's AI inference, sandbox execution, and verification services.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is verifiable AI inference?&lt;/strong&gt;&lt;br&gt;
Verifiable AI inference is running a model inside a TEE so the customer receives cryptographic proof of which model executed their prompt.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How does TEE attestation work for AI models?&lt;/strong&gt;&lt;br&gt;
TEE hardware signs an attestation proving that a specific model binary was loaded inside an isolated enclave the operator cannot observe.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is a sandboxed code execution service?&lt;/strong&gt;&lt;br&gt;
A sandboxed execution service runs untrusted code in an isolated container with no capabilities, no persistent state, and strict resource limits.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How does x402 payment work for AI services?&lt;/strong&gt;&lt;br&gt;
An agent sends an HTTP request with a signed payment header; the operator verifies payment, executes the job, and settlement happens on-chain automatically.&lt;/p&gt;
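&lt;p&gt;As plain control flow, that loop looks roughly like this. Every name below is illustrative shorthand, not an actual x402 or Blueprint SDK type:&lt;/p&gt;

```rust
/// Illustrative stand-ins for an x402-style gateway (not real SDK types).
struct PaymentHeader {
    amount: u64,        // offered payment in the smallest token unit
    signature_ok: bool, // stands in for real signature verification
}

enum Response {
    PaymentRequired { price: u64 }, // HTTP 402 plus a price quote
    Ok(String),                     // job result once payment clears
}

fn handle(payment: Option<PaymentHeader>, price: u64) -> Response {
    match payment {
        // Valid signed payment covering the price: execute the job.
        Some(p) if p.signature_ok && p.amount >= price => Response::Ok("job result".to_string()),
        // Missing, unsigned, or insufficient payment: respond 402 with a quote.
        _ => Response::PaymentRequired { price },
    }
}

fn main() {
    assert!(matches!(handle(None, 100), Response::PaymentRequired { price: 100 }));
    let paid = PaymentHeader { amount: 100, signature_ok: true };
    assert!(matches!(handle(Some(paid), 100), Response::Ok(_)));
}
```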

&lt;p&gt;&lt;strong&gt;Can AI agents chain inference and code execution together?&lt;/strong&gt;&lt;br&gt;
Yes. A single blueprint can call the inference service to generate code and the sandbox service to execute it, returning a fully verified result chain.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is canary prompt verification?&lt;/strong&gt;&lt;br&gt;
A canary prompt is a challenge input with a known expected output, sent periodically to detect whether an operator has substituted a cheaper model.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How does Tangle prevent model substitution?&lt;/strong&gt;&lt;br&gt;
Tangle uses multi-operator consensus (multiple operators must agree on results) and TEE hardware attestation to detect substitution. Weight hash verification via an on-chain model registry and canary prompts are on the roadmap.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;p&gt;Day 6 covers Tangle's roadmap, ecosystem positioning, and infrastructure bets.&lt;/p&gt;

&lt;p&gt;That post maps where Tangle is headed: the features in the pipeline, where we fit in the verifiable-compute landscape, and the bets we're making on where AI infrastructure goes next.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Start building:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.tangle.tools" rel="noopener noreferrer"&gt;Quickstart docs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/tangle-network/blueprint-template" rel="noopener noreferrer"&gt;Blueprint template repo&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Join the conversation:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://discord.gg/cv8EfJu3Tn" rel="noopener noreferrer"&gt;Discord&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://twitter.com/tangle_network" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>blockchain</category>
      <category>rust</category>
      <category>web3</category>
      <category>ai</category>
    </item>
    <item>
      <title>Building on Tangle: From Idea to Production Service</title>
      <dc:creator>Tangle Network</dc:creator>
      <pubDate>Sun, 01 Mar 2026 03:04:23 +0000</pubDate>
      <link>https://dev.to/tangle_network/building-on-tangle-from-idea-to-production-service-5g54</link>
      <guid>https://dev.to/tangle_network/building-on-tangle-from-idea-to-production-service-5g54</guid>
      <description>&lt;h1&gt;
  
  
  Building on Tangle: From Idea to Production Service
&lt;/h1&gt;

&lt;p&gt;&lt;em&gt;Day 4 of the Tangle Re-Introduction Series&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;The previous posts covered why decentralized infrastructure matters and how verification works. This one is practical: how do you actually build something?&lt;/p&gt;

&lt;p&gt;Most "developer experience" posts in crypto are marketing dressed as documentation. They show a hello-world example, claim it's easy, and leave you to figure out the hard parts yourself. This post tries to be honest about what building on Tangle actually involves, where the rough edges are, and what the path to production looks like.&lt;/p&gt;

&lt;h2&gt;
  
  
  What You're Building
&lt;/h2&gt;

&lt;p&gt;You create a blueprint that operators run and customers pay for; you earn a share of every transaction without managing infrastructure yourself.&lt;/p&gt;

&lt;p&gt;When you build on Tangle, you're creating a &lt;strong&gt;blueprint&lt;/strong&gt;: a template that defines a type of service. Operators register to run your blueprint. Customers pay to use instances of your service. You earn a share of every transaction.&lt;/p&gt;

&lt;p&gt;This is different from traditional SaaS:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Traditional SaaS&lt;/th&gt;
&lt;th&gt;Tangle Blueprint&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;You run the infrastructure&lt;/td&gt;
&lt;td&gt;Operators run it&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;You handle scaling&lt;/td&gt;
&lt;td&gt;Network handles it&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;You're liable for uptime&lt;/td&gt;
&lt;td&gt;Operators stake collateral&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Revenue = your pricing&lt;/td&gt;
&lt;td&gt;Revenue = share of operator fees&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Unlike traditional serverless platforms such as AWS Lambda or Vercel Functions, Tangle blueprints give developers a revenue share from every transaction their code processes. The tradeoff: you give up direct control in exchange for not running infrastructure. Whether that trade is worth it depends on what you're building.&lt;/p&gt;

&lt;h3&gt;
  
  
  How the Blueprint SDK Compares
&lt;/h3&gt;

&lt;p&gt;The Blueprint SDK offers Rust-native development with integrated testing, AI support, and built-in payments.&lt;/p&gt;

&lt;p&gt;Here's how the Blueprint SDK stacks up against other platforms for building verifiable services:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;Blueprint SDK&lt;/th&gt;
&lt;th&gt;Eigenlayer AVS&lt;/th&gt;
&lt;th&gt;Ritual&lt;/th&gt;
&lt;th&gt;Giza&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Primary language&lt;/td&gt;
&lt;td&gt;Rust&lt;/td&gt;
&lt;td&gt;Go/Solidity&lt;/td&gt;
&lt;td&gt;Python&lt;/td&gt;
&lt;td&gt;Cairo&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Setup complexity&lt;/td&gt;
&lt;td&gt;Single CLI command&lt;/td&gt;
&lt;td&gt;Multi-contract deploy&lt;/td&gt;
&lt;td&gt;Docker + API&lt;/td&gt;
&lt;td&gt;Circuit compilation&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Testing&lt;/td&gt;
&lt;td&gt;Integrated test harness&lt;/td&gt;
&lt;td&gt;Manual&lt;/td&gt;
&lt;td&gt;Manual&lt;/td&gt;
&lt;td&gt;Prover testing&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;AI-native&lt;/td&gt;
&lt;td&gt;Yes (inference + sandbox)&lt;/td&gt;
&lt;td&gt;No (generic)&lt;/td&gt;
&lt;td&gt;Yes (inference only)&lt;/td&gt;
&lt;td&gt;Yes (ZK ML only)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Payment&lt;/td&gt;
&lt;td&gt;Built-in x402&lt;/td&gt;
&lt;td&gt;Custom&lt;/td&gt;
&lt;td&gt;Custom&lt;/td&gt;
&lt;td&gt;Custom&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  When Tangle Makes Sense
&lt;/h2&gt;

&lt;p&gt;Tangle fits services where trust, accountability, or multi-party coordination matter more than raw latency.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Good fits:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Services where trust matters more than latency (custody, signing, verification)&lt;/li&gt;
&lt;li&gt;Compute you want others to run but need accountability (AI inference, code execution)&lt;/li&gt;
&lt;li&gt;Multi-party protocols that need distributed operators (MPC, threshold signatures)&lt;/li&gt;
&lt;li&gt;Infrastructure you'd rather not operate yourself&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Poor fits:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Sub-10ms latency requirements (blockchain coordination adds overhead)&lt;/li&gt;
&lt;li&gt;Simple CRUD apps (traditional infrastructure is simpler)&lt;/li&gt;
&lt;li&gt;Services where you need direct customer relationships (blueprints abstract this)&lt;/li&gt;
&lt;li&gt;Anything requiring proprietary infrastructure you control&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you're building a standard web app, use Vercel. Seriously. Tangle is for services where decentralized operation and cryptoeconomic accountability provide value that justifies the complexity.&lt;/p&gt;

&lt;h2&gt;
  
  
  The SDK
&lt;/h2&gt;

&lt;p&gt;Blueprint SDK is a Rust framework for building verifiable services on Tangle, using async job functions with typed extractors.&lt;/p&gt;

&lt;p&gt;The Blueprint SDK is Rust-based, with TypeScript and Python SDKs on the roadmap. If you're comfortable with Rust, the learning curve is manageable. If you're not, you'll be learning Rust and Tangle simultaneously, which is harder.&lt;/p&gt;

&lt;h3&gt;
  
  
  Core Concepts
&lt;/h3&gt;

&lt;p&gt;Jobs, routers, layers, and extractors are the four building blocks of every blueprint.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Jobs&lt;/strong&gt; are units of work: a customer submits a job to a blueprint, one or more operators execute it, and results come back. Jobs have IDs, typed inputs, and typed outputs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Router&lt;/strong&gt; wires jobs to handlers. You define which function handles which job ID, and what protocol layer processes it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Layers&lt;/strong&gt; add protocol-specific behavior. &lt;code&gt;TangleLayer&lt;/code&gt; handles Tangle EVM integration, including job routing and result submission.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Extractors&lt;/strong&gt; parse job inputs. &lt;strong&gt;&lt;code&gt;TangleArg&amp;lt;T&amp;gt;&lt;/code&gt;&lt;/strong&gt; extracts ABI-encoded arguments from incoming job data. &lt;strong&gt;&lt;code&gt;TangleResult&amp;lt;T&amp;gt;&lt;/code&gt;&lt;/strong&gt; wraps return values for on-chain submission. Together, these extractors provide type-safe input parsing and output encoding without manual serialization.&lt;/p&gt;

&lt;h3&gt;
  
  
  A Minimal Blueprint
&lt;/h3&gt;

&lt;p&gt;A working blueprint needs a Cargo.toml, one async function per job, and a router.&lt;/p&gt;

&lt;p&gt;First, your &lt;code&gt;Cargo.toml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;&lt;span class="nn"&gt;[package]&lt;/span&gt;
&lt;span class="py"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"squaring-service"&lt;/span&gt;
&lt;span class="py"&gt;version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"0.1.0"&lt;/span&gt;
&lt;span class="py"&gt;edition&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"2024"&lt;/span&gt;
&lt;span class="py"&gt;rust-version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"1.88"&lt;/span&gt;

&lt;span class="nn"&gt;[dependencies]&lt;/span&gt;
&lt;span class="py"&gt;blueprint-sdk&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="py"&gt;version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"0.1.0-alpha.22"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="py"&gt;features&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;"tangle"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="py"&gt;tokio&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="py"&gt;version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="py"&gt;features&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;"full"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then the blueprint itself:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;use&lt;/span&gt; &lt;span class="nn"&gt;blueprint_sdk&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;Router&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;use&lt;/span&gt; &lt;span class="nn"&gt;blueprint_sdk&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nn"&gt;tangle&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;TangleLayer&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;use&lt;/span&gt; &lt;span class="nn"&gt;blueprint_sdk&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nn"&gt;tangle&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nn"&gt;extract&lt;/span&gt;&lt;span class="p"&gt;::{&lt;/span&gt;&lt;span class="n"&gt;TangleArg&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;TangleResult&lt;/span&gt;&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="cd"&gt;/// Job 0: Square a number&lt;/span&gt;
&lt;span class="cd"&gt;///&lt;/span&gt;
&lt;span class="cd"&gt;/// The function receives ABI-encoded input via TangleArg&lt;/span&gt;
&lt;span class="cd"&gt;/// and returns ABI-encoded output via TangleResult.&lt;/span&gt;
&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;square&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;TangleArg&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;,)):&lt;/span&gt; &lt;span class="n"&gt;TangleArg&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;u64&lt;/span&gt;&lt;span class="p"&gt;,)&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;TangleResult&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nb"&gt;u64&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nf"&gt;TangleResult&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="cd"&gt;/// Job 1: Square with multi-operator verification&lt;/span&gt;
&lt;span class="cd"&gt;///&lt;/span&gt;
&lt;span class="cd"&gt;/// Same logic, but the job is configured to require&lt;/span&gt;
&lt;span class="cd"&gt;/// multiple operator results before completion.&lt;/span&gt;
&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;verified_square&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;TangleArg&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;,)):&lt;/span&gt; &lt;span class="n"&gt;TangleArg&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;u64&lt;/span&gt;&lt;span class="p"&gt;,)&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;TangleResult&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nb"&gt;u64&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nf"&gt;TangleResult&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="cd"&gt;/// Router wires jobs to the Tangle protocol layer&lt;/span&gt;
&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;router&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="k"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;Router&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nn"&gt;Router&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="nf"&gt;.route&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;square&lt;/span&gt;&lt;span class="nf"&gt;.layer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;TangleLayer&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
        &lt;span class="nf"&gt;.route&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;verified_square&lt;/span&gt;&lt;span class="nf"&gt;.layer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;TangleLayer&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;What this shows:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Jobs are plain async functions with extractors&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;TangleArg&amp;lt;(T,)&amp;gt;&lt;/code&gt; extracts typed input from ABI-encoded job data (primitive types are tuple-wrapped for ABI compatibility; structs defined with &lt;code&gt;sol!&lt;/code&gt; do not need wrapping)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;TangleResult&amp;lt;T&amp;gt;&lt;/code&gt; wraps output for on-chain submission&lt;/li&gt;
&lt;li&gt;Router maps job IDs to handlers with protocol layers&lt;/li&gt;
&lt;li&gt;No macros required for basic functionality&lt;/li&gt;
&lt;/ul&gt;
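&lt;p&gt;To see the tuple-wrapping pattern in isolation, here is a minimal stand-in — &lt;code&gt;TangleArg&lt;/code&gt; is redefined locally and specialized to a single &lt;code&gt;u64&lt;/code&gt;, so this is not the real SDK type, just the destructuring shape a job handler uses:&lt;/p&gt;

```rust
// Hypothetical stand-in for the SDK's TangleArg extractor, specialized
// to one u64 so the tuple-wrapping pattern is visible without generics.
struct TangleArg(pub (u64,));

// Destructuring the one-element tuple in the parameter position mirrors
// how Blueprint jobs unpack ABI-encoded primitive inputs.
fn square(TangleArg((x,)): TangleArg) -> u64 {
    x * x
}

fn main() {
    // the caller wraps the primitive; the handler sees a plain u64
    assert_eq!(square(TangleArg((5,))), 25);
    println!("ok");
}
```

&lt;p&gt;The pattern match in the parameter list is ordinary Rust; the SDK's real extractor is generic over the argument tuple, but the ergonomics are the same.&lt;/p&gt;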

&lt;h3&gt;
  
  
  What the SDK Handles
&lt;/h3&gt;

&lt;p&gt;The SDK manages protocol communication, job routing, operator lifecycle, and fee distribution.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Protocol communication (you don't touch raw blockchain)&lt;/li&gt;
&lt;li&gt;Job routing and result submission&lt;/li&gt;
&lt;li&gt;Operator lifecycle management&lt;/li&gt;
&lt;li&gt;Fee distribution&lt;/li&gt;
&lt;li&gt;Event emission for indexing&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  What You Handle
&lt;/h3&gt;

&lt;p&gt;You write business logic, set operator requirements, and define pricing recommendations.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Business logic (the actual computation)&lt;/li&gt;
&lt;li&gt;Operator requirements (who can run your service)&lt;/li&gt;
&lt;li&gt;Pricing recommendations (though operators set final prices)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Verification Through Aggregation
&lt;/h2&gt;

&lt;p&gt;Verification is a protocol property, not application code: you configure how many operators must agree, and the aggregation service enforces it.&lt;/p&gt;

&lt;p&gt;The previous post mentioned verification. Here's how it actually works in the SDK:&lt;/p&gt;

&lt;p&gt;It isn't a function you write; it's configured when you deploy. Jobs can require:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Single operator&lt;/strong&gt; (1 result completes the job)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi-operator&lt;/strong&gt; (N results required before completion)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Threshold consensus&lt;/strong&gt; (M-of-N operators must agree)
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Same job logic, different aggregation requirements&lt;/span&gt;
&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;square&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;TangleArg&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;,)):&lt;/span&gt; &lt;span class="n"&gt;TangleArg&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;u64&lt;/span&gt;&lt;span class="p"&gt;,)&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;TangleResult&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nb"&gt;u64&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nf"&gt;TangleResult&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;// Configured at deployment:&lt;/span&gt;
&lt;span class="c1"&gt;// - Job 0: 1 operator required&lt;/span&gt;
&lt;span class="c1"&gt;// - Job 1: 2 operators required (verified)&lt;/span&gt;
&lt;span class="c1"&gt;// - Job 2: 3 operators required (consensus)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Tangle's aggregation service collects operator results, verifies their BLS signatures, and finalizes the job only when the configured threshold of matching results is met.&lt;/strong&gt; If operators submit different results, the protocol detects the disagreement.&lt;/p&gt;
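&lt;p&gt;The threshold logic can be sketched in plain Rust. This is illustrative only — it is not the SDK's aggregation code, and it skips signature verification entirely to show just the counting rule:&lt;/p&gt;

```rust
// Illustrative M-of-N aggregation rule: finalize only when `threshold`
// operators report the same value, and surface disagreement otherwise.
// Fixed-size array keeps the sketch dependency-free.
fn aggregate(results: [u64; 3], threshold: usize) -> (bool, u64) {
    let candidate = results[0];
    let mut matching: usize = 0;
    for r in results {
        if r == candidate {
            matching += 1;
        }
    }
    // finalized only if enough operators agree on the candidate value
    (matching >= threshold, candidate)
}

fn main() {
    // 2-of-3: unanimous agreement finalizes
    assert_eq!(aggregate([25, 25, 25], 2), (true, 25));
    // one dissenting operator still meets a 2-of-3 threshold
    assert_eq!(aggregate([25, 25, 7], 2), (true, 25));
    // disagreement below threshold blocks finalization
    assert_eq!(aggregate([25, 7, 9], 2), (false, 25));
    println!("ok");
}
```

&lt;p&gt;The real service additionally verifies each operator's BLS signature before a result counts toward the threshold.&lt;/p&gt;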

&lt;h3&gt;
  
  
  Slashing
&lt;/h3&gt;

&lt;p&gt;Slashing penalizes operators who disagree or fail to respond, enforced at the contract level.&lt;/p&gt;

&lt;p&gt;When operators disagree or fail to respond, the protocol can slash their stake. This is handled at the contract level, not in your Rust code. You configure:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Which behaviors trigger slashing&lt;/li&gt;
&lt;li&gt;How much stake is at risk&lt;/li&gt;
&lt;li&gt;Grace periods for operator response&lt;/li&gt;
&lt;/ul&gt;
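&lt;p&gt;The three knobs above could be modeled roughly like this — the struct and field names are hypothetical for illustration, not the actual contract interface:&lt;/p&gt;

```rust
// Hypothetical shape of a slashing policy. Field names are illustrative;
// the real configuration lives at the contract level, not in Rust.
struct SlashingPolicy {
    slash_on_disagreement: bool, // which behaviors trigger slashing
    slash_on_timeout: bool,
    stake_at_risk_bps: u32,      // how much stake is at risk, in basis points
    grace_period_secs: u64,      // response window before a timeout slash
}

fn main() {
    let policy = SlashingPolicy {
        slash_on_disagreement: true,
        slash_on_timeout: true,
        stake_at_risk_bps: 500, // 5% of stake per violation
        grace_period_secs: 300,
    };
    assert!(policy.slash_on_disagreement);
    assert!(policy.slash_on_timeout);
    // 5% of a 10_000-token stake is at risk per violation
    let at_risk = 10_000u64 * policy.stake_at_risk_bps as u64 / 10_000;
    assert_eq!(at_risk, 500);
    assert_eq!(policy.grace_period_secs, 300);
    println!("ok");
}
```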

&lt;h2&gt;
  
  
  Local Development
&lt;/h2&gt;

&lt;p&gt;The local dev environment simulates the full Tangle network so you can iterate without touching testnet.&lt;/p&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;p&gt;You need Rust 1.88+, Docker, and Node.js 18+ installed locally.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Rust 1.88+ (stable, 2024 edition)&lt;/li&gt;
&lt;li&gt;Docker (for local network)&lt;/li&gt;
&lt;li&gt;Node.js 18+ (for tooling)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Setup
&lt;/h3&gt;

&lt;p&gt;Run four commands to install the CLI, scaffold a project, build, and start the local network.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Install the cargo-tangle CLI (scaffolds, builds, tests, and deploys in one toolchain)&lt;/span&gt;
cargo &lt;span class="nb"&gt;install &lt;/span&gt;cargo-tangle &lt;span class="nt"&gt;--git&lt;/span&gt; https://github.com/tangle-network/blueprint

&lt;span class="c"&gt;# Create a new project&lt;/span&gt;
cargo tangle blueprint create &lt;span class="nt"&gt;--name&lt;/span&gt; my-service
&lt;span class="nb"&gt;cd &lt;/span&gt;my-service

&lt;span class="c"&gt;# Build&lt;/span&gt;
cargo build

&lt;span class="c"&gt;# Run locally against the Tangle protocol&lt;/span&gt;
cargo tangle blueprint run &lt;span class="nt"&gt;--protocol&lt;/span&gt; tangle
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The local environment simulates the full network: a Tangle node, test operators, and mock customers. You can test job submission and operator behavior without touching testnet. Notably, cargo-tangle scaffolds a new blueprint project in under 10 seconds, so you spend your time writing logic rather than wiring boilerplate.&lt;/p&gt;

&lt;h3&gt;
  
  
  Testing
&lt;/h3&gt;

&lt;p&gt;Write standard Rust tests with SDK utilities for both unit and integration coverage.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="nd"&gt;#[cfg(test)]&lt;/span&gt;
&lt;span class="k"&gt;mod&lt;/span&gt; &lt;span class="n"&gt;tests&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;use&lt;/span&gt; &lt;span class="k"&gt;super&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;use&lt;/span&gt; &lt;span class="nn"&gt;blueprint_sdk&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nn"&gt;testing&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nn"&gt;utils&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;setup_log&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="nd"&gt;#[tokio::test]&lt;/span&gt;
    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;test_square_correct&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nf"&gt;setup_log&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

        &lt;span class="c1"&gt;// Direct function test (primitives are tuple-wrapped for ABI compatibility)&lt;/span&gt;
        &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;square&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;TangleArg&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="mi"&gt;5u64&lt;/span&gt;&lt;span class="p"&gt;,)))&lt;/span&gt;&lt;span class="k"&gt;.await&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="nd"&gt;assert_eq!&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;25&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="nd"&gt;#[tokio::test]&lt;/span&gt;
    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;test_square_overflow&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nf"&gt;setup_log&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

        &lt;span class="c1"&gt;// Test edge cases&lt;/span&gt;
        &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;square&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;TangleArg&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nn"&gt;u64&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;MAX&lt;/span&gt;&lt;span class="p"&gt;,)))&lt;/span&gt;&lt;span class="k"&gt;.await&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="c1"&gt;// Depending on your requirements, this might panic or wrap&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
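&lt;p&gt;The overflow test above deliberately leaves the policy open. Two standard-library options for closing it are &lt;code&gt;checked_mul&lt;/code&gt;, which signals overflow explicitly, and &lt;code&gt;saturating_mul&lt;/code&gt;, which clamps — which one fits depends on your job's semantics:&lt;/p&gt;

```rust
// saturating_mul clamps to u64::MAX instead of panicking (debug builds)
// or wrapping (release builds).
fn square_saturating(x: u64) -> u64 {
    x.saturating_mul(x)
}

fn main() {
    assert_eq!(square_saturating(5), 25);
    // u64::MAX squared overflows; saturating keeps the result at u64::MAX
    assert_eq!(square_saturating(u64::MAX), u64::MAX);
    // checked_mul makes the overflow an explicit None instead
    assert!(u64::MAX.checked_mul(u64::MAX).is_none());
    assert_eq!(5u64.checked_mul(5), Some(25));
    println!("ok");
}
```

&lt;p&gt;For a paid job, the explicit &lt;code&gt;checked_mul&lt;/code&gt; path is usually preferable: a silently clamped result is still a wrong result that operators would sign.&lt;/p&gt;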



&lt;p&gt;For full integration testing with aggregation, the SDK provides testing utilities. The built-in test harness simulates multi-operator verification locally, so you can validate aggregation thresholds before deploying to testnet:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;use&lt;/span&gt; &lt;span class="nn"&gt;blueprint_sdk&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nn"&gt;testing&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nn"&gt;utils&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;setup_log&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;use&lt;/span&gt; &lt;span class="nn"&gt;blueprint_tangle_aggregation_svc&lt;/span&gt;&lt;span class="p"&gt;::{&lt;/span&gt;
    &lt;span class="n"&gt;AggregationService&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ServiceConfig&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;SubmitSignatureRequest&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="nd"&gt;#[tokio::test]&lt;/span&gt;
&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;test_aggregation_flow&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nf"&gt;setup_log&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;service&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nn"&gt;AggregationService&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nn"&gt;ServiceConfig&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;default&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;

    &lt;span class="c1"&gt;// Initialize task requiring 2 operator signatures&lt;/span&gt;
    &lt;span class="n"&gt;service&lt;/span&gt;&lt;span class="nf"&gt;.init_task&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;service_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;call_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;output_bytes&lt;/span&gt;&lt;span class="nf"&gt;.clone&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
        &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// total operators&lt;/span&gt;
        &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// threshold required&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;?&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="c1"&gt;// Submit signatures from operators&lt;/span&gt;
    &lt;span class="c1"&gt;// Verify threshold behavior&lt;/span&gt;
    &lt;span class="c1"&gt;// Check aggregated result&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Debugging
&lt;/h3&gt;

&lt;p&gt;Use &lt;code&gt;cargo tangle blueprint debug&lt;/code&gt; and &lt;code&gt;cargo tangle blueprint jobs show&lt;/code&gt; to trace issues locally.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Debug a running blueprint&lt;/span&gt;
cargo tangle blueprint debug

&lt;span class="c"&gt;# Check job status&lt;/span&gt;
cargo tangle blueprint &lt;span class="nb"&gt;jobs &lt;/span&gt;list
cargo tangle blueprint &lt;span class="nb"&gt;jobs &lt;/span&gt;show &amp;lt;job-id&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Testnet Deployment
&lt;/h2&gt;

&lt;p&gt;Testnet uses real Tangle infrastructure with test tokens so you can simulate production before going live.&lt;/p&gt;

&lt;p&gt;When local testing passes, deploy to testnet:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Deploy to testnet&lt;/span&gt;
cargo tangle blueprint deploy &lt;span class="nt"&gt;--target&lt;/span&gt; tangle &lt;span class="nt"&gt;--network&lt;/span&gt; testnet

&lt;span class="c"&gt;# Your blueprint is now live at:&lt;/span&gt;
&lt;span class="c"&gt;# Blueprint ID: 0x...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Keys are managed separately via the &lt;code&gt;cargo tangle key&lt;/code&gt; subcommand (generate, import, export, list).&lt;/p&gt;

&lt;p&gt;Operators can register with test stake, and you can simulate real usage patterns before anything of value is on the line.&lt;/p&gt;

&lt;h2&gt;
  
  
  Production Deployment
&lt;/h2&gt;

&lt;p&gt;Production deployment registers your blueprint on mainnet, where real operators stake real collateral and real customers pay for your service.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pre-Launch Checklist
&lt;/h3&gt;

&lt;p&gt;Verify tests, thresholds, slashing conditions, fees, and operator documentation before mainnet.&lt;/p&gt;

&lt;p&gt;Before mainnet:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;[ ] All tests pass (unit, integration, e2e)&lt;/li&gt;
&lt;li&gt;[ ] Aggregation thresholds match your security model&lt;/li&gt;
&lt;li&gt;[ ] Slashing conditions are well-defined&lt;/li&gt;
&lt;li&gt;[ ] Operator requirements match your needs&lt;/li&gt;
&lt;li&gt;[ ] Fee structure makes economic sense&lt;/li&gt;
&lt;li&gt;[ ] Documentation exists for operators&lt;/li&gt;
&lt;li&gt;[ ] You've tested with real operators on testnet&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Deployment
&lt;/h3&gt;

&lt;p&gt;Deploy to mainnet with one command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;cargo tangle blueprint deploy &lt;span class="nt"&gt;--target&lt;/span&gt; tangle &lt;span class="nt"&gt;--network&lt;/span&gt; mainnet
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  After Launch
&lt;/h3&gt;

&lt;p&gt;Operators discover, evaluate, register, and begin processing jobs on your live blueprint.&lt;/p&gt;

&lt;p&gt;Your blueprint is now live. What happens:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Operators discover it&lt;/strong&gt; via the registry&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Operators evaluate&lt;/strong&gt; if it's worth running&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Operators register&lt;/strong&gt; by meeting requirements and staking&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Customers find&lt;/strong&gt; your service&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Jobs flow&lt;/strong&gt; through registered operators&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;You earn&lt;/strong&gt; a share of every transaction&lt;/li&gt;
&lt;/ol&gt;
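&lt;p&gt;The revenue mechanics in step 6 reduce to simple arithmetic. The basis-point split below is illustrative — the developer share is configurable per blueprint, and these numbers are not protocol constants:&lt;/p&gt;

```rust
// Illustrative per-job fee split: the developer receives a configurable
// share in basis points; operators receive the remainder.
fn split_fee(fee: u64, developer_bps: u64) -> (u64, u64) {
    let developer_cut = fee * developer_bps / 10_000;
    (developer_cut, fee - developer_cut)
}

fn main() {
    // a 1_000-unit job fee with a 10% (1_000 bps) developer share
    let (dev, ops) = split_fee(1_000, 1_000);
    assert_eq!(dev, 100);
    assert_eq!(ops, 900);
    println!("ok");
}
```

&lt;p&gt;In practice the service contract performs this split automatically on every completed job; there is no invoicing step.&lt;/p&gt;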

&lt;h2&gt;
  
  
  What Can Go Wrong
&lt;/h2&gt;

&lt;p&gt;Common failure modes include insufficient operators, collusion, economic attacks, and early SDK bugs.&lt;/p&gt;

&lt;p&gt;Being honest about failure modes:&lt;/p&gt;

&lt;h3&gt;
  
  
  No Operators
&lt;/h3&gt;

&lt;p&gt;Zero operators registered means zero service availability.&lt;/p&gt;

&lt;p&gt;If your blueprint isn't profitable enough, operators won't run it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Mitigation:&lt;/strong&gt; Set realistic fee structures. Start with guaranteed operators (run some yourself initially). Make operator setup easy.&lt;/p&gt;

&lt;h3&gt;
  
  
  Operator Collusion
&lt;/h3&gt;

&lt;p&gt;Colluding operators can defeat verification, which is why operator diversity matters.&lt;/p&gt;

&lt;p&gt;If every operator assigned to a job colludes on the same wrong result, threshold verification can't catch it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Mitigation:&lt;/strong&gt; Require geographic distribution, different staking sources, TEE attestation from different manufacturers.&lt;/p&gt;

&lt;h3&gt;
  
  
  Economic Attacks
&lt;/h3&gt;

&lt;p&gt;Rational attackers will exploit any gap where value at risk exceeds total operator stake.&lt;/p&gt;

&lt;p&gt;If the value a job protects exceeds the total stake securing it, attacking is profitable for a rational adversary.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Mitigation:&lt;/strong&gt; Match stake requirements to value at risk.&lt;/p&gt;
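&lt;p&gt;The stake-versus-value comparison is worth making explicit as a sizing check. The numbers below are illustrative:&lt;/p&gt;

```rust
// Back-of-the-envelope rationality check: an attack pays off when the
// extractable value exceeds the stake an attacker would forfeit.
fn attack_is_rational(value_at_risk: u64, total_operator_stake: u64) -> bool {
    value_at_risk > total_operator_stake
}

fn main() {
    // protecting 5M of value with 1M of stake invites attack
    assert!(attack_is_rational(5_000_000, 1_000_000));
    // stake exceeding the protected value removes the incentive
    assert!(!attack_is_rational(500_000, 1_000_000));
    println!("ok");
}
```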

&lt;h3&gt;
  
  
  SDK Bugs
&lt;/h3&gt;

&lt;p&gt;Early adopters should expect SDK bugs and start with lower-value services.&lt;/p&gt;

&lt;p&gt;The SDK is software. It has bugs. Early adopters will find them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Mitigation:&lt;/strong&gt; Start with lower-value services. Monitor closely. Have incident response ready.&lt;/p&gt;

&lt;h2&gt;
  
  
  Real Blueprint Examples
&lt;/h2&gt;

&lt;p&gt;Production blueprints include FROST threshold signatures and cross-chain verification infrastructure.&lt;/p&gt;

&lt;h3&gt;
  
  
  Threshold Signatures (FROST)
&lt;/h3&gt;

&lt;p&gt;FROST enables distributed Schnorr signing with 5-of-7 operator threshold consensus.&lt;/p&gt;

&lt;p&gt;A production blueprint for distributed Schnorr signatures:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Job:&lt;/strong&gt; Sign a message with threshold key&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Operators:&lt;/strong&gt; 5 of 7 must participate&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Verification:&lt;/strong&gt; Signature verifies against known public key&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Slashing:&lt;/strong&gt; Invalid signature or non-participation&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Cross-Chain Infrastructure
&lt;/h3&gt;

&lt;p&gt;Blueprints power cross-chain message verification for LayerZero DVN and Hyperlane.&lt;/p&gt;

&lt;p&gt;Blueprints powering LayerZero DVN and Hyperlane:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Job:&lt;/strong&gt; Verify cross-chain message&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Operators:&lt;/strong&gt; Multiple independent verifiers&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Verification:&lt;/strong&gt; Consensus on message validity&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Slashing:&lt;/strong&gt; Incorrect verification&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What's Missing
&lt;/h2&gt;

&lt;p&gt;The SDK works but IDE support, error messages, and documentation still have rough edges typical of early-stage infrastructure.&lt;/p&gt;

&lt;p&gt;Honest gaps in the current developer experience:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;IDE support is minimal.&lt;/strong&gt; No VSCode extension with autocomplete, no inline documentation. You're reading docs and source code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Error messages could be better.&lt;/strong&gt; Some SDK errors are cryptic. We're improving them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Documentation has gaps.&lt;/strong&gt; Some advanced features are documented only in code comments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tooling is young.&lt;/strong&gt; The CLI works but isn't polished. Expect rough edges.&lt;/p&gt;

&lt;p&gt;We're a small team shipping fast. The core functionality works. The developer experience is improving but not yet where we want it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting Started
&lt;/h2&gt;

&lt;p&gt;Install the cargo-tangle CLI, scaffold a project, and have a local blueprint running in under 10 minutes.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Read the docs:&lt;/strong&gt; &lt;a href="https://docs.tangle.tools" rel="noopener noreferrer"&gt;docs.tangle.tools&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Clone an example:&lt;/strong&gt; &lt;a href="https://github.com/tangle-network/blueprint/tree/main/examples" rel="noopener noreferrer"&gt;github.com/tangle-network/blueprint&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Join Discord:&lt;/strong&gt; &lt;a href="https://discord.gg/cv8EfJu3Tn" rel="noopener noreferrer"&gt;discord.gg/cv8EfJu3Tn&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Start small:&lt;/strong&gt; Build something simple first. Learn the patterns.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The best way to understand Tangle is to build something on it. The second-best way is to ask questions in Discord. We're small enough that you'll talk to people who wrote the code.&lt;/p&gt;

&lt;h2&gt;
  
  
  Frequently Asked Questions
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;How do I build a blueprint on Tangle?&lt;/strong&gt;&lt;br&gt;
Install the cargo-tangle CLI, run &lt;code&gt;cargo tangle blueprint create --name my-service&lt;/code&gt;, write your async job functions in Rust, and wire them into a router with &lt;code&gt;TangleLayer&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is TangleArg and TangleResult?&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;TangleArg&amp;lt;T&amp;gt;&lt;/code&gt; is an extractor that parses ABI-encoded job input into a typed Rust value. &lt;code&gt;TangleResult&amp;lt;T&amp;gt;&lt;/code&gt; wraps your return value for on-chain submission.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How do I deploy a blueprint?&lt;/strong&gt;&lt;br&gt;
Build locally, test with &lt;code&gt;cargo tangle blueprint run --protocol tangle&lt;/code&gt;, deploy to testnet with &lt;code&gt;cargo tangle blueprint deploy --target tangle --network testnet&lt;/code&gt;, and promote to mainnet with &lt;code&gt;--network mainnet&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How does Tangle handle multi-operator jobs?&lt;/strong&gt;&lt;br&gt;
You configure how many operators must submit matching results at the contract level. The aggregation service collects results, verifies BLS signatures, and finalizes only when the threshold is met.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What testing tools does Blueprint SDK provide?&lt;/strong&gt;&lt;br&gt;
The SDK provides unit test utilities via &lt;code&gt;blueprint_sdk::testing&lt;/code&gt;, local blueprint execution with &lt;code&gt;cargo tangle blueprint run&lt;/code&gt;, integration test helpers for aggregation flows, and debugging with &lt;code&gt;cargo tangle blueprint debug&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What programming language is required for Tangle blueprints?&lt;/strong&gt;&lt;br&gt;
Blueprints are written in Rust using the Blueprint SDK (version 0.1.0-alpha.22+), requiring Rust 1.88+ with the 2024 edition.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How do blueprint developers earn revenue?&lt;/strong&gt;&lt;br&gt;
Blueprint developers receive a configurable share of every transaction processed by their blueprint, paid automatically by the service contract with no invoicing or manual settlement.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;p&gt;The next post covers AI services specifically: how to build verified inference and sandboxed code execution on Tangle.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Links:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://discord.gg/cv8EfJu3Tn" rel="noopener noreferrer"&gt;Discord&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://twitter.com/tangle_network" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.tangle.tools" rel="noopener noreferrer"&gt;Docs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/tangle-network/blueprint" rel="noopener noreferrer"&gt;Blueprint SDK&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/tangle-network/blueprint/tree/main/examples" rel="noopener noreferrer"&gt;Example Blueprints&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>blockchain</category>
      <category>rust</category>
      <category>web3</category>
      <category>ai</category>
    </item>
  </channel>
</rss>
