The threat to the software supply chain has always been there—what has changed is the shape of the vulnerability. We spent the last decade securing deterministic code, scanning for known CVEs, and locking down dependencies. Now, as organizations operationalize AI agents, the attack surface is silently shifting. The question is no longer whether we can scale these new workloads, but whether we can cryptographically verify a probabilistic, opaque model before it is allowed to execute.
The Reality Check in Amsterdam
If you spent any time walking the halls of KubeCon + CloudNativeCon EU 2026 this past March, you likely noticed a distinct shift in the security discourse.
For the past few years, Kubernetes security has focused heavily on shifting left: scanning container images, managing RBAC, and isolating workloads. But as the ecosystem industrializes large language models (LLMs) and agentic systems, traditional code vulnerability scanning is no longer enough.
The harsh reality is that AI models are probabilistic black boxes. A traditional CVE scanner cannot detect poisoned model weights, manipulated training data pipelines, or a subtle prompt injection vulnerability embedded deep within an agent’s toolset.
As we move from deterministic code to probabilistic agents, the security perimeter is shifting entirely. Provenance is the new perimeter.
If you cannot cryptographically prove exactly where an AI model came from, how it was trained, and what permissions its associated agent holds, you are not operating a secure platform. You are simply automating a massive liability.
The Forcing Function: The Cyber Resilience Act (CRA)
This shift in thinking isn't just driven by architectural purity; it is being forced by regulatory reality.
Looming over every security conversation in Amsterdam was the European Union's Cyber Resilience Act (CRA). From September 11, 2026, the CRA's mandatory vulnerability and incident reporting obligations take effect for manufacturers and open-source stewards alike (including foundations like the CNCF), with the regulation's full compliance requirements following in December 2027.
The CRA changes how open-source software is maintained and deployed. Generating Software Bills of Materials (SBOMs) is no longer an optional best practice; it is a legal requirement.
But how do you generate a Bill of Materials for a 70-billion-parameter neural network?
This is where the concept of the aiBOM (AI Bill of Materials) transitions from theory to necessity. An aiBOM tracks the lineage of a model, detailing its architecture, the datasets used for training, licensing, and known safety evaluations.
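To make the idea concrete, here is a minimal Python sketch of what an aiBOM record might capture. The JSON shape and field names are invented for illustration; real formats (such as CycloneDX, which added machine-learning component types in version 1.5) define their own schemas.

```python
import hashlib
import json

def weights_digest(weights: bytes) -> str:
    """Content-address the model weights so the aiBOM pins an exact artifact."""
    return "sha256:" + hashlib.sha256(weights).hexdigest()

def build_aibom(weights: bytes) -> dict:
    # Field names are illustrative, not a standardized schema.
    return {
        "model": {
            "name": "example-llm",                      # hypothetical model name
            "architecture": "decoder-only transformer",
            "parameters": "70B",
            "weightsDigest": weights_digest(weights),
        },
        "training": {
            "datasets": ["example-corpus-v2"],          # training-data lineage
            "license": "Apache-2.0",
        },
        "evaluations": ["example-safety-suite-v1"],     # known safety evaluations
    }

if __name__ == "__main__":
    fake_weights = b"\x00" * 1024  # stand-in for a real weights file
    print(json.dumps(build_aibom(fake_weights), indent=2))
```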
At KubeCon, it became clear that enterprises will soon refuse to deploy AI workloads that lack a cryptographically signed aiBOM. The risk is simply too high.
The Architecture of Trust: How Cloud-Native is Adapting
The most encouraging takeaway from KubeCon 2026 is that the cloud-native ecosystem is not trying to invent a completely new security paradigm for AI. Instead, it is actively adapting the battle-tested container security stack to handle machine learning artifacts.
Here is what the emerging architecture of trust looks like for agentic supply chains:
01. Packaging: CNCF ModelPack
Historically, AI models have been distributed through fragmented, proprietary channels or raw object storage, making them notoriously difficult for standard CI/CD pipelines to manage.
The CNCF’s ModelPack project is solving this by standardizing the packaging and distribution of AI/ML models as OCI-compliant (Open Container Initiative) artifacts. By treating a massive LLM exactly like a standard Docker container image, platform teams can suddenly use their existing image registries, caching layers, and security scanners to handle AI infrastructure.
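To see why content addressing matters here, consider a minimal sketch of the idea: the weights become a digest-addressed blob inside an OCI-style manifest. The custom media types below are hypothetical placeholders, not ModelPack's actual identifiers, and a real workflow would push through an OCI registry client rather than printing JSON.

```python
import hashlib
import json

def digest(data: bytes) -> str:
    return "sha256:" + hashlib.sha256(data).hexdigest()

weights = b"\x00" * 1024  # stand-in for real model weights

manifest = {
    "schemaVersion": 2,
    "mediaType": "application/vnd.oci.image.manifest.v1+json",
    "config": {
        "mediaType": "application/vnd.example.model.config.v1+json",  # hypothetical
        "digest": digest(b"{}"),
        "size": 2,
    },
    "layers": [{
        # The weights become a content-addressed blob, just like an image
        # layer, so existing registries, caches, and scanners can handle them.
        "mediaType": "application/vnd.example.model.weights.v1",      # hypothetical
        "digest": digest(weights),
        "size": len(weights),
    }],
}
print(json.dumps(manifest, indent=2))
```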
02. Attestation: Sigstore and SLSA
Once a model is packaged, its provenance must be verified. Just as developers use Sigstore (specifically Cosign) to cryptographically sign container images, the ecosystem is extending this to sign AI models.
By mapping AI pipelines to the SLSA (Supply-chain Levels for Software Artifacts) framework and using tools like in-toto to generate attestations, platform teams can cryptographically prove that a model was not tampered with between the training cluster and the production inference server.
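What gets signed is typically an in-toto Statement wrapping a SLSA provenance predicate. The sketch below shows the rough shape of that payload, with illustrative registry paths, digests, and builder IDs; in practice, tooling such as Cosign generates and signs it for you.

```python
import json

# An in-toto Statement: the subject pins the packaged model by digest,
# and the predicate records SLSA-style provenance. Values are illustrative.
statement = {
    "_type": "https://in-toto.io/Statement/v1",
    "subject": [{
        "name": "registry.example.com/models/example-llm",  # hypothetical registry path
        "digest": {"sha256": "aaaa...snip"},                 # digest of the OCI artifact
    }],
    "predicateType": "https://slsa.dev/provenance/v1",
    "predicate": {
        "buildDefinition": {
            "buildType": "https://example.com/training-pipeline/v1",  # hypothetical
            "externalParameters": {"dataset": "example-corpus-v2"},
        },
        "runDetails": {
            "builder": {"id": "https://example.com/training-cluster"},  # hypothetical
        },
    },
}
print(json.dumps(statement, indent=2))
```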
03. Enforcement: Kyverno and OPA Gatekeeper
Attestations mean nothing without enforcement.
This is where Kubernetes admission controllers step in. Projects like Kyverno (which officially reached Graduated status during KubeCon) and OPA Gatekeeper act as the ultimate bouncers at the door of your cluster.
The emerging operational pattern is strict: if a deployment manifest attempts to spin up an AI agent, the admission controller intercepts it. It checks the OCI registry for the model, verifies the Sigstore cryptographic signature, and validates the attached aiBOM. If any of these checks fail—or if the model is unsigned—the deployment is blocked before a single GPU cycle is wasted.
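The decision logic itself is deny-by-default gating. Here is a toy Python rendering of the checks; in a real cluster this lives in a Kyverno verifyImages rule or a Gatekeeper constraint rather than hand-rolled code, and the function and parameter names are purely illustrative.

```python
def admit(model_ref: str,
          signature_valid: bool,
          aibom_attached: bool,
          aibom_verified: bool) -> tuple[bool, str]:
    """Deny-by-default: every check must pass before the pod is scheduled."""
    if not signature_valid:
        return False, f"{model_ref}: Sigstore signature missing or invalid"
    if not aibom_attached:
        return False, f"{model_ref}: no aiBOM attestation attached"
    if not aibom_verified:
        return False, f"{model_ref}: aiBOM attestation failed verification"
    return True, f"{model_ref}: admitted"

# A deployment referencing an unsigned model never reaches a GPU:
allowed, reason = admit("models/example-llm@sha256:aaaa", False, True, True)
print(allowed, reason)
# -> False models/example-llm@sha256:aaaa: Sigstore signature missing or invalid
```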
The Next Frontier: Governing Agents via MCP
While securing the model weights is the first step, governing the actions of the agents using those models is the true frontier.
This was the central focus of the CNCF’s inaugural Agentics Day, a packed half-day co-located event in Amsterdam dedicated entirely to AI agents and the Model Context Protocol (MCP).
The consensus on the ground was clear: deploying agents is now a solved infrastructure problem. The hard part is authorization.
When an agent hallucinates, what is its blast radius? If an agent is granted access to a database via an MCP tool, how do we ensure it doesn't execute destructive commands?
The solutions discussed centered on Sandbox Operators, which enable session-aware, isolated execution environments within Kubernetes. Rather than giving an agent direct access to infrastructure, the agent requests an action, and the Kubernetes control plane executes that action within a tightly governed, ephemeral sandbox.
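The brokering pattern is easier to see in miniature. In the toy sketch below (all names and the policy shape are hypothetical), the agent submits a request instead of holding credentials, and the broker rejects anything outside a per-tool allowlist before a sandbox ever runs it.

```python
from dataclasses import dataclass

@dataclass
class ActionRequest:
    tool: str          # e.g. an MCP tool name
    operation: str     # e.g. "SELECT", "INSERT", "DROP"
    session_id: str    # ties the action to one agent session

# Policy: per-tool allowlist of operations. Destructive verbs are absent,
# so a hallucinated "DROP TABLE" is rejected before execution.
POLICY = {"orders-db": {"SELECT"}}

def broker(req: ActionRequest) -> str:
    allowed = POLICY.get(req.tool, set())
    if req.operation not in allowed:
        return f"denied: {req.operation} on {req.tool} (session {req.session_id})"
    # In the real pattern, the control plane would now launch an ephemeral,
    # isolated sandbox (e.g. a short-lived pod) and execute the action there.
    return f"executed in sandbox: {req.operation} on {req.tool}"

print(broker(ActionRequest("orders-db", "SELECT", "s-123")))
print(broker(ActionRequest("orders-db", "DROP", "s-123")))
```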
The North Star
We are entering an era where infrastructure is simultaneously becoming more autonomous and more heavily regulated.
The integration of ModelPack, Sigstore, Kyverno, and MCP represents the maturity of the cloud-native AI stack. We are finally moving past the artisanal, experimental phase of machine learning and treating AI like standard, auditable software.
As the September 2026 CRA deadlines approach, platform teams need to ask themselves a fundamental question:
Do we know exactly what our AI is executing, where it came from, and how to prove it?
If the answer is no, it is time to start building your provenance perimeter.
References & Resources
To explore the frameworks, regulations, and open-source projects shaping the agentic supply chain discussed in this article, refer to the following resources:
Regulatory & Standards
- European Cyber Resilience Act (CRA) – Official documentation on the EU’s upcoming mandatory cybersecurity requirements for hardware and software products.
- SLSA (Supply-chain Levels for Software Artifacts) – A security framework providing a checklist of standards and controls to prevent tampering, improve integrity, and secure packages and infrastructure.
- Software/AI Bill of Materials (SBOM/aiBOM) – CISA's official overview of SBOMs and their foundational role in software transparency and supply chain security.
Cloud-Native Security Tooling
- Sigstore – A standard for signing, verifying, and protecting software, making cryptographic signing of container images and ML artifacts accessible.
- in-toto – A framework to secure the integrity of software supply chains by cryptographically ensuring that end-to-end policies are verified.
- Kyverno – A Kubernetes-native policy engine designed for declarative policy management and admission control.
- OPA Gatekeeper – A customizable admission webhook for Kubernetes that enforces policies executed by the Open Policy Agent.
AI & Agentic Protocols
- Model Context Protocol (MCP) – An open standard that enables developers to build secure, two-way connections between AI agents/models and external data sources or infrastructure tools.
- Cloud Native Computing Foundation (CNCF) – The open-source hub hosting KubeCon and driving the standardization of cloud-native AI and security patterns.
This article draws from sessions and discussions at KubeCon + CloudNativeCon EU 2026, including Agentics Day, Open Source SecurityCon, and contributions from the CNCF TAG Security community.
By Soumia, a developer advocate focused on making complex infrastructure legible — through writing, speaking, and helping technical and non-technical audiences find common ground. I work at the intersection of cloud-native systems, AI, and editorial craft. — LinkedIn · Portfolio