AI isn't coming to your software teams. It's already there. Developers are running local models, pulling AI-optimized images, connecting autonomous agents to codebases and cloud APIs, and integrating AI tools into every stage of the development lifecycle. The question for security, platform, and executive leadership isn't whether to allow it. It's whether you govern it or pretend it isn't happening.
The risks are well-documented: unpredictable inference costs, unvetted images and tools entering the supply chain, autonomous agents with write access to production systems, and no audit trail across any of it. Without a deliberate architecture, this becomes Shadow AI.
Docker's recent AI-focused releases address these challenges directly. Here's how they map to the concerns platform and security teams are navigating right now.
The Challenges (and What Addresses Them)
1. "AI inference costs are unpredictable and growing fast."
Docker Model Runner + Remocal/MVM + Docker Offload
Docker's "Remocal" approach pairs local-first development with Minimum Viable Models (MVMs), the smallest models that get the job done. (Docker, "Remocal + Minimum Viable Models") Docker Model Runner executes these locally through standard APIs (OpenAI-compatible and Ollama-compatible) with three inference engines. (Docker Docs, "Model Runner") Developers iterate locally at zero marginal token cost and only hit cloud APIs when they need to.
When local hardware isn't enough, Docker Offload extends the same workflow to cloud infrastructure (L4 GPU currently in beta) without changing a single command. (Docker, "Docker Offload") The cost lever is clear: local by default, cloud when justified.
2. "Autonomous agents with write access terrify our security team."
Docker Sandboxes
This is the answer to the "but what if the agent goes rogue" conversation. Each sandbox runs in a dedicated microVM with its own kernel, filesystem, and private Docker daemon. The agent can build, install, test, and run containers, all without any access to the host environment. Only the project workspace is mounted. When you tear down the sandbox, everything inside it is deleted. (Docker Docs, "Sandboxes Architecture")
This is hypervisor-level isolation, not container-level. Sandboxes already support Claude Code, Codex, Copilot, Gemini, cagent, Kiro, OpenCode, and custom shell. (Docker Docs, "Sandbox Agents") For standard (non-agent) containers, Enhanced Container Isolation (ECI) provides complementary protection using Linux user namespaces. (Docker Docs, "Enhanced Container Isolation")
3. "Developers are connecting agents to GitHub, Jira, and databases with no oversight."
MCP Gateway + MCP Catalog
The open-source MCP Gateway runs every tool server in an isolated container with restricted privileges, network controls, and resource limits. It manages credential injection (so API keys don't live in developer configs), and it includes built-in logging and call tracing. Every tool invocation is recorded. (Docker Docs, "MCP Gateway"; Docker, "MCP Gateway: Secure Infrastructure for Agentic AI")
The MCP Catalog provides 300+ curated, verified tool servers packaged as Docker images. Organizations can create custom catalogs scoped to their approved servers, turning "find a random MCP server on the internet" into "pick from the approved list." Docker is also applying automated trust measures including structured review of incoming changes. (Docker Docs, "MCP Catalog")
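The unit the gateway governs and logs is the MCP tool call, which on the wire is a JSON-RPC 2.0 request with method `tools/call`. The sketch below builds one such request so it's concrete what "every tool invocation is recorded" refers to; the tool name and arguments are hypothetical examples, not a real server's schema.

```python
def mcp_tool_call(call_id: int, tool: str, arguments: dict) -> dict:
    """Build an MCP tools/call request (JSON-RPC 2.0) — the unit of work
    the MCP Gateway restricts, traces, and logs."""
    return {
        "jsonrpc": "2.0",
        "id": call_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }


# Hypothetical example: an agent asking a Jira MCP server for an issue.
request = mcp_tool_call(1, "get_issue", {"issue_key": "PROJ-123"})
```

Because every invocation funnels through this one shape, a gateway sitting between agent and server can enforce allowlists on `params.name` and scan `arguments` for leaked credentials before anything reaches the tool.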
4. "We can't control what our developers are pulling and running."
Docker Hardened Images + Registry Access Management + Image Access Management
Docker Hardened Images (DHI) are distroless, minimal base images stripped of shells, package managers, and unnecessary components. Every image ships with an SBOM, SLSA Build Level 3 provenance, and transparent CVE data. (Docker, "Introducing Docker Hardened Images") DHI is now free and open source (Apache 2.0) with over 1,000 images available, which removes the "it's too expensive to do the right thing" objection. (Docker Press Release, December 17, 2025)
Registry Access Management (RAM) provides DNS-level filtering to control which registries developers can access through Docker Desktop. (Docker Docs, "Registry Access Management") Image Access Management adds controls over which types of Docker Hub images are permitted. (Docker Docs, "Image Access Management") Together, they let your platform team enforce approved sources without slowing anyone down.
This isn't just for application images. Docker is actively extending hardening to MCP server images, the tools AI agents use to interact with external systems. (Docker, "Hardened Images for Everyone")
5. "We need an audit trail and we need it yesterday."
Docker Scout + MCP Gateway logging
Docker Scout provides continuous SBOM and vulnerability analysis across container images in the stack: DHI base images, application images, and MCP server images. (Docker Docs, "Docker Scout") MCP Gateway logging captures tool-call details with support for signature verification (checking image provenance before use) and secret blocking (scanning payloads for exposed credentials). (Docker, "MCP Gateway: Secure Infrastructure for Agentic AI"; GitHub, docker/mcp-gateway)
Together, these answer the three questions auditors will ask: What's running? Is it safe? What did the agent do?
6. "We can't enforce any of this without knowing who's who."
SSO + SCIM
Identity is the layer that makes all the others enforceable. RAM policies only activate when developers sign in with organization credentials. Image Access Management is scoped to authenticated users. Audit trails are meaningless without verified identities attached.
SSO authenticates via your existing identity provider. SCIM automates provisioning and deprovisioning. When someone joins or leaves, their Docker access updates automatically. (Docker Docs, "Single Sign-On")
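SCIM provisioning is a standard REST protocol (RFC 7644), so the payload your identity provider sends when someone joins is predictable. The sketch below builds a minimal SCIM 2.0 user record of the kind an IdP would POST to a `/Users` endpoint; the field values are illustrative, and real deployments are configured in the IdP rather than hand-coded.

```python
SCIM_USER_SCHEMA = "urn:ietf:params:scim:schemas:core:2.0:User"


def scim_user_payload(
    user_name: str, given: str, family: str, active: bool = True
) -> dict:
    """Build a minimal SCIM 2.0 user-provisioning payload (RFC 7644)."""
    return {
        "schemas": [SCIM_USER_SCHEMA],
        "userName": user_name,
        "name": {"givenName": given, "familyName": family},
        "active": active,
    }


# Deprovisioning is the same record with active=False — which is how
# "someone leaves, access updates automatically" works mechanically.
offboarded = scim_user_payload("jdoe@example.com", "Jane", "Doe", active=False)
```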
What This Looks Like Composed
| Outcome | Docker Tool(s) | Why It Matters |
|---|---|---|
| Lower AI spend + faster iteration | Docker Model Runner + Remocal/MVM + Docker Offload | Run more of the dev loop locally to reduce paid API calls and latency during iteration. |
| Safe autonomy for agents | Docker Sandboxes | MicroVM isolation + fast reset reduces host risk and cleanup time when agents misbehave. |
| Governed tool access | Docker's MCP Catalog + Toolkit (including MCP Gateway) | Centralize tool servers, apply restrictions, and capture logs/traces for visibility. |
| Stronger supply-chain posture | Docker Hardened Images + RAM + Image Access Management | Standardize hardened bases and prevent pulling from unapproved sources. |
| Fewer vuln/audit fire drills | Docker Scout + MCP Gateway logging | Continuous SBOM and CVE visibility + tool-call logs improves triage and audit readiness. |
| Identity-based policy enforcement | SSO + SCIM | Tie governance controls and audit trails to verified, managed identities across every layer. |
| Faster CI + hardened non-agent containers | Docker Build Cloud + Enhanced Container Isolation (ECI) | Reduce build bottlenecks and strengthen isolation for everyday containers. |
The Seven-Layer Architecture
For teams ready to go deeper, here is a reference architecture that composes these capabilities into seven concurrent layers, each mapped to one of the problems above.
| Layer | Docker Tool(s) | What It Does |
|---|---|---|
| Foundation | Docker Hardened Images + RAM + Image Access Management | Hardened/minimal base images; registry allowlisting and image-type controls |
| Definition | cagent | Declarative YAML agent configs with root/sub-agent orchestration |
| Inference | Docker Model Runner + Remocal/MVM | Local-first model execution with Minimum Viable Models; Docker Offload for cloud burst |
| Execution | Docker Sandboxes | MicroVM isolation with a private Docker daemon per agent |
| External Access | MCP Gateway + MCP Catalog | Governed, containerized tool servers with credential injection and call tracing |
| Observability | Docker Scout + MCP Gateway logging | Continuous SBOM/CVE analysis; tool-call audit trails |
| Identity | SSO + SCIM | Authentication, user provisioning, and identity-based policy enforcement |
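To make the Definition layer concrete: cagent configs are declarative YAML describing a root agent and the sub-agents it can delegate to. The fragment below is an illustrative sketch — the field names follow cagent's published examples, but verify them against the current schema, and the model/provider values are placeholders.

```yaml
# Illustrative cagent config: a root agent delegating to one sub-agent.
agents:
  root:
    model: gpt
    description: Coordinates work and delegates research tasks
    instruction: |
      Answer directly when you can; hand research questions to the researcher.
    sub_agents:
      - researcher
  researcher:
    model: gpt
    description: Looks up background information
    instruction: |
      Gather facts and report back concisely.

models:
  gpt:
    provider: openai
    model: gpt-4o
```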
For the full architecture walkthrough, including how each layer connects, read the companion overview: From Shadow AI to Enterprise Asset: A Seven-Layer Reference Architecture for Docker's AI Stack.
How I Wrote This Article
This post was produced through a multi-stage process combining human research and writing with AI tools. I spent a week studying Docker's AI-focused releases, built the architectural framework, then used AI tools (Gemini, ChatGPT, and Claude) iteratively for drafting, fact-checking, and structural review. For the full methodology, see the "How I Wrote This" section of my deep dive into these concepts: From Shadow AI to Enterprise Asset: A Seven-Layer Reference Architecture for Docker's AI Stack - The Deep Dive.