Raju Dandigam
Your AI Agent Dockerfile Might Be Leaking Secrets

Introduction

Dockerfiles are often treated as boring infrastructure files. We copy a working example, adjust a few commands, install dependencies, and move on. That is understandable, but it is also where many security mistakes begin.

This risk becomes more important when we build AI-enabled Node.js applications. A modern AI app may depend on private npm packages, internal SDKs, GitHub repositories, model provider credentials, MCP server configuration, or private build-time assets. If we are not careful, tokens used during the Docker build can accidentally become part of the image history, image layers, build logs, or final runtime environment.

Docker Build Secrets solve one specific problem: passing sensitive values to the build process without baking them into the final image. Docker's documentation is clear that build arguments and environment variables are not appropriate for secrets because they can persist in the final image, while secret mounts and SSH mounts are designed for securely exposing sensitive data only during a build step.

This article focuses on the practical Node.js and AI-agent case: installing private packages, accessing private repositories, and avoiding the common mistake of treating API keys as normal Dockerfile variables.

The Common Mistake

A common Dockerfile pattern looks like this:

FROM node:22-slim

WORKDIR /app

ARG NPM_TOKEN
ENV NPM_TOKEN=$NPM_TOKEN

COPY package*.json ./

RUN npm config set //registry.npmjs.org/:_authToken=$NPM_TOKEN \
  && npm ci

COPY . .

RUN npm run build

CMD ["node", "dist/index.js"]

At first, this looks reasonable. The build needs an npm token to install private packages, so the token is passed as an argument and used during npm ci.

The problem is that ARG and ENV were not designed for secrets. Build arguments can be recorded in the image history, and ENV values persist in the image configuration, so anyone who can pull the image may recover them with commands like docker history or docker inspect. Even if the final container runs fine, the image now carries more information than intended.

This gets worse when developers use the same pattern for AI credentials:

ARG OPENAI_API_KEY
ENV OPENAI_API_KEY=$OPENAI_API_KEY

That is usually the wrong place for a model provider key. An OpenAI key, Anthropic key, GitHub token, or MCP server credential should normally be a runtime secret, not a build-time value. The build process usually does not need it. The running application does.

Why AI Apps Make This Easier to Get Wrong

AI applications often blur the boundary between build time and runtime. A regular Node.js API may only need dependencies during build and database credentials during runtime. An AI-agent application may also need tool credentials, private package access, GitHub access, prompt assets, evaluation data, and model provider keys.

That complexity leads to shortcuts. A developer may add a token to the Dockerfile just to make the build pass. An AI coding assistant may generate a Dockerfile that uses ARG because it looks simple. A CI workflow may pass secrets directly into build arguments because it is easy to wire up.

The safer habit is to ask one question before adding any secret to a Docker build: does this value need to exist while building the image, or only when running the container?

If the secret is needed to install a private npm package, clone a private repository, or download a private build asset, it may be a build secret. If the secret is needed to call a model provider, connect to a database, access an MCP tool, or call an external API at runtime, it should be passed when the container runs.

The Safer Pattern: Build Secrets

Docker BuildKit supports secret mounts. A secret mount exposes a value as a temporary file during a specific RUN instruction. By default, Docker mounts secrets under /run/secrets, and the secret is not automatically copied into the final image unless your command explicitly writes it somewhere permanent. Docker describes this as a two-step process: pass the secret into docker build, then consume it inside the Dockerfile using a secret mount.

Here is a safer version for installing private npm packages:

# syntax=docker/dockerfile:1.7

FROM node:22-slim AS build

WORKDIR /app

COPY package*.json ./

RUN --mount=type=secret,id=npm_token \
  npm config set //registry.npmjs.org/:_authToken="$(cat /run/secrets/npm_token)" \
  && npm ci \
  && npm config delete //registry.npmjs.org/:_authToken

COPY . .

RUN npm run build

FROM node:22-slim AS runtime

WORKDIR /app

ENV NODE_ENV=production

COPY --from=build /app/dist ./dist
COPY --from=build /app/package*.json ./

# The runtime stage still resolves private packages, so it mounts the
# same secret for its own install step; nothing is written to a layer.
RUN --mount=type=secret,id=npm_token \
  npm config set //registry.npmjs.org/:_authToken="$(cat /run/secrets/npm_token)" \
  && npm ci --omit=dev \
  && npm config delete //registry.npmjs.org/:_authToken

CMD ["node", "dist/index.js"]

Then build the image like this:

docker build \
  --secret id=npm_token,env=NPM_TOKEN \
  -t ai-agent-api:local .

In this example, the npm token is available only during the RUN instruction that installs dependencies. It is not declared with ARG, not promoted to ENV, and not needed in the runtime image.

Architecture in One View

The important distinction is that build secrets and runtime secrets solve different problems. Build secrets help the image build safely. Runtime secrets help the container run safely.

[Diagram: architecture flow — build secrets feed the image build, runtime secrets feed the running container]

GitHub Actions Example

Docker also documents secret mounts and SSH mounts for GitHub Actions builds. Secret mounts expose values as files during the build container step, while SSH mounts expose SSH agent sockets or keys for operations such as cloning private repositories.

Here is a simple GitHub Actions workflow using Docker's Build Push Action:

name: Build Docker Image

on:
  pull_request:
  push:
    branches:
      - main

jobs:
  docker-build:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v4

      - uses: docker/setup-buildx-action@v3

      - uses: docker/build-push-action@v6
        with:
          context: .
          push: false
          tags: ai-agent-api:ci
          secrets: |
            npm_token=${{ secrets.NPM_TOKEN }}

The matching Dockerfile can read the secret as /run/secrets/npm_token:

RUN --mount=type=secret,id=npm_token \
  npm config set //registry.npmjs.org/:_authToken="$(cat /run/secrets/npm_token)" \
  && npm ci \
  && npm config delete //registry.npmjs.org/:_authToken

This is much safer than passing the npm token as a build argument.

What About SSH Keys?

Sometimes the build needs to pull code from a private Git repository. For that, SSH mounts are usually a better fit than copying a private key into the image:

# syntax=docker/dockerfile:1.7

FROM node:22-slim AS build

WORKDIR /app

RUN apt-get update \
  && apt-get install -y --no-install-recommends git openssh-client \
  && rm -rf /var/lib/apt/lists/*

# Trust GitHub's host key so the clone does not fail host verification
RUN mkdir -p -m 0700 ~/.ssh \
  && ssh-keyscan github.com >> ~/.ssh/known_hosts

RUN --mount=type=ssh \
  git clone git@github.com:your-org/private-agent-tools.git tools

Build it with SSH forwarding enabled:

docker build --ssh default -t ai-agent-api:local .

The SSH key is not copied into the image. The build step gets temporary access through the SSH mount.

What Should Not Be a Build Secret

Not every secret belongs in docker build --secret.

Model provider keys are usually runtime secrets. If your Node.js application calls a model API when it runs, pass the key at runtime:

docker run \
  -e OPENAI_API_KEY="$OPENAI_API_KEY" \
  ai-agent-api:local
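Inside the application, it helps to validate that runtime variable at startup so a missing key fails fast instead of surfacing later as a confusing API error. A minimal sketch, assuming the same OPENAI_API_KEY variable as the docker run example above; the loadConfig helper is illustrative, not a library API:

```javascript
// config.js -- fail fast if required runtime secrets are missing.
// loadConfig and the variable list are illustrative assumptions.
function loadConfig(env = process.env) {
  const required = ["OPENAI_API_KEY"];
  const missing = required.filter((name) => !env[name]);
  if (missing.length > 0) {
    // Name the missing variables, but never log their values.
    throw new Error(`Missing required environment variables: ${missing.join(", ")}`);
  }
  return { openaiApiKey: env.OPENAI_API_KEY };
}

module.exports = { loadConfig };
```

Throwing at startup keeps the failure close to the misconfiguration, which is easier to debug than a 401 from the model provider minutes later.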

For local development, Docker Compose can read values from your environment or an ignored .env file:

services:
  app:
    image: ai-agent-api:local
    environment:
      OPENAI_API_KEY: ${OPENAI_API_KEY}
      MCP_GITHUB_TOKEN: ${MCP_GITHUB_TOKEN}

For production, use your platform's secret manager. That may be AWS Secrets Manager, Kubernetes Secrets, Docker Swarm secrets, GitHub environment secrets, or another managed secret store. The key idea is the same: runtime credentials should be provided to the running container, not baked into the image.

A Simple Checklist for Node.js AI Apps

Before committing a Dockerfile for an AI application, review it with these questions:

  • Does the Dockerfile use ARG or ENV for anything that looks like a token, key, password, or credential?
  • Does the build need the secret, or does only the running app need it?
  • Are private npm tokens passed through --secret instead of ARG?
  • Are SSH keys forwarded through --ssh instead of copied?
  • Does the final runtime image avoid .npmrc, private keys, local .env files, and unnecessary build artifacts?
  • Is .dockerignore excluding files such as .env, .npmrc, .git, logs, coverage output, and local test data?

A basic .dockerignore should usually include these files:

.env
.env.*
.npmrc
.git
node_modules
coverage
dist
*.log

Be careful with dist if your build process expects it from the host. In most production Docker builds, the image should build its own dist output inside the container.

How to Verify You Did Not Leak Something Obvious

You can inspect image history:

docker history ai-agent-api:local

You can also run a quick scan inside the image filesystem:

docker run --rm ai-agent-api:local sh -c "grep -rIl 'sk-' /app || true"

That command is not a full security scanner, but it can catch obvious mistakes. For serious workflows, use dedicated secret scanning and image scanning tools in CI.
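If you want something slightly more targeted than a grep, a short Node.js script can flag strings matching common credential prefixes. This is a minimal sketch, not a substitute for a real scanner; the patterns below (sk-, ghp_, npm_) are illustrative examples and far from exhaustive:

```javascript
// scan-secrets.js -- a minimal pattern check, not a real secret scanner.
// The prefixes below are illustrative assumptions about common key shapes.
const SECRET_PATTERNS = [
  /sk-[A-Za-z0-9_-]{16,}/g, // OpenAI-style keys
  /ghp_[A-Za-z0-9]{20,}/g,  // GitHub personal access tokens
  /npm_[A-Za-z0-9]{20,}/g,  // npm granular access tokens
];

function findSecrets(text) {
  const hits = [];
  for (const pattern of SECRET_PATTERNS) {
    for (const match of text.matchAll(pattern)) {
      hits.push(match[0]);
    }
  }
  return hits;
}

module.exports = { findSecrets };
```

Running something like this over extracted image layers in CI catches the obvious cases; dedicated tools handle entropy analysis and the long tail of token formats.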

This is not theoretical. A 2023 internet-wide study of container images found that exposed secrets in container images are a real issue, including private keys and API secrets discovered across public and private registries.

Conclusion

Docker Build Secrets are not complicated, but they require a clear mental model.

Use build secrets when the build process needs temporary access to sensitive data, such as private npm packages or private source repositories. Use runtime secrets when the running application needs credentials, such as OpenAI keys, GitHub tokens, database passwords, or MCP server credentials.

For AI-agent applications, this distinction matters even more. Agents often connect to powerful tools and sensitive systems. A leaked token can expose private repositories, model usage, customer data, internal APIs, or deployment workflows.

The safer pattern is simple:

  • Do not put secrets in ARG
  • Do not promote them to ENV inside the Dockerfile
  • Do not copy .env or .npmrc into the image
  • Use RUN --mount=type=secret for build-time secrets
  • Use --mount=type=ssh for private Git access
  • Pass runtime credentials through your runtime environment or secret manager

Your Dockerfile is part of your application's security boundary. Treat it that way, especially when the application is powered by AI and connected to real tools.
