🚀 Executive Summary
TL;DR: Next.js Docker builds often suffer from slow dependency installation due to improper Docker layer caching, particularly with pnpm. Optimizing Dockerfiles through multi-stage builds, or a quick switch to Bun, can cut build times by up to 66% by preserving cached dependency layers. This prevents unnecessary re-installation of packages on every code change, saving CI/CD costs and developer time.
🎯 Key Takeaways
- Naive Dockerfiles invalidate cache layers by copying the entire project context (`COPY . .`) before dependency installation, forcing repeated `pnpm install` runs even for minor code changes.
- Switching to Bun can provide an immediate, significant reduction in Next.js Docker build times due to its inherent speed and less sensitive caching behavior compared to a poorly configured pnpm setup.
- Multi-stage Docker builds are the recommended, package-manager-agnostic solution: they separate `package.json` and lock file copying from application code to create stable, cached dependency layers that only rebuild when dependencies change.
- For advanced optimization, Docker BuildKit's cache mounts can provide persistent pnpm store caching, ideal for large monorepos or performance-critical pipelines requiring maximum speed.
SEO Summary: Slash your Next.js Docker build times by up to 66% by optimizing your dependency installation strategy. Learn why your CI/CD pipeline is slow and discover three practical fixes, from a quick package manager swap to mastering multi-stage builds for long-term performance.
My CI Pipeline Was Bleeding Money: How We Slashed Our Next.js Docker Build Times by 66%
I still remember the night vividly. It was 10 PM, we had a critical hotfix for our main e-commerce platform, and the CI/CD pipeline was absolutely crawling. Every time a developer pushed a one-line CSS change, the whole 15-minute build process for our Next.js container would kick off from scratch. We were burning through runner minutes like they were free, our lead dev was getting increasingly frantic on Slack, and I was staring at the build logs on ci-runner-pool-03, watching the same pnpm fetch step repeat itself for the thousandth time. It’s a special kind of DevOps pain to know that 99% of the work your pipeline is doing is completely unnecessary, and it’s all because of one poorly understood command in a Dockerfile.
So, Why Is My Build So Slow? It’s All About The Cache.
I see this all the time. A team adopts a modern package manager like pnpm for its efficiency and disk space savings on local machines, which is great. But then they throw it into a naive Dockerfile, and all those benefits go right out the window. The core villain here isn’t pnpm itself; it’s how Docker builds images in layers and how we often break the caching mechanism.
Your typical, un-optimized Dockerfile probably has a line like this:
```dockerfile
# Copy the entire project context into the container
COPY . .
# Install dependencies
RUN pnpm install
# Build the app
RUN pnpm build
```
Here’s the problem: Docker’s layer cache is invalidated if the files in that layer change. The `COPY . .` command brings in everything: your source code, your `package.json`, your README, all of it. So, when a developer changes a single line in `app/page.tsx`, the `COPY` layer is invalidated. And because the `RUN pnpm install` command comes *after* it, its cache is also invalidated. The runner then has to re-fetch and re-install every single dependency, every single time. Ouch.
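Even before restructuring the Dockerfile, you can shrink the blast radius of `COPY . .` with a `.dockerignore` file, so local build artifacts never enter the build context in the first place. A minimal sketch (the exact entries depend on your project):

```
# .dockerignore (illustrative)
node_modules
.next
.git
*.log
Dockerfile
.dockerignore
```

This won't save you when actual source files change, but it keeps `node_modules` and `.next` from bloating the context and needlessly invalidating layers.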
A Note on pnpm vs. Other Managers: This problem is particularly painful with pnpm because its symlink-based approach (`node_modules/.pnpm`) can be more complex to cache correctly inside a standard Docker build compared to the flatter structures of npm or Yarn. This is why a simple swap can sometimes yield such dramatic results, but it’s often just masking the underlying structural issue in the Dockerfile.
Three Ways to Fix It: From Quick Hacks to Permanent Solutions
Okay, enough theory. Let’s get our hands dirty. Here are three distinct strategies I’ve used to tackle this, ranging from a quick tactical win to a robust, long-term architectural fix.
Fix #1: The Quick Fix (The ‘Bun’ Method)
This is the fix that sparked the original Reddit discussion, and for good reason: it’s often the fastest way to see a massive improvement with minimal code changes. Bun is an all-in-one toolkit, and its package manager is incredibly fast. More importantly for our Docker problem, its installation behavior can be less sensitive to the caching issues that plague a naive pnpm setup.
By simply swapping out the base image and the run commands, you can often sidestep the worst of the caching issues.
Your Dockerfile might change from this (pnpm):
```dockerfile
FROM node:20-slim
WORKDIR /app
COPY . .
RUN npm i -g pnpm
RUN pnpm install --frozen-lockfile
RUN pnpm build
# ... rest of the file
```
To this (bun):
```dockerfile
# Use the official bun image
FROM oven/bun:1
WORKDIR /app
COPY . .
# Bun install is incredibly fast
RUN bun install --frozen-lockfile
RUN bun run build
# ... rest of the file
```
Darian’s Take: Look, I’m not going to lie. This is a hacky fix, but it’s a very effective hack. If you’re in a firefight and need to stop the bleeding on your CI bill *right now*, this is a fantastic option. Just be aware that you’re treating the symptom, not the disease. You haven’t fixed your Docker layer caching strategy.
Fix #2: The “Right Way” – Mastering Multi-Stage Builds
This is the solution I push my teams to implement. It’s package-manager agnostic and the correct way to structure a Dockerfile for any Node.js project. The goal is to separate the dependency installation from the source code changes. We create layers that only change when they absolutely have to.
The magic is in copying only the package.json and lock files first, installing dependencies to create a stable, cached layer, and *then* copying your application source code.
Here’s a proper, multi-stage Dockerfile using pnpm:
```dockerfile
# === Stage 1: Dependencies ===
FROM node:20-slim AS deps
WORKDIR /app
# Install pnpm
RUN npm i -g pnpm
# Copy ONLY the files needed for dependency installation
COPY package.json pnpm-lock.yaml ./
# Install dependencies. This layer is cached as long as the lock file doesn't change.
RUN pnpm fetch
RUN pnpm install -r --offline

# === Stage 2: Builder ===
FROM node:20-slim AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
# Make sure node_modules is in .dockerignore so this COPY doesn't clobber it
COPY . .
# We need pnpm again in this stage
RUN npm i -g pnpm
RUN pnpm build

# === Stage 3: Production Runner ===
FROM node:20-slim AS runner
WORKDIR /app
ENV NODE_ENV=production
# These paths require Next.js standalone output (output: 'standalone' in next.config.js)
COPY --from=builder /app/public ./public
COPY --from=builder /app/.next/standalone ./
COPY --from=builder /app/.next/static ./.next/static
# Expose port and run the app
EXPOSE 3000
CMD ["node", "server.js"]
```
With this setup, changing your application code will only invalidate the cache from the `COPY . .` line in the builder stage onwards. The expensive `pnpm install` step in the `deps` stage remains cached and untouched. This is the bread and butter of professional Docker image optimization.
Fix #3: The ‘Nuclear’ Option – Advanced Caching with BuildKit
Sometimes, even with a multi-stage build, you might have a complex monorepo or a need to eke out every last second of performance. This is where you bring out the heavy artillery: Docker BuildKit’s cache mounts. This feature allows you to mount a cache directory that persists across multiple builds.
This is perfect for pnpm, as we can give it a persistent global store (.pnpm-store), making subsequent installs almost instantaneous.
To use it, you need to enable BuildKit (it’s the default on modern Docker versions) and modify your RUN command:
```dockerfile
# Make sure you are running with BuildKit enabled
# DOCKER_BUILDKIT=1 docker build .

# Inside your Dockerfile...
# Point pnpm's content-addressable store at the path we will cache-mount.
# (By default pnpm stores packages elsewhere, and the mount would go unused.)
RUN npm i -g pnpm && pnpm config set store-dir /pnpm/store

# Copy package manifests
COPY package.json pnpm-lock.yaml ./

# Run install with a cache mount pointed to pnpm's store
RUN --mount=type=cache,id=pnpm,target=/pnpm/store pnpm install --frozen-lockfile

# The rest of your build...
```
Warning: This is powerful but adds complexity. Your CI system needs to support BuildKit and its caching features properly. It’s not a drop-in solution everywhere, but when you have hundreds of builds a day, the performance gains from a shared, persistent pnpm cache can be astronomical.
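To make the trade-off concrete, here is a sketch of what this can look like in CI, assuming GitHub Actions (the workflow name and image tag are placeholders). The `buildx` builder supports exporting BuildKit's layer cache to the Actions cache backend; note that the contents of `RUN --mount=type=cache` live on the builder itself, so a long-lived or self-hosted builder benefits most:

```yaml
# .github/workflows/build.yml (illustrative)
name: build
on: push
jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # buildx provides a BuildKit builder with cache export support
      - uses: docker/setup-buildx-action@v3
      - uses: docker/build-push-action@v6
        with:
          context: .
          push: false
          tags: my-app:latest
          # Persist BuildKit's layer cache in the GitHub Actions cache backend
          cache-from: type=gha
          cache-to: type=gha,mode=max
```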
Which Path Should You Choose?
There’s no single right answer, only trade-offs. Here’s how I break it down for my team:
| Solution | Effort | Best For |
| --- | --- | --- |
| 1. Switch to Bun | Low | Quick wins, emergencies, or teams already invested in the Bun ecosystem. |
| 2. Multi-Stage Build | Medium | The default, robust solution for almost all projects. This should be your goal. |
| 3. BuildKit Cache Mounts | High | Large-scale monorepos or performance-critical pipelines where every second counts. |
At the end of the day, a slow pipeline isn’t just an annoyance; it’s a drag on developer velocity and a real cost to the business. Taking an hour to properly structure your Dockerfile can save you hundreds of hours of waiting and thousands of dollars in compute costs over the life of a project. Now go fix those builds.
👉 Read the original article on TechResolve.blog