pnpm workspaces in a Next.js 16 monorepo: what the benchmark didn't measure and almost broke my CI
Back in 1994, when my dad brought home an Amiga 500, I knew nothing about benchmarks. I knew that if the disk took too long to load, something was wrong. No formal metrics — just finite patience and a concrete problem staring me in the face. Thirty years later, when I published the pnpm vs npm vs yarn benchmark on my monorepo, I had clean numbers: install time, disk usage, cold cache vs warm cache. Neat. Publishable. And completely blind to what came next.
Because the benchmark measured install. It didn't measure what happens when pnpm workspaces and Next.js 16 App Router collide inside a CI environment with partial cache and shared packages across workspaces. You don't see that in a bash script timed on your local machine. You see it when the Railway pipeline throws a cryptic error at 11pm and the build has been running for 18 minutes with no end in sight.
Here's my thesis: pnpm workspaces is still the best option for monorepos in 2026, but it has hoisting edge cases that don't show up in any install-time benchmark and can cost you hours of CI debugging if you don't know exactly which configuration to apply with Next.js 16 App Router. These aren't pnpm bugs — they're documented consequences of the strict isolation model that makes pnpm superior in other ways. The problem is that the official docs assume you've read all the prior context, and in CI that assumption falls apart.
The problem the benchmark didn't measure: cache invalidation and hoisting in workspaces
When I ran the original benchmark, the structure was simple: a monorepo with two apps and one shared package. The script measured pnpm install from scratch and with cache. Numbers looked great. What I didn't measure was pnpm's behavior in CI under these combined conditions:
- A shared @repo/ui package with React components
- An apps/web app with Next.js 16 App Router importing from @repo/ui
- GitHub Actions caching ~/.pnpm-store between runs
- Railway as the deploy target with its own build step
The error that surfaces in this scenario doesn't happen during pnpm install. It happens during the Next.js build, and the message is generic enough to send you searching in completely the wrong place:
Error: Cannot find module '@repo/ui/components/Button'
Require stack:
- /app/apps/web/.next/server/chunks/[turbopack]_root_of_the_server__[...].js
In this context, that error is not a bad import path. It's a direct consequence of how pnpm handles dependency hoisting in workspaces with nested node_modules — and how Next.js 16 Turbopack resolves modules differently from webpack.
How hoisting works in pnpm (and why it breaks here)
The official pnpm workspaces docs (pnpm.io/workspaces) explain the model: unlike npm and yarn, pnpm doesn't aggressively hoist by default. Each package in the workspace has its own dependencies in its own node_modules, and shared packages are resolved via symlinks into the global store.
In theory, that's exactly what you want. In practice, there's a specific edge case with Next.js 16 and Turbopack:
Turbopack resolves modules following the Node.js algorithm, which walks up the node_modules directory tree. When @repo/ui has a dependency that's also declared in apps/web but at a different version (even if semver-compatible), pnpm creates two instances in the store. Turbopack, during the CI build, can end up resolving the wrong instance depending on which order it processes chunks.
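That walk-up behavior is easy to see with a throwaway sketch. This is a minimal model only — plain directories under /tmp stand in for installed packages, and no Node or pnpm is required:

```shell
# find_pkg models Node's resolution: starting from a directory, check each
# ancestor's node_modules for the package, nearest match wins.
find_pkg() {
  dir=$1; pkg=$2
  while [ "$dir" != "/" ]; do
    if [ -d "$dir/node_modules/$pkg" ]; then
      echo "$dir/node_modules/$pkg"
      return 0
    fi
    dir=$(dirname "$dir")
  done
  echo "not found"
}

mkdir -p /tmp/walkup/monorepo/node_modules/react          # hoisted copy at the root
mkdir -p /tmp/walkup/monorepo/apps/web/node_modules/react # app-local copy
find_pkg /tmp/walkup/monorepo/apps/web react   # nearest copy wins: the app-local one
rm -rf /tmp/walkup/monorepo/apps/web/node_modules/react
find_pkg /tmp/walkup/monorepo/apps/web react   # now falls back to the root copy
```

The same import statement resolves to two different directories depending on which copies exist — which is exactly the ambiguity that two React instances in the store create for Turbopack.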
The concrete scenario that reproduces the problem:
monorepo/
├── packages/
│ └── ui/
│ └── package.json # "react": "^18.3.0"
├── apps/
│ └── web/
│ └── package.json # "react": "^18.3.1" ← different patch version
└── pnpm-workspace.yaml
# pnpm-workspace.yaml
packages:
- 'apps/*'
- 'packages/*'
With this configuration and no explicit .npmrc, pnpm can install two versions of React in the store. Locally you usually don't see it because the warm cache resolves consistently. In CI with partial cache (the store is cached but the lockfile changed recently), the behavior is non-deterministic.
Here's the exact mechanism:
# Run this from the monorepo root to see how many React instances pnpm has
pnpm why react --recursive
# If you see something like this, you have the problem:
# apps/web
# └── react 18.3.1
# packages/ui
# └── react 18.3.0 ← different instance
That duplication isn't anecdotal: in a monorepo with 6 shared packages and 3 apps, you can end up with 11 duplicate instances of peer dependencies. Each one takes up space in the store and, more importantly, can cause incorrect resolution at runtime during the Next.js build.
The fix: .npmrc with public-hoist-pattern and peer version sync
The fix — documented but buried — is configuring .npmrc correctly at the monorepo root. There are three approaches worth understanding.
Option 1: shamefully-hoist=true — the nuclear option
# .npmrc at the monorepo root
shamefully-hoist=true
This makes pnpm behave like npm/yarn with aggressive hoisting. Fixes the problem immediately. But you lose pnpm's main benefit: strict dependency isolation. As the monorepo scales, you'll get phantom dependencies that work in dev but blow up in production. I don't recommend this path except as a temporary diagnostic.
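What a phantom dependency looks like, sketched with bare directories — the package name leftpad is hypothetical, and empty directories stand in for installed packages:

```shell
# With shamefully-hoist, transitive deps of every workspace land in the root
# node_modules. This models that layout:
mkdir -p /tmp/hoist-demo/node_modules/leftpad   # transitive dep, hoisted to root
mkdir -p /tmp/hoist-demo/apps/web               # workspace that never declared it
# Node's walk-up resolution from apps/web still reaches the root node_modules,
# so an import of "leftpad" works even though no package.json declares it.
# Switch back to pnpm's strict layout and that same import breaks in production.
if [ -d /tmp/hoist-demo/node_modules/leftpad ]; then
  echo "phantom dependency resolvable from apps/web"
fi
```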
Option 2: public-hoist-pattern — the surgical fix
# .npmrc at the monorepo root
# Selective hoisting: only the deps that actually need to live at the root
public-hoist-pattern[]=*react*
public-hoist-pattern[]=*react-dom*
public-hoist-pattern[]=*next*
public-hoist-pattern[]=@types/*
This tells pnpm: "these specific dependencies always go into the root node_modules". Turbopack finds them in a predictable location, regardless of which workspace declares them. Everything else keeps strict isolation.
Option 3: Sync peer versions in the lockfile — the root cause fix
The cleanest long-term solution is eliminating the duplication at the source:
pnpm-workspace.yaml alone isn't enough — you also need this in the root package.json:
{
"pnpm": {
"overrides": {
"react": "18.3.1",
"react-dom": "18.3.1"
}
}
}
With pnpm.overrides, you force a single version of React across the entire monorepo. pnpm respects it in every workspace and the store has exactly one instance. This is the combination that works best in CI with cache: deterministic, reproducible, and no hoisting that compromises isolation.
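A quick way to confirm the override took effect is counting distinct React versions in pnpm-lock.yaml. The grep pattern is an assumption about the lockfile's package-key format, and the heredoc below is a fake lockfile standing in for a real one:

```shell
# Fake pnpm-lock.yaml representing the state after the override is applied
cat > /tmp/pnpm-lock.yaml <<'EOF'
packages:
  react@18.3.1: {}
  react-dom@18.3.1: {}
EOF
# One output line per distinct React version; more than one means duplication
grep -oE 'react@[0-9]+\.[0-9]+\.[0-9]+' /tmp/pnpm-lock.yaml | sort -u
```

Run the same grep against your real lockfile after pnpm install; a single line of output is the state you want.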
After applying this configuration, the GitHub Actions behavior changes measurably:
# .github/workflows/ci.yml — relevant fragment
- name: Setup pnpm
uses: pnpm/action-setup@v4
with:
version: 9
- name: Cache pnpm store
uses: actions/cache@v4
with:
path: ~/.local/share/pnpm/store
# Cache key that includes the full lockfile
# If the lockfile didn't change, the full store is available
key: pnpm-store-${{ hashFiles('**/pnpm-lock.yaml') }}
restore-keys: |
pnpm-store-
- name: Install dependencies
run: pnpm install --frozen-lockfile
# --frozen-lockfile is mandatory in CI: it fails if the lockfile is out of date
# instead of silently updating it and breaking the next run's cache
The CI time difference with the correct overrides configuration and a lockfile-based cache key is significant: a monorepo with 6 workspaces can go from non-deterministic 12-18 minute builds to reproducible 4-6 minute builds on warm-cache runs. The savings don't come from installing faster — they come from not having to re-resolve the dependency graph when the store has inconsistencies.
The errors that waste your time because they look like something else
After diagnosing this type of problem across different configurations, here are the three error patterns that eat the most time because they appear to be something completely different:
Error 1: "Cannot find module" in build, not in dev
Module not found: Can't resolve '@repo/ui/components/Button'
This error only appears in next build, not in next dev. In development, Next.js uses the file system directly with hot reload and dodges the resolution problem. In build, Turbopack constructs the full module graph — and that's where the double React instance forces an inconsistent resolution path. If you see this error only in CI, hoisting is almost certainly the cause.
Error 2: "Invalid hook call" at runtime after a successful build
Error: Invalid hook call. Hooks can only be called inside of a function component.
This one is the most treacherous. The build finishes without errors, the deploy lands on Railway, and then it explodes at runtime with a hooks error. The cause is exactly the same: two React instances in the final bundle. The @repo/ui workspace component uses React instance A, the apps/web app uses React instance B, and when a hook crosses that boundary, React doesn't recognize them as the same runtime.
The verification is straightforward:
# Verify both workspaces resolve to the same React instance after pnpm install
# Run this from the monorepo root
readlink apps/web/node_modules/react
readlink packages/ui/node_modules/react
# pnpm symlinks each workspace's react into the store; if both paths point to the
# same react@<version> entry, there's a single instance — correct. Two different
# versions in the output are exactly what produces the hook error above.
Error 3: Silent cache invalidation on Railway
Railway caches the pnpm store between deploys, but the default key it uses doesn't always include the full lockfile. If the lockfile changed because you updated a dependency in one workspace, Railway can restore a store that doesn't match the current lockfile, and pnpm install --frozen-lockfile fails with an integrity error that says nothing useful about the actual cause.
The fix is to explicitly configure the cache in Railway using an environment variable that invalidates the cache when the lockfile changes:
# In railway.json or as an environment variable in Railway
RAILWAY_CACHE_KEY=$(sha256sum pnpm-lock.yaml | cut -d' ' -f1)
FAQ: pnpm workspaces, Next.js 16, and CI
Why does this problem not appear locally but shows up in CI?
Locally, the pnpm store is warm and consistent because you built it incrementally. In CI, the cache is restored partially or from a stale key. The combination of partial store + updated lockfile + Turbopack module resolution creates race conditions in dependency graph resolution that never happen locally.
Is shamefully-hoist=true a valid solution or just a band-aid?
It's a valid band-aid for diagnostics and for small monorepos where strict isolation isn't a priority. For monorepos that scale (more than 4-5 packages, teams of more than 2 people, dependencies that diverge between workspaces), shamefully-hoist=true will create phantom dependencies you'll only discover in production. Use it to confirm the problem is hoisting-related, then apply public-hoist-pattern or pnpm.overrides.
Does pnpm.overrides affect transitive dependency resolution?
Yes, and that's exactly what it's for. pnpm.overrides forces a specific version of a dependency across the entire dependency tree, including transitive ones. If @repo/ui has a dependency that itself depends on React, pnpm.overrides guarantees that nested dependency also uses the version you specify. It's the right mechanism for controlling peer dependencies in monorepos.
Does Next.js 16 with Turbopack behave differently from webpack in this regard?
Yes. Turbopack has its own module resolver that isn't 100% compatible with webpack's behavior in edge cases. Specifically, Turbopack can memoize resolution paths during the build in a way webpack doesn't, which makes pnpm store inconsistencies easier to trigger. With classic webpack, many of these cases pass unnoticed or produce warnings instead of fatal errors.
How do I know if my CI cache is generating non-deterministic builds?
Run the same commit twice in CI without any changes and compare the Next.js chunk hashes in .next/static/chunks/. If file names change between identical runs, you have non-determinism in the resolution. A deterministic build produces exactly the same chunk names for the same source code. If there are differences, the first suspect is a pnpm store with inconsistencies between the restored cache and the current lockfile.
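The comparison can be scripted. This is a sketch of the diff step only — the fake directories below stand in for two runs' .next/static/chunks listings, and the chunk names are invented:

```shell
# Capture the chunk file names from two CI runs of the same commit and diff them
mkdir -p /tmp/detcheck/run-a /tmp/detcheck/run-b
touch /tmp/detcheck/run-a/main-abc123.js   # run A emitted this chunk name
touch /tmp/detcheck/run-b/main-def456.js   # run B emitted a different hash
ls /tmp/detcheck/run-a | sort > /tmp/detcheck/a.txt
ls /tmp/detcheck/run-b | sort > /tmp/detcheck/b.txt
if diff -q /tmp/detcheck/a.txt /tmp/detcheck/b.txt >/dev/null; then
  echo "deterministic: identical chunk names"
else
  echo "non-deterministic: chunk names differ between identical runs"
fi
```

In a real pipeline, save each run's listing as a build artifact and diff them the same way.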
Does this problem apply only to Next.js or to any app in the monorepo?
The hoisting problem applies to any framework in the workspace, but Next.js with Turbopack makes it more visible because its build resolves the full module graph more aggressively. Remix, Vite, and other builders may silence the error or produce non-fatal warnings. Next.js with Turbopack tends to fail loudly, which is ironically the correct behavior. The problem exists in all cases; Next.js just makes it impossible to ignore.
The benchmark measures what you measure, not what matters
When I published the original pnpm vs npm vs yarn post, the most important number I measured was install time. I was right that pnpm wins on speed and disk usage. I was wrong to assume those numbers captured the total cost of working with workspaces in CI.
The real cost of pnpm workspaces isn't in the install. It's in the .npmrc configuration, in peer version synchronization, and in the cache key you use in GitHub Actions and Railway. None of that shows up in a bash benchmark script. It shows up at 11pm when CI has been running for 18 minutes and the error says "Cannot find module" but the module is right there — in the store, in two simultaneous versions that are stepping all over each other.
My position after working with this configuration: pnpm workspaces + pnpm.overrides + public-hoist-pattern for React + a cache key based on the full lockfile is the correct configuration for monorepos with Next.js 16 in 2026. It's not complicated once you understand it. The problem is that nobody documents it all together, in one place, with the context of why each piece matters.
The official pnpm docs (pnpm.io/workspaces) have everything you need to build this configuration — but they assume you arrive with the right context. This post is that context.
If you're evaluating the full stack, the other posts in this series are relevant: the Spring Boot on Railway analysis has the same "the default isn't right for your case" pattern, and the post on functional programming in TypeScript gets into how the patterns that survive in production are the ones that are verifiable, not the ones that look elegant on paper.
The monorepo will keep teaching lessons. The next ones I'll measure better.
This article was originally published on juanchi.dev