DEV Community

DevUnionX
5 Things AI Can't Do, Even in npm / yarn / pnpm

I Stopped Caring About Package Managers. Then I Cared a Lot. Here's the Whole Story.

This is my game, by the way, if you want to support me:

Electricity Tycoon - Apps on Google Play — build your electricity empire from a hand crank to industrial power plants.

Thank you!

A few months ago I joined a project where the node_modules folder was 2.4 gigabytes. Two point four. On a fresh clone. The pnpm install — yes, they were already on pnpm — took fourteen seconds, but the first install on a new laptop took closer to ninety because of the cold cache. I sat there watching the spinner and thinking about how, ten years ago, I was running npm install on a MacBook Air with a spinning rust drive and an ExpressVPN connection from a coffee shop in Berlin, and it took roughly the same amount of time, except for a project that had maybe sixty dependencies instead of nine hundred.

We've made progress. We've also made a mess. And the three tools we use to manage that mess — npm, yarn, and pnpm — are not interchangeable, even though every junior dev I've onboarded in the last three years thinks they are.

This is the article I wish I'd had when I switched between all three for the first time. It's long. Get coffee.

The Part Where I Pretend You Need a History Lesson

You probably don't, but the history actually matters because it explains why these tools behave the way they do. Decisions made in 2010 are still ruining your Tuesday in 2026.

npm showed up in 2010, written by Isaac Schlueter. It was bundled with Node.js starting in 2011 and that's basically why it won — it was the default. There was no real competition. If you wrote JavaScript on the server, you used npm. The registry at registry.npmjs.org also became the de facto package registry, which is why even today when you install something with yarn or pnpm, you're still pulling from npm's servers. The registry and the CLI are separate things, even though most people conflate them.

For about five years, npm coasted on being the only option. And it had problems. Slow installs. Non-deterministic dependency resolution — meaning if you and I both ran npm install on the same package.json, we could end up with subtly different node_modules trees, which led to the infamous "works on my machine" bug a thousand times over. No lockfile by default until npm 5 in 2017 (npm-shrinkwrap existed earlier, but it was opt-in and rarely used). The registry itself had outages. And then, in March 2016, left-pad happened.

If you don't know the left-pad incident: a developer named Azer Koçulu got into a naming dispute with Kik (the messaging company) over an npm package called kik. npm sided with Kik and transferred the package. Azer, in protest, unpublished all 273 of his packages. One of them was left-pad, an eleven-line function that pads the left side of a string with a given character, typically spaces or zeros. It had millions of downloads a week. Within minutes, half the JavaScript ecosystem broke. React broke. Babel broke. Build pipelines around the world fell over. It was genuinely funny and also genuinely terrifying.

The aftermath of left-pad was that npm changed its policies on unpublishing, but the deeper damage was a loss of trust. People started looking around. And in October 2016, Yarn appeared, made by Facebook (with help from Google, Exponent, and Tilde). It was a direct response to npm's problems: deterministic installs, a real lockfile, parallel network requests, offline caching. The first time I ran yarn install on a project that took two minutes with npm and watched it finish in twenty seconds, I genuinely thought something had broken.

Yarn embarrassed npm into getting better. npm 5 added package-lock.json (yes, they had to copy yarn's idea), npm 6 added npm audit, npm 7 added workspaces, and so on. But while npm was catching up, Yarn was about to fork itself in half.

Yarn 2 — codenamed Berry — came out in early 2020, and it was a ground-up rewrite. The flagship feature was Plug'n'Play (PnP), which we'll get to in a minute. It was so different from Yarn 1 that the community basically split. Some teams went with Berry. Most stayed on Yarn 1. The Yarn 1 codebase got moved to "Yarn Classic" status — still working, but not really developed anymore. As of right now in 2026, Yarn 1 is still installed on more machines than Yarn 4. Let that sink in.

Meanwhile, in 2017, Zoltan Kochan released pnpm. The "p" stands for "performant," but the real innovation wasn't speed — it was the storage model. pnpm built a content-addressable store on your disk. One copy of lodash@4.17.21 on your whole machine, hard-linked into every project that needs it. If you've ever had ten projects and ten copies of node_modules, each weighing in at hundreds of megabytes, this is appealing.

For years pnpm was the weird underdog. The "actually I use Arch btw" of package managers. Then around 2022, things started shifting. Vue switched to pnpm. Vite uses pnpm. Astro uses pnpm. SvelteKit's create-svelte defaults to it for monorepo setups. Microsoft uses it internally. The state-of-JS surveys started showing pnpm climbing every year. And by 2024, pnpm wasn't a niche choice anymore — it was the default for new TypeScript-heavy monorepos.

And Bun showed up in 2022 doing absolutely everything (runtime, bundler, test runner, package manager) at speeds that, when I first benchmarked them, made me check my command twice. We'll come back to Bun.

How They Actually Work (And Why It Matters For Your Bugs)

Here's the section that most "npm vs yarn vs pnpm" articles skip or hand-wave through, because it's the boring part. It's also the only part that matters when something breaks at 11pm on a Friday before a deploy.

npm and Yarn Classic: The Flat-ish, Hoisted Mess

When you run npm install (or yarn install with Yarn 1), the package manager looks at your package.json, resolves all the dependencies and their dependencies and their dependencies' dependencies, and then writes them all into node_modules. The thing is, it doesn't write them in a tree. It mostly flattens them.

Why flatten? Historical reasons. In the very early days, Node had nested node_modules folders. If your project depended on package A, and A depended on B@1.0, and you also depended directly on B@2.0, then you'd get a nested structure: node_modules/B (2.0) and node_modules/A/node_modules/B (1.0). This worked fine, but on Windows the path lengths would explode and break the file system. So npm 3 introduced hoisting — pull as many packages as possible up to the top level of node_modules.

This is why your node_modules folder, when you peek inside it, has hundreds of packages at the top level even though your package.json only lists twenty. All those extras are transitive dependencies that got hoisted up.
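To make the A/B example concrete, here is roughly what the two layouts look like on disk (illustrative sketch, using the versions from above):

```
# npm 2 and earlier — fully nested:
node_modules/
├── A/
│   └── node_modules/
│       └── B/          (1.0, A's copy)
└── B/                  (2.0, yours)

# npm 3+ — hoisted; only the conflicting version stays nested:
node_modules/
├── A/
│   └── node_modules/
│       └── B/          (1.0, can't be hoisted because B 2.0 holds the top slot)
├── B/                  (2.0)
└── ... plus hundreds of A's other transitive deps, all hoisted to the top
```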

And here is the bug that has eaten more of my Tuesdays than I want to remember: phantom dependencies.

Say your project depends on express. Express depends on debug. Both end up in your top-level node_modules. Now in your code, you write const debug = require('debug') even though debug is not in your package.json. It works! Because Node's resolution algorithm just looks for packages in node_modules, it doesn't care whether you declared them. Your tests pass. You commit. Six months later, Express updates and drops debug as a dependency. You upgrade Express. Everything explodes. And you have no idea why because the error says "cannot find module debug" and you're looking at code that's been working fine for half a year.

This is a phantom dependency. It's a package you're using but not declaring. npm and Yarn Classic let you do this freely, by design. You can't really opt out without third-party tools.

The other annoyance with the flat model is non-determinism in hoisting. If two packages need conflicting versions of the same dependency, the package manager has to decide which one to hoist and which one to nest. Different versions of npm can make different decisions. Different lockfile states can make different decisions. This is part of why "delete node_modules and reinstall" is JavaScript's universal cure-all.

Yarn Berry: Plug'n'Play, the Beautiful Disaster

Yarn 2+ took a completely different approach. They looked at node_modules and said: this is wrong. The whole concept is wrong. Why are we making a giant folder full of files when we have a lockfile that knows exactly which version of each package every part of our code needs?

So they killed node_modules. With Plug'n'Play, Yarn Berry stores all packages as zip files in .yarn/cache/, and writes a single file called .pnp.cjs (or .pnp.loader.mjs for ESM) at the root of your project. This file is a giant lookup table that maps "package X at version Y, when imported by package Z" to "this exact zip file in this exact location." Yarn then patches Node's module resolution to consult this file instead of walking node_modules.
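Conceptually, the table encodes something like this — a simplified illustration, not Yarn's actual on-disk format:

```json
{
  "myapp@workspace:.": {
    "packageLocation": "./",
    "packageDependencies": { "express": "4.18.2" }
  },
  "express@4.18.2": {
    "packageLocation": "./.yarn/cache/express-npm-4.18.2-<hash>.zip/node_modules/express/",
    "packageDependencies": { "debug": "4.3.4" }
  }
}
```

Every resolution is a table lookup keyed by who is asking, which is why two packages can depend on different versions of the same thing without any hoisting gymnastics.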

The advantages are real:

  • Installs are faster, because there's no file I/O for thousands of small files — just unzipping when needed.
  • Installs are deterministic, because the lookup table is deterministic.
  • Phantom dependencies are impossible — if you require something not in your package.json, the lookup fails, full stop.
  • Disk usage drops.
  • You can even check .yarn/cache/ into git and have zero-installs, where cloning the repo gives you a working project with no yarn install step at all.

The disadvantages are also real, and they killed PnP for a lot of teams.

The whole JavaScript ecosystem assumes node_modules exists. Tools assume it. Editors assume it. Webpack assumed it. TypeScript assumed it. ESLint assumed it. When PnP first came out, every other tool needed a Yarn plugin or a special configuration to work. The TypeScript integration required yarn dlx @yarnpkg/sdks vscode, which set up VSCode to understand the PnP layout. If a colleague forgot that step, their editor would have red squiggles under every import while yours was fine. Some packages just didn't work with PnP at all, and you had to add them to a packageExtensions field in .yarnrc.yml to manually patch their dependencies.
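For the record, a packageExtensions patch in .yarnrc.yml looks like this — "broken-pkg" is a made-up name for illustration:

```yaml
# .yarnrc.yml — declare a dependency that a package forgot to list,
# so PnP's strict lookup stops rejecting its require() calls.
packageExtensions:
  "broken-pkg@*":
    dependencies:
      lodash: "*"
```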

Yarn Berry also introduced a node-modules linker as a fallback, so you could opt back into the old model. A lot of teams that "use Yarn Berry" actually use it in nodeLinker: node-modules mode, which is basically Yarn Classic with a different lockfile format. You get none of the PnP benefits and a more complicated config.
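The fallback is a one-line setting:

```yaml
# .yarnrc.yml — opt out of PnP and get a classic node_modules tree
nodeLinker: node-modules
```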

I have used Yarn Berry on three projects. Two of them were pure joy after the first week. One was a nightmare because we had a legacy dependency that did dynamic require calls based on user input, and PnP couldn't follow them. We migrated that project off Yarn Berry six months later. The honest summary: Yarn Berry is technically excellent and culturally lonely.

pnpm: Symlinks All the Way Down

pnpm is the one that, when you understand how it works, makes you go "wait, why didn't we do this from the start?"

pnpm has two layers. First, there's a content-addressable store, usually at ~/.local/share/pnpm/store (or ~/Library/pnpm/store on Mac). Every package version you've ever installed lives there exactly once. Not once per project — once on your entire machine. lodash@4.17.21 is a single set of files in the store, regardless of how many projects use it.

Second, in each project, pnpm creates a node_modules folder. But it's not flat. It looks like this:

node_modules/
├── express          → symlink to .pnpm/express@4.18.2/node_modules/express
├── react            → symlink to .pnpm/react@18.2.0/node_modules/react
└── .pnpm/
    ├── express@4.18.2/
    │   └── node_modules/
    │       ├── express/      → hard link to global store
    │       ├── debug/        → symlink to .pnpm/debug@4.3.4/...
    │       └── ...
    ├── debug@4.3.4/
    │   └── node_modules/
    │       └── debug/        → hard link to global store
    └── ...

The top level of node_modules only contains your direct dependencies — the things in your package.json. Each of those is a symlink into the .pnpm/ folder, which contains the real layout. Inside .pnpm/, each package gets its own folder named with its version, and inside that folder is a node_modules/ containing the package itself (as a hard link to the global store) plus symlinks to its dependencies.

This is brilliant for several reasons.

Phantom dependencies become impossible by default. If you don't list debug in your package.json, then there's no symlink to debug at the top of your node_modules. So require('debug') from your code fails immediately. This is a feature, not a bug, and it has saved me from so many mystery breakages.

Disk space is shared across projects. If you have ten Node projects on your machine, you have one copy of each unique package version on disk. The global store on my laptop is currently 8.4 GB. The combined node_modules of all my projects, if I were using npm, would be somewhere north of 60 GB. With pnpm, the actual on-disk size of all those node_modules folders combined is closer to 12 GB, because most of it is hard links pointing to the same blocks.

Installs are fast because most of the work is hard-linking, not copying. Hard-linking a file is essentially free — it's a couple of metadata operations. Copying a file requires actually reading and writing bytes. pnpm's "warm" install on a project I worked on (about 1,200 dependencies) took 6 seconds. The same npm install took 45 seconds.

The downsides of pnpm exist but are smaller than they used to be:

Some packages assume the flat hoisted layout and break with strict pnpm. The fix is usually public-hoist-pattern in .npmrc, or in extreme cases --shamefully-hoist, which makes pnpm behave like npm. The name is intentional: the maintainers want you to feel bad about using it, and you should.
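If you hit one of these, the escalation ladder in .npmrc looks roughly like this — the patterns here are examples; scope them to the packages that actually break:

```ini
# .npmrc — hoist only what misbehaving tools need to see at the top level
public-hoist-pattern[]=*eslint*
public-hoist-pattern[]=*prettier*

# Nuclear option: behave like npm's flat layout. Works, but you
# lose pnpm's strictness, which was the point of using pnpm.
# shamefully-hoist=true
```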

Symlinks on Windows used to be flaky. They're fine now on modern Windows 10 and 11 if developer mode is on, but you can still hit weird issues with certain antivirus tools or file watchers. On WSL, no problems.

Some bundlers and build tools used to choke on the symlink structure. Webpack 4 had issues. Vite, esbuild, Rollup, Webpack 5 all handle it fine. If you're on a modern toolchain, you won't notice.

Performance: Numbers, With Caveats

Everyone benchmarks package managers wrong. They run npm install once, then pnpm install once, and call it a day. The reality is that there are like five different "install" scenarios, and the answer changes for each.

Here's the rough picture, based on my own benchmarking on a project with 947 dependencies (a fairly chunky Next.js + tRPC + Prisma stack), running on an M3 MacBook with a fast SSD and a 1 Gbps connection:

Cold install (no lockfile, no cache, fresh node_modules). This is the worst case — basically a brand new machine cloning the repo for the first time.

  • npm 10: 78 seconds
  • yarn 1: 41 seconds
  • yarn 4 (pnp): 23 seconds
  • yarn 4 (node-modules): 38 seconds
  • pnpm 9: 31 seconds
  • bun 1.1: 9 seconds

Warm install with lockfile (lockfile exists, cache populated, node_modules exists, nothing changed). This is what happens when CI runs and your dependencies haven't changed.

  • npm 10: 14 seconds
  • yarn 1: 8 seconds
  • yarn 4 (pnp): 1.4 seconds
  • pnpm 9: 1.8 seconds
  • bun 1.1: 0.6 seconds

Warm install after deleting node_modules (lockfile + cache, but node_modules is gone — the classic "delete and reinstall" workflow).

  • npm 10: 22 seconds
  • yarn 1: 18 seconds
  • yarn 4 (pnp): 2 seconds (because there's no node_modules to recreate, just .pnp.cjs)
  • pnpm 9: 6 seconds
  • bun 1.1: 1.5 seconds

Adding a single dependency (the most common everyday operation).

  • npm 10: 11 seconds
  • yarn 1: 7 seconds
  • yarn 4: 3 seconds
  • pnpm 9: 4 seconds
  • bun 1.1: 0.8 seconds

A few things stand out. Bun is the fastest by a wide margin, when it works. (More on that.) pnpm and Yarn Berry trade blows depending on the scenario. npm is consistently the slowest, but the gap has narrowed dramatically since around npm 9 — npm has gotten genuinely better, and the days of npm install taking minutes for medium projects are mostly over.

The other dimension is disk usage. On the same Next.js project:

  • npm: 612 MB node_modules
  • yarn 1: 597 MB
  • yarn 4 (pnp): 89 MB .yarn/cache + a tiny .pnp.cjs
  • pnpm: 198 MB project node_modules (mostly hard links — actual disk usage is much lower)
  • bun: 580 MB (bun uses a flat layout currently)

If you have ten projects on your machine, multiply those numbers by ten and ask yourself how much of your SSD you want to dedicate to duplicates of TypeScript and Babel.

The npm Story: Boring, Default, Mostly Fine Now

I've been hard on npm in this article and I want to be fair. npm in 2026 is genuinely good. It's not the disaster it was in 2016. The team at GitHub (which has owned npm since 2020) has shipped real improvements:

package-lock.json v3 is more compact and reproducible than v1 was. Workspaces (added in npm 7) actually work for monorepos, even if they're less powerful than pnpm's. npm audit is integrated into the install flow. npm ci gives you a clean, deterministic install for CI environments. npm exec (and the older npx) handle one-off package execution.

The biggest reason to use npm is that it's there. Every Node installation has it. Every tutorial assumes it. Every AI coding assistant defaults to it. If you're working on a small project, a quick prototype, a tutorial repo, or anything where "lowest common denominator" is the right move — npm is the right call. You will not regret it.

Where npm starts to creak is when your project grows. The phantom dependency problem doesn't go away just because npm has gotten faster. Workspaces work but they're not as ergonomic as pnpm's. Disk usage grows linearly with project count. And the package-lock.json merge conflicts in large teams are genuinely painful — that file regenerates wholesale on certain operations and produces godawful diffs.

If your team is small and your projects are simple, just use npm. Don't let me or anyone else talk you into churn for the sake of churn. Switching package managers has costs, and they only pay off above a certain project size or team size.

The Yarn Story: A Cautionary Tale About Forks

Yarn's situation in 2026 is genuinely strange.

Yarn 1 is still maintained but barely. The repo gets occasional security patches. Most of its features (parallel installs, lockfiles, offline cache) have been adopted by npm, so the original reason to switch is gone. If you're still on Yarn 1 today, it's probably because of inertia or because you have CI configurations that would be a pain to migrate.

Yarn 4 (the latest Berry) is technically excellent. The plugin system is genuinely powerful. Constraints — a feature that lets you enforce rules across a workspace, like "all packages must use the same React version" — is fantastic for monorepos. Zero-installs are a real productivity win when they work.

But the social proof has moved. When you go to a new TypeScript project's setup docs in 2026, the "recommended" package manager is almost always pnpm. Yarn Berry is the technically interesting choice that you need to defend in code review. "Why are we using Yarn Berry?" "Because we like it." "Okay but the rest of the ecosystem is on pnpm." "...okay."

I'd describe Yarn Berry's current state as: if you're already on it and it's working, stay. The migration cost off it isn't worth the hypothetical benefits of switching. If you're starting a new project, pnpm gets you 80% of the benefits with 20% of the friction.

The pnpm Story: How a Niche Tool Took Over

pnpm's growth curve over the last four years has been quietly dramatic. Looking at the State of JS surveys: in 2021, around 16% of respondents had used pnpm. In 2023, that was up to 31%. The 2024 numbers showed pnpm passing Yarn in "would use again" satisfaction scores for the first time. By 2025, pnpm was essentially tied with npm for new monorepo adoption among teams over 20 engineers.

Why? A few reasons.

The first is that the technical advantages pnpm provides — strictness, disk efficiency, monorepo ergonomics — actually matter more as projects grow. For a five-page Next.js site, you don't care. For a 40-package monorepo with three apps and a shared component library and a couple of internal CLIs, you care a lot. And the JavaScript world has been steadily moving toward bigger monorepos, partly because tools like Turborepo and Nx have made them tractable.

The second is that pnpm got good defaults. It picks up .npmrc files, it speaks the same lockfile-ish language conceptually, and the migration from npm is usually pnpm import (which converts package-lock.json to pnpm-lock.yaml) and then pnpm install. I've migrated four projects from npm to pnpm and the longest one took an afternoon, mostly spent fixing phantom dependencies that had been hiding for years.

The third is that the major frameworks endorsed it. When Vue switched, when Vite shipped with pnpm-friendly examples, when SvelteKit's create script started recommending it — that's tens of thousands of new developers each month being introduced to pnpm as the default-feeling option.

The fourth, and I think most underrated, is that pnpm's CLI ergonomics are just nicer. pnpm add foo instead of npm install foo. pnpm dlx foo instead of npx foo. pnpm -r exec for running commands across all workspace packages. pnpm why foo for tracing why a package is in your tree. These are small things but they add up.

The pnpm pain points I've actually hit:

Some old packages have implicit assumptions about hoisted dependencies and break under strict mode. The fix is usually a public-hoist-pattern line in .npmrc. About once a year a popular package needs this treatment and the pnpm community usually has the fix on their issue tracker within hours.

The default behavior of running scripts in workspace packages is different from npm/yarn, and the syntax for things like "run build in only this package and its dependencies" is pnpm --filter "my-app..." build, which I had to look up the first three times I used it.

The lockfile format is YAML, which is nicer to read than JSON but has YAML's classic indentation footguns when you try to manually edit it. (Don't manually edit lockfiles. I know. I've done it. It bit me.)

Bun: The Wild Card

I promised I'd come back to Bun. Bun is interesting because it's not really a package manager — it's a JavaScript runtime that happens to include a package manager that happens to be much faster than the others.

bun install on the same projects I benchmarked above was consistently 2-5x faster than pnpm and 5-10x faster than npm. This is partly because Bun is written in Zig and is genuinely well-optimized, and partly because Bun makes some opinionated choices (like its own lockfile format, bun.lockb, which is binary) that trade compatibility for speed.

The catch is that Bun is still maturing. Some packages that depend on Node's exact behavior have edge-case bugs under Bun. The binary lockfile is unreadable in code review (Bun added a text-based bun.lock format in 2024 to address this). And while Bun aims to be a drop-in replacement, "drop-in replacement" is doing a lot of work in that sentence — there are real differences.

For new projects in 2026, Bun is increasingly viable. I've shipped two side projects using bun install and it's been fine. For an existing large project with complex dependencies and tooling, I'd be more cautious. The tooling around Bun (bundlers, deploy targets, etc.) is improving fast but isn't as mature as the npm/yarn/pnpm ecosystem.

If Bun keeps its current trajectory, the "big four" of package managers in 2027 might genuinely be npm, yarn, pnpm, and bun, with bun taking meaningful share. Or it might fade like a dozen other "X but faster" projects have. I'm genuinely not sure which way it goes.

Monorepos: The Killing Field

Monorepos are where the differences between these tools stop being academic.

In a monorepo, you have multiple packages in the same git repository — usually a couple of apps, a couple of libraries, and some shared tooling. The package manager has to know how to handle the relationships between these internal packages, install their external dependencies efficiently, and let you run scripts across them.

npm workspaces are functional. npm install from the root installs everything, internal dependencies get symlinked, scripts can be run with npm run --workspace=foo build. It's fine for small monorepos. It starts to feel limited around the time you have 20+ packages.
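The whole npm workspaces setup is a couple of fields in the root package.json (directory names here are illustrative):

```json
{
  "name": "acme-monorepo",
  "private": true,
  "workspaces": ["apps/*", "packages/*"]
}
```

From there, npm install at the root links everything, and npm run build --workspace=apps/web targets a single package.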

Yarn workspaces were the original and are still solid. Yarn Berry's workspace tooling is genuinely powerful — constraints, focused workspaces (where you only install dependencies for one workspace), workspace-aware version bumping. If you're committed to Yarn Berry, the monorepo story is great.

pnpm workspaces are widely considered the best of the three. The configuration is in pnpm-workspace.yaml, the --filter flag lets you target operations precisely (pnpm --filter "@acme/web..." build builds the web app and everything it depends on), and the symlink-based store means duplicate dependencies across workspaces are deduplicated automatically. If you're starting a new monorepo today, the path of least resistance is pnpm + Turborepo or pnpm + Nx.
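A minimal pnpm-workspace.yaml (directory names are illustrative):

```yaml
# pnpm-workspace.yaml at the repo root
packages:
  - "apps/*"
  - "packages/*"
```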

I should be honest that for any of these, you probably want a monorepo task runner on top — Turborepo, Nx, or Lerna (Lerna is mostly maintenance mode but still works). The package manager handles installation; the task runner handles cached builds, parallel execution, and dependency-aware task graphs. The combination of pnpm + Turborepo has become the default for serious monorepos, and there's a reason for that.

Security: All Three Are About the Same

This section is shorter than you'd expect. All three package managers have an audit command. All three pull from the same npm registry, which means they're all subject to the same supply-chain risks. The notable JavaScript security incidents — event-stream in 2018, ua-parser-js in 2021, the colors/faker meltdown in 2022, the various typosquatting and dependency confusion attacks since — affected all three equally, because the root cause was always the registry, not the package manager.

A few real differences:

npm has had provenance for a while, where packages can be cryptographically attested to come from a specific GitHub Actions build. This is useful but adoption is still spotty. Yarn and pnpm respect provenance metadata but the feature is npm-driven.

pnpm's strictness gives a small security benefit: if a malicious package gets installed as a transitive dependency, you can't accidentally require it from your own code, because phantom dependencies are blocked. This isn't a defense against the package being executed during install (that's a different attack surface), but it does limit the blast radius.

Bun has the most aggressive security model in some ways — by default it doesn't run install scripts for unknown packages — but this also breaks some legitimate packages, so people often turn it off.

If security is your concern, the package manager you pick matters less than: keeping dependencies up to date, running audit in CI, using a tool like Renovate or Dependabot, pinning versions for production builds, and being skeptical of packages with one maintainer and three weekly downloads.

So What Should You Actually Use

I have opinions. Here they are.

For a solo project, prototype, tutorial repo, or anything throwaway: npm. It's there. It works. Don't overthink it. The minutes you'd spend setting up pnpm are not coming back.

For a small team (under five engineers) on a single application: npm or pnpm. If your team has any pnpm experience, pick pnpm — the disk savings and strictness are nice. If nobody has used it, the small productivity hit of switching probably isn't worth it for a project of this size.

For a medium-to-large team on a single application: pnpm. The strictness alone pays for itself the first time it catches a phantom dependency before production. The faster CI installs add up over thousands of CI runs. The disk savings on every developer's laptop add up too.

For any monorepo: pnpm, almost certainly with Turborepo on top. This is the closest thing to a no-brainer in this whole article.

For an open-source library: npm. Your contributors will arrive with npm installed. Don't make them learn another tool to fix a typo in your README. (You can use whatever you want internally; just make sure npm install from the published package works for your users. It will, because the published artifact is just a tarball — the package manager you authored with doesn't propagate.)

For an existing project on Yarn Classic: Migrate to pnpm if it's painless (most of mine were). Stay on Yarn 1 if migration would touch a lot of CI scripts and the project is mature enough that "if it ain't broke" applies.

For an existing project on Yarn Berry: Stay. Don't churn. The migration to pnpm will hit you in places you don't expect, and Yarn Berry is still actively maintained and excellent.

For Bun-curious folks: Try it on a side project first. Don't migrate a production codebase to Bun without a real evaluation period.

The Stuff Nobody Tells You

A few things I've learned from switching package managers more times than I should have:

Lockfiles are not interchangeable. If you switch from npm to pnpm, you cannot keep the package-lock.json around "just in case." Pick one. Commit it. Delete the others. Having both package-lock.json and pnpm-lock.yaml in the same repo is a recipe for confusion and divergent installs.

Engines field in package.json is more important than you think. "engines": {"node": ">=20", "pnpm": ">=9"} plus "packageManager": "pnpm@9.7.0" plus a Corepack-aware setup means new contributors get the right tooling without reading the README. Half the "it doesn't work on my machine" issues I've seen were someone running npm install on a project that expected pnpm install.
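Put together, the pinning looks like this (versions are examples):

```json
{
  "engines": {
    "node": ">=20",
    "pnpm": ">=9"
  },
  "packageManager": "pnpm@9.7.0"
}
```

Add engine-strict=true to your .npmrc if you want npm to hard-fail on an engines mismatch instead of just printing a warning.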

Corepack is your friend. Corepack ships with Node 16+ and lets you specify which package manager (and version) a project uses, so contributors don't need to install pnpm or yarn globally. corepack enable once, and projects just work with their declared package manager. This has been around for years and is still under-used.

.npmrc is read by all of them. Most config options work across npm, yarn, and pnpm because pnpm and yarn both read .npmrc. Your registry config, auth tokens, scoped registry settings — these usually port over without changes.

CI is where speed actually matters. On your laptop, an extra 30 seconds of install time is annoying. On CI, where every PR triggers an install, it's hundreds of dollars a month and minutes added to feedback loops. Use lockfile-aware installs in CI: npm ci, yarn install --frozen-lockfile (or yarn install --immutable on Berry), pnpm install --frozen-lockfile. Cache the package manager's store, not node_modules. The numbers add up.
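As a sketch, a pnpm-flavored install step in GitHub Actions looks something like this (action versions are whatever's current for you):

```yaml
# Sketch of a CI install step for pnpm (GitHub Actions syntax)
steps:
  - uses: actions/checkout@v4
  - uses: pnpm/action-setup@v4       # installs pnpm (reads packageManager field)
  - uses: actions/setup-node@v4
    with:
      node-version: 20
      cache: pnpm                    # caches the pnpm store, not node_modules
  - run: pnpm install --frozen-lockfile
```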

Don't put package manager opinions in code reviews. I have seen so many PRs where someone added a feature and someone else commented "btw should we move to pnpm" and then the PR derailed for two weeks. Have the conversation in a separate channel. Don't make every feature PR a referendum on tooling.

The Honest Conclusion

I started this article saying that I stopped caring about package managers and then started caring again. The truth is somewhere in between.

You should care enough to pick the right tool for your project size. You should not care so much that you spend a sprint migrating tools when you should be shipping features. The differences are real but they are not life-or-death. A team using npm with discipline will outship a team using pnpm with chaos every single time.

If I had to compress this whole article into one sentence, it would be: use pnpm for anything serious, use npm for anything quick, watch Bun closely, and never touch Yarn Classic for new work. That is, in 2026, my opinionated take. In 2028 it'll probably be different. The JavaScript ecosystem rewrites itself every five years and that's both its biggest weakness and the reason I still find it interesting to write code in.

The 2.4-gigabyte node_modules from the start of this article? After we migrated that project to pnpm with proper workspace structure, it dropped to 380 megabytes of project-specific links pointing into the global store. The first install on a new laptop went from ninety seconds to twenty-two. CI install times dropped from ninety seconds to fourteen seconds, on average, across about 40,000 builds per month. Nobody on the team has noticed. That's the point. Good tooling is invisible. The best package manager is the one that lets you forget there's a package manager at all.

Now go finish whatever you were procrastinating on by reading this.
