I almost picked wrong.
We were rebuilding our frontend platform at work — five apps, three shared component libraries, a design token package, and a CLI tool that nobody fully understood anymore. Classic monorepo chaos. I spent about two weeks seriously evaluating both Turborepo (2.4) and Nx (21.2) before committing, and I kept flip-flopping right up until the last day. This post is my honest account of that process, including the part where Nx confused me for three hours straight before I figured out I was looking at the wrong documentation version.
Both tools are genuinely good. That's the frustrating thing. Choosing between them isn't about which one is broken — it's about which flavor of complexity you're willing to live with.
The Codebase I Actually Used for Testing
Context matters here. I wasn't doing a toy "hello world" comparison. The repo I was migrating had:
- ~340k lines of TypeScript across everything
- A Next.js app, a Vite/React app, a Node API, and two internal tooling packages
- A team of 9 engineers, about 4 of whom regularly touch the monorepo config
- CI on GitHub Actions, currently averaging around 18 minutes per full build (we wanted that under 8)
I set up each tool in a branch, ran them both through our actual CI pipeline, and lived with them daily for a week each. I also had a side project — a smaller, four-package monorepo I maintain alone — where I could experiment more aggressively without breaking anyone else's Friday.
Worth mentioning: I was already on pnpm workspaces for both codebases. That affects the setup experience pretty significantly.
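For context, the workspace layout both tools built on looked roughly like this (the globs here are illustrative, not our exact ones):

```yaml
# pnpm-workspace.yaml — pnpm's package discovery globs.
# Turborepo reads this directly to find workspace packages.
packages:
  - "apps/*"
  - "packages/*"
```

If you're starting from scratch instead of an existing pnpm workspace, both tools will scaffold this for you, but migrating an existing one is mostly painless.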
Turborepo: Fast to Start, Occasionally Frustrating to Debug
The onboarding story for Turborepo is genuinely good. You run npx create-turbo@latest or drop a turbo.json into an existing workspace, define your task pipeline, and you're off. For our existing codebase, I had a working incremental build — with local caching — inside about two hours. That includes me reading the docs carefully, not skimming.
The turbo.json pipeline definition is expressive but approachable. Here's roughly what ours looked like after the first pass:
```json
{
  "$schema": "https://turbo.build/schema.json",
  "tasks": {
    "build": {
      "dependsOn": ["^build"], // ^ means "wait for dependencies to build first"
      "outputs": [".next/**", "dist/**"]
    },
    "test": {
      "dependsOn": ["^build"],
      "cache": true,
      "inputs": ["src/**/*.ts", "src/**/*.tsx", "**/*.test.ts"]
    },
    "type-check": {
      "dependsOn": ["^build"],
      "cache": true
    },
    "lint": {
      "cache": true,
      "inputs": ["src/**/*.ts", "**/.eslintrc*"]
    }
  }
}
```
The inputs field on test and lint tasks was something I underutilized at first. Once I started being specific about what actually affects cache invalidation, our hit rate went from about 60% to around 88% on typical PRs. That's the kind of tuning that pays off.
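As a sketch of what that tuning looked like: Turborepo 2.x has a $TURBO_DEFAULT$ microsyntax that keeps the tool's default input set and lets you subtract from it, which is usually safer than enumerating globs by hand (exact exclusions here are illustrative, not our full list):

```json
{
  "tasks": {
    "lint": {
      "cache": true,
      // Start from Turborepo's default inputs, then exclude files that
      // can't change lint results, so a docs edit doesn't bust the cache.
      "inputs": ["$TURBO_DEFAULT$", "!**/*.md"]
    }
  }
}
```

The negation patterns are where most of the hit-rate improvement came from for us.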
What genuinely caught me off guard was how opaque the cache miss debugging is. You get MISS in the output and that's... kind of it? There's a --verbosity=2 flag that helps, and the newer turbo run build --summarize output is better than it used to be. But I spent a full afternoon on a Friday trying to figure out why one package kept missing the cache. Turned out I had a .env.local file getting picked up as an implicit input. Totally reasonable behavior in retrospect, but the path from "why is this missing" to "oh, it's that file" was longer than it should have been.
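One trick that would have saved me that afternoon: turbo run build --dry-run=json emits a machine-readable summary you can grep through instead of eyeballing terminal output. A minimal sketch of that, assuming the tasks[].taskId and tasks[].cache.status fields I saw on Turborepo 2.x (the exact shape varies by version, so check your own output first):

```typescript
// cacheMisses.ts — list tasks that would miss the cache, from the output of:
//   turbo run build --dry-run=json
// The field names below (tasks[].taskId, tasks[].cache.status) are what I
// observed on Turborepo 2.x; treat them as an assumption, not a contract.

interface TurboTask {
  taskId?: string;
  cache?: { status?: string };
}

interface DryRunSummary {
  tasks?: TurboTask[];
}

// Return the taskIds whose cache status is MISS.
function reportMisses(summary: DryRunSummary): string[] {
  return (summary.tasks ?? [])
    .filter((t) => t.cache?.status === "MISS")
    .map((t) => t.taskId ?? "<unknown>");
}

// Inline sample so this runs standalone; in practice, parse the real
// dry-run JSON from stdin or a file.
const sample: DryRunSummary = {
  tasks: [
    { taskId: "web#build", cache: { status: "HIT" } },
    { taskId: "design-tokens#build", cache: { status: "MISS" } },
  ],
};

console.log(reportMisses(sample).join("\n")); // design-tokens#build
```

Pairing this with the hash and inputs fields in the same summary is how you eventually spot things like a stray .env.local sneaking into the input set.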
Remote caching through Vercel is great if you're already on Vercel. We're not. The self-hosted remote cache options have improved — there are solid open source implementations like ducktors' turborepo-remote-cache and a few others — but it's an extra setup step that Nx handles more natively in my experience.
If your team is small-to-medium, your stack is mostly Next.js or Vite, and you want to be productive fast, Turborepo will not let you down. The ceiling is real but it's high enough for most teams.
Nx: More Power, More Everything (Including More Config)
Nx is a different kind of beast. Calling them comparable tools is like calling a Swiss Army knife and a chef's knife comparable cutting implements — they both cut things, but they're optimized differently.
The thing that immediately differentiates Nx is the project graph and the nx affected command. When you run nx affected -t test, Nx doesn't just look at which packages have changed — it understands the actual dependency graph of your workspace and runs tests only for packages that could be affected by your changes. Not just "this package changed" but "this package imports from that package which changed." On our 9-person team, this cut our average CI test time dramatically on feature branches. We went from running ~220 test suites to ~30-60 depending on what changed.
Turborepo does affected filtering too, via --filter=...[HEAD^1]-style git flags (and, as of 2.0, a dedicated --affected flag), but Nx's implementation felt more reliable and more deeply integrated into how the tool thinks about your workspace.
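In CI, the affected workflow needs full git history to diff against the base branch. A hedged sketch of the GitHub Actions wiring (branch name and action versions here are illustrative, not prescriptive):

```yaml
# fetch-depth: 0 gives Nx the history it needs to compute what changed
# relative to the base branch; a shallow clone breaks affected detection.
- uses: actions/checkout@v4
  with:
    fetch-depth: 0
- run: npx nx affected -t lint test build --base=origin/main --head=HEAD
```

Getting the base ref right on pull requests versus pushes is the fiddly part; it's worth sorting out early, because a wrong base silently runs either too much or too little.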
The generator system is either a massive win or annoying overhead depending on your perspective. Running nx g @nx/react:library my-ui-lib scaffolds a complete package with correct tsconfig references, barrel exports, jest config, and a README. I thought this would feel patronizing — I know how to set up a package — but after the third time I had to bootstrap a shared utility library from scratch, I came around. The consistency is actually the point.
Here's an example of the Nx project config for one of our packages. Nx uses either project.json or inline targets in package.json:
```json
// packages/design-tokens/project.json
{
  "name": "design-tokens",
  "$schema": "../../node_modules/nx/schemas/project-schema.json",
  "sourceRoot": "packages/design-tokens/src",
  "targets": {
    "build": {
      "executor": "@nx/js:tsc",
      "outputs": ["{options.outputPath}"],
      "options": {
        "outputPath": "dist/packages/design-tokens",
        "main": "packages/design-tokens/src/index.ts",
        "tsConfig": "packages/design-tokens/tsconfig.lib.json"
      }
    },
    "test": {
      "executor": "@nx/jest:jest",
      "outputs": ["{workspaceRoot}/coverage/packages/design-tokens"],
      "options": {
        "jestConfig": "packages/design-tokens/jest.config.ts"
      }
    }
  }
}
```
More verbose than Turborepo's turbo.json approach? Yes. But the executor abstraction means Nx can do things Turborepo can't — like running builds through Nx Cloud's distributed task execution, which farms individual tasks out to multiple CI agents. For large repos, this is not a minor feature.
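The verbosity can be tamed somewhat. Recent Nx versions let shared defaults live in nx.json via targetDefaults, so individual project.json files only declare what's unusual about that package. A sketch under that assumption:

```json
// nx.json (fragment) — defaults applied to every project's matching target,
// so most packages don't need to repeat dependsOn or caching config.
{
  "targetDefaults": {
    "build": {
      "dependsOn": ["^build"],
      "cache": true
    },
    "test": {
      "cache": true
    }
  }
}
```

We didn't lean on this as hard as we could have during the evaluation, which probably made Nx look more verbose than a tuned setup would be.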
The downside is the learning curve is genuinely steep. The docs are extensive but I hit multiple moments of "I know what I want to do but I can't find the right term to search for." I spent three hours once trying to figure out why my inferred targets weren't picking up my vite.config.ts — the issue was that I'd installed the wrong version of @nx/vite for my Nx core version, and the error message pointed me in a completely wrong direction. I'm not sure if they've fixed the version-mismatch error messaging since then, but it bit me hard enough that I remember exactly which week it was.
Also: Nx Cloud's pricing has gotten more reasonable in 2025-2026, but it's still a conversation you'll have with your finance team. The free tier is generous for smaller teams but you'll hit limits on larger repos.
Build Speed Wasn't the Real Decision
Midway through week two — I was testing Nx at this point — I tried to benchmark actual build times side by side. Both configured to use remote caching, both with warm caches, building the same set of packages. The times were surprisingly close. Within 15% of each other on a cold build, essentially identical on a warm cache hit.
I thought the whole decision would come down to build speed. It did not.
What actually differed was the day-to-day experience of maintaining the monorepo over time. Who updates the configs when you add a new package? How painful is it to onboard an engineer who hasn't touched the build system? What happens when a plugin doesn't support the new version of a framework you just upgraded to?
Turborepo ages gracefully because there's genuinely less of it. When something breaks, there are fewer moving parts to inspect. When a new engineer joins and asks "how does CI work?", I can show them turbo.json and they get it in ten minutes.
Nx ages powerfully but demands maintenance. The plugin ecosystem means you get a lot for free — until you're stuck waiting for an @nx/next update to support the version of Next.js you just upgraded to, while Turborepo users just... don't have that problem, because Turborepo doesn't own your framework integration.
I pushed an Nx plugin upgrade on a Friday afternoon once (yes, I know) and it silently broke the executor for one of our Vite apps. Didn't catch it until Monday morning. That specific failure wasn't entirely Nx's fault — there was a peer dependency issue I should have caught — but Turborepo's thinner abstraction layer means there are fewer of those landmines to step on.
My Call
Use Turborepo if your team is under ~15 engineers, you're building primarily JS/TS web apps, and you want something you can fully understand and debug yourself. The configuration surface is small enough that you can hold the whole mental model in your head. Remote caching works well with self-hosted options if you're not on Vercel. You'll get 80% of the benefit with 20% of the complexity.
Use Nx if you're running a larger engineering org (20+ engineers), you have polyglot needs (Angular, React, Node, maybe even some Go or Rust tooling through custom executors), or you genuinely need distributed task execution. The nx affected intelligence pays bigger dividends at scale — it's more valuable on a 60-package repo than a 10-package one. Accept the learning curve, invest in it properly, and it pays back.
For us — 9 engineers, mostly TypeScript, primarily web — I went with Turborepo 2.4. We got our CI from 18 minutes down to 6 minutes with a warm remote cache, and the config is something every engineer on the team can read and understand without me needing to explain it. That last part turned out to matter more than I expected.
Nx is the more powerful tool. Turborepo is the more maintainable one for our situation. Your numbers might push you the other way, and if your team is already comfortable with Nx's model or you're inheriting an existing Nx setup, there's no reason to migrate — the tool is excellent. But if you're starting fresh and you're not sure yet whether you need everything Nx offers, start with Turborepo and migrate later if you hit the ceiling. Going from Turbo to Nx is more straightforward than the reverse.
Whichever you pick — actually configure the remote cache. Local caching alone leaves a lot of speed on the table, and it's the single highest-leverage thing you can do once the basic pipeline is working.
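For a self-hosted Turborepo cache, that wiring is mostly environment variables in CI. The endpoint URL and team name below are placeholders, but TURBO_API, TURBO_TOKEN, and TURBO_TEAM are the variables the tool reads:

```shell
# CI environment for pointing turbo at a self-hosted remote cache.
# URL, token, and team values here are placeholders — substitute your own.
export TURBO_API="https://turbo-cache.internal.example.com"
export TURBO_TOKEN="replace-with-your-cache-token"
export TURBO_TEAM="frontend-platform"
```

For Nx, the equivalent step is connecting the workspace to Nx Cloud (or a compatible self-hosted runner); either way, budget an afternoon for it and you'll earn that time back within the week.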