Moon Robert

Posted on • Originally published at blog.rebalai.com

Deno 2.0 in Production: Six Months of Migration From Node.js and What Actually Changed

I resisted Deno for years. Partly stubbornness, mostly because the original pitch — "what if Node but without npm" — felt like solving the wrong problem when you have a working product and a team that knows the Node ecosystem cold. Then Deno 2.0 dropped in October 2024 with full npm compatibility, and one of my teammates (Priya, our resident runtime nerd) kept saying "no seriously, look at the tooling story." I gave in.

We're a three-person backend team running a Node/TypeScript API that handles event processing for a SaaS product. Not huge — around 800 req/s peak, PostgreSQL backend, a handful of third-party integrations. The kind of service that's boring by design. I started the migration in September 2025 on a non-critical service as a proving ground, then moved one of our production APIs to Deno 2.2.x in January. Here's what I learned.

What "Node.js Compatible" Means in Practice (It's Not Magic)

The headline feature of Deno 2.0 was npm compatibility via the npm: specifier, and it works better than I expected — with a few important asterisks.

Most of your npm dependencies just... work. Express, Fastify, zod, axios, date-fns, jose — all loaded fine. You drop a deno.json in the root and reference them:

// deno.json
{
  "imports": {
    "express": "npm:express@^4.21.0",
    "@/": "./src/"
  },
  "tasks": {
    "dev": "deno run --allow-net --allow-read --allow-env src/main.ts",
    "test": "deno test --allow-net --allow-env"
  }
}

No package.json. No separate tsconfig.json. TypeScript works out of the box — Deno treats .ts files as first-class citizens, so you skip the whole ts-node or tsx setup dance. For a greenfield project this feels obvious. Migrating an existing codebase, you carry two setups for a while, which is annoying but manageable.

Where it gets complicated: native addons. We use a library with some native C++ bindings, and that required --allow-all plus some creative workarounds. Deno 2.x can handle a node_modules directory being present (you can run deno install and it'll populate one), but native addons are still an edge case that isn't fully ironed out. I ended up keeping that particular integration on a small Node sidecar. Not ideal, but the blast radius was small.

Prisma was the other thing. By early 2026 the Deno-Prisma story has improved considerably compared to 2024 — Prisma 6.x has much better first-class support — but there were edge cases around the query engine binary that took me an afternoon to debug. The GitHub issue tracker (prisma/prisma #24218, if you want to go spelunking) has the painful details.

If your dependency list is mostly pure-JS/TS packages, the migration is low-friction. If you have native addons or heavy meta-frameworks, scope that work separately before you commit.

The Tooling Story Is the Real Selling Point

I've written probably a dozen ESLint configs in my life. Each one slightly different. Each one with at least one person on the team who disagrees with the semicolon rule. Deno ships with a formatter (deno fmt) and a linter (deno lint) that are opinionated, fast, and require zero configuration. You just run them. No eslint.config.js, no .prettierrc, no argument about whether trailing commas go after the last function parameter.

The test runner is similarly no-ceremony:

// handlers/health_test.ts
import { assertEquals } from "jsr:@std/assert";
import { createApp } from "../src/app.ts";

Deno.test("GET /health returns 200", async () => {
  const app = createApp();
  const req = new Request("http://localhost/health");
  const res = await app.fetch(req);
  // No test framework config, no jest.config.ts, no separate setup file
  assertEquals(res.status, 200);
});

deno test picks that up automatically. For a small team where everyone's already stretched thin, removing that category of tooling toil mattered more than I expected.

deno compile is underrated for deployment. We can now ship a single self-contained binary of one of our smaller services, which simplified our Docker setup considerably. The binary is larger than I'd like — roughly 80MB for a small API — but cold starts are nonexistent and the deployment story is much cleaner.
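For context, the compile step slots into the same task runner as everything else. This is a hypothetical "build" task, not our actual config — the permission flags mirror the "dev" task shown earlier, and the output path is illustrative:

```json
// deno.json — hypothetical "build" task wiring in deno compile
{
  "tasks": {
    "build": "deno compile --allow-net --allow-read --allow-env --output dist/api src/main.ts"
  }
}
```

The resulting dist/api binary embeds the runtime and your code, which is why the Dockerfile collapses to little more than a COPY and an ENTRYPOINT.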

I know the permissions model is the first thing people push back on. I was annoyed by it initially — --allow-net and --allow-read and --allow-env feel like ceremony when you're trying to get something running fast. But I pushed a config change on a Friday afternoon that accidentally let a dependency try to write to /tmp in a way we hadn't expected, and the permission system caught it before it reached prod. You get a stack trace pointing exactly to where the filesystem access was attempted. In Node you'd have gotten nothing, until something went wrong downstream. I'm a convert.

The Night Something Actually Broke in Production

Most migration writeups skip this part. Here it is.

We had a memory leak. Not from our code — from a combination of our event loop handling and a third-party WebSocket library that had slightly different behavior under Deno's runtime than under Node. The leak was slow enough that it took about six hours of production traffic to manifest, which meant we caught it around 2am on a Tuesday. Not fun.

The root cause: Deno uses Web-standard APIs wherever possible. WebSocket in Deno behaves according to the browser spec, which is great for consistency, but some npm WebSocket libraries have code paths that assume Node's net/stream internals and fall back to different — in this case, leaky — behavior when those aren't present. The library in question was ws@8.x, and the fix was switching to Deno's native WebSocket API, which meant rewriting a chunk of our connection management layer.

What surprised me — and I thought I understood the compatibility layer well by that point — was how hard it was to spot in the heap snapshots. The leak showed up in what looked like native code, not in anything we'd written. I spent several hours convinced the problem was somewhere completely different before Priya ran a targeted reproduction script that isolated it.

I'm not 100% sure this would have surfaced the same way if we'd been on Deno.serve() from the start rather than routing through Express. My hunch is yes, because the underlying WebSocket handling was in a shared module. But the failure mode was confusing enough that I'm not confident.

What I'd do differently: before migrating, audit every package that touches network I/O or streams, and specifically check whether there are open issues about Deno compatibility. The npm compatibility layer is solid. It is not a guarantee.

Performance Reality Check — My Numbers, Not the Marketing Benchmarks

Every runtime publishes benchmarks showing itself winning. Here's what I actually saw on our hardware — a pair of AWS c6g.xlarge instances running ARM.

For a basic JSON API endpoint (fetch from Postgres, serialize, return):

  • Node 22.x + Fastify: ~42,000 req/s, ~12ms p99 latency
  • Deno 2.2 + Deno.serve(): ~47,000 req/s, ~10ms p99 latency

That's real — roughly 12% higher throughput in our workload. But in actual production, with real traffic patterns and database contention, it's more like 5-8% better. I'll take it. You're not cutting your infra bill in half.
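For reference, the Deno side of that benchmark was shaped like this — a stripped-down sketch with the Postgres call stubbed out; none of these names are our production code:

```typescript
// Minimal shape of the benchmarked endpoint: fetch rows, serialize, return.
type Event = { id: number; kind: string };

// Stand-in for the real parameterized Postgres query.
async function fetchEvents(): Promise<Event[]> {
  return [{ id: 1, kind: "signup" }];
}

export async function handler(_req: Request): Promise<Response> {
  const events = await fetchEvents();
  // Response.json sets the content-type header and serializes in one call.
  return Response.json(events);
}

// Serve it with: Deno.serve(handler);
```

Same shape on the Node side, just with Fastify's route handler instead of a plain `Request => Response` function.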

Startup time is where Deno shines for our use case. Node 22 with TypeScript (we were using tsx) added about 400ms to cold starts. Deno 2.2 TypeScript startup for the same code was around 80ms. If you're running short-lived workers or anything Lambda-adjacent, this matters a lot. For long-running API servers, not so much.

Memory usage was roughly comparable, maybe 10% lower under Deno in steady state. Not a deciding factor either way.

My Actual Recommendation

If you're starting a new TypeScript backend project right now, use Deno. The zero-config TypeScript, the built-in toolchain, the standards-first approach — these compound over the life of a project in ways that are hard to see upfront but very visible 12 months in when you're not fighting tooling debt. JSR has grown meaningfully since 2024, and the jsr:@std/* packages cover most common tasks solidly.

If you have an existing Node project, the calculus is harder. The migration path is real but not zero-cost, especially with native addons, heavy framework dependencies (don't try Next.js or Remix — seriously), or a large team that needs to internalize the permissions model. We spent about three weeks of total engineering time across the team on the migration, including the WebSocket incident. For a three-person team, that's non-trivial.

The thing that would push me toward migrating an existing project: if you're already planning TypeScript tooling work — upgrading tsconfig, moving off an old bundler, updating your test setup — bundle the Deno migration into that work. You're paying the context-switch cost anyway, so you might as well land somewhere better on the other side.

One place I'd still stay on Node: anything with a hard dependency on Express middleware you can't easily replace, or any project where native addons are a core dependency, not a peripheral one. The compatibility layer handles most things. Not everything.

We're keeping both migrated services on Deno, and I'm planning to move a third one over in Q2.
