Juan Torchia

Posted on • Originally published at juanchi.dev

What They Learned Building a Rust Runtime for TypeScript — and What I Can't See Objectively

In 2022, I brought a query down from 40 seconds to 80ms with a composite index. That day taught me something no tutorial ever had: high-performance systems aren't built by adding more code — they're built by eliminating friction. I didn't add anything to the system. I added metadata, an auxiliary structure that let the database engine do less work. When I read about a team that built a Rust runtime for TypeScript, that's what I think about. Not the glamour of Rust. The decision of where to put the friction.

Before I get into it: I already wrote about deadlocks and Surelock in Rust last week, and I already got burned. I've also spent several posts working through 9 TypeScript patterns. I'm biased in both directions, and I know it. I'm going to try to read this as cleanly as I can anyway.

Rust Runtime for TypeScript Performance: What We're Actually Talking About

The project took the TypeScript runtime — the layer that executes your transpiled TS code — and replaced critical parts of the pipeline with Rust implementations. It's not a new transpiler. It's not a full compiler. It's surgical: they identified the specific bottlenecks in the execution process and reimplemented them in a language with no garbage collector, manual memory control, and zero-cost abstractions.

Their reported results: latency reductions up to 10x on I/O-heavy operations. Faster cold starts. Lower memory footprint in lambdas.

That all sounds incredible. And parts of it are incredible. But there are details that bother me.

The Three Design Decisions I Think Are Wrong

1. The FFI Boundary Is in the Wrong Place

When you mix Rust with another runtime, you have to decide where the boundary between the two worlds lives. They chose to expose the interface at the string serialization level — meaning data crosses the boundary as JSON strings that then get deserialized on the Rust side.

That's a problem. JSON parsing isn't free. In high-frequency operations, you pay the serialization/deserialization cost on every single call. It's the equivalent of building a brilliant caching system and then wrapping every read in an unnecessary compression layer.

// What the boundary does in their implementation (approximation)
const result = await rustRuntime.execute(
  JSON.stringify(payload) // ← this is the problem
);
const parsed = JSON.parse(result); // ← and so is this

// What it should do: typed binary protocol
// MessagePack, FlatBuffers, or directly shared memory
// to avoid serialization on the hot path

The right alternative, in my opinion, is a typed binary protocol — MessagePack or FlatBuffers — or directly working with shared memory for the hot path. JSON overhead in a high-performance runtime is exactly the kind of friction you're trying to eliminate.
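To make that concrete, here's a minimal sketch of what a binary boundary could look like — a fixed byte layout encoded with `DataView` instead of JSON. The layout and the idea of a buffer-based entry point are my assumptions for illustration; none of these names come from the project:

```typescript
// Sketch: a fixed binary layout — [u32 id][f64 amount] — crossing the
// boundary as raw bytes. No JSON parse on either side of the hot path.

function encodePayload(id: number, amount: number): Uint8Array {
  const buf = new ArrayBuffer(12);
  const view = new DataView(buf);
  view.setUint32(0, id, true);      // little-endian u32 at offset 0
  view.setFloat64(4, amount, true); // little-endian f64 at offset 4
  return new Uint8Array(buf);
}

function decodePayload(bytes: Uint8Array): { id: number; amount: number } {
  const view = new DataView(bytes.buffer, bytes.byteOffset, bytes.byteLength);
  return {
    id: view.getUint32(0, true),
    amount: view.getFloat64(4, true),
  };
}
```

The point isn't this specific layout — MessagePack or FlatBuffers would generalize it — it's that the boundary speaks bytes, so neither side pays a parse per call.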

2. The Threading Model Assumes a Usage Pattern That Isn't Universal

Rust has a concurrency model that's genuinely superior for many cases. But the TypeScript runtime has a single-threaded event loop by design. The team's decision was to use a Rust thread pool to handle parallel operations, with the coordination logic on the TypeScript side.

The problem: that inverts the control hierarchy. TypeScript ends up being the orchestrator of a system that should be coordinated from Rust. It's like putting the HTTP client in charge of routing decisions in your microservices architecture — technically it works, but the responsibility is in the wrong place.

For CPU-bound workloads this doesn't matter much. For I/O-bound workloads with high concurrency — which is exactly where TypeScript shines today — the coordination overhead can eat the gains whole.
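A hedged sketch of the inversion described above — `RustPool` and `run` are hypothetical names I'm using for illustration, not the project's API:

```typescript
// Who coordinates? The single-threaded JS event loop fans out, awaits, and
// merges; the Rust pool only executes leaf tasks. With many small tasks,
// each one costs a boundary crossing plus a promise resolution here.

interface RustPool {
  run(task: Uint8Array): Promise<Uint8Array>;
}

async function coordinateFromTs(
  pool: RustPool,
  tasks: Uint8Array[]
): Promise<Uint8Array[]> {
  // All scheduling, ordering, and error handling lives in TypeScript.
  return Promise.all(tasks.map((t) => pool.run(t)));
}
```

If the coordination lived on the Rust side, TypeScript would make one call with the whole task set and the thread pool would do its own fan-out — no per-task crossings.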

3. Hot Reload Is Broken by Design

This is more pragmatic than architectural, but I think it matters: the development cycle with this runtime is noticeably slower. Every time you change TypeScript code that touches the Rust boundary, you need to recompile. In local development that can mean 30-60 seconds of waiting.

I know that doesn't matter in production. But development time does matter. If a "high-performance" runtime makes your devs 30% less productive during development, the trade-off isn't nearly as clear as the benchmarks make it look.

It's the same problem I see when talking about adopting new languages — technical performance doesn't live in a vacuum. It lives inside a team, with workflows, with feedback cycles.

The One Decision That's Genuinely Brilliant

Now, the good stuff: what I think is actually brilliant.

The team decided that Rust would never touch TypeScript's object model. Never. The Rust layer is completely opaque to the TS type system — it knows nothing about classes, interfaces, or generics. It only speaks in buffers and operations.

That sounds like a limitation. In reality it's an enormous strength.

It means the Rust runtime can be updated independently of the TypeScript ecosystem. When TypeScript 6 ships with changes to the type system (and it will), the Rust runtime won't need to update. The abstraction barrier is so clean that both systems can evolve independently.

// The Rust runtime knows nothing about this
// (Identifiable is sketched here so the example is self-contained):
interface Identifiable {
  id: string;
}

interface User<T extends Identifiable> {
  data: T;
  metadata: Record<string, unknown>;
}

// It only sees this:
// [u8; N] — a byte buffer with a size
// That's it. No types. No objects. No inheritance.
pub fn process_buffer(input: &[u8]) -> Vec<u8> {
    // low-level logic completely agnostic to the domain
    // no coupling to the TypeScript type system whatsoever
    input.iter().map(|&b| b.wrapping_add(1)).collect()
}

That design decision — keeping the Rust layer completely domain-agnostic — is exactly the kind of thing that separates a well-designed system from one that's going to be a headache in 3 years.

It reminds me of what we do with Docker: the container knows nothing about your application. It only knows about processes, networks, volumes. If you want to go deeper on that philosophy of abstraction, there are curated Docker resources for beginners that explore exactly that idea.

The Benchmarks They Don't Show You

Every time I see performance benchmarks, I look for what's not in the graph. In this case:

They don't show the 99th percentile. They show p50 and p95. The p99 — where your worst-experience users live — is absent. In systems with intermittent garbage collection (like the JS runtime they're replacing), p99 can be 10x p95. If the Rust runtime improves p99 as much as it improves p50, that's the number that should be in the headline.

They don't show the impact on memory errors. Rust eliminates an entire class of bugs — use-after-free, double-free, data races. That has real production value that never shows up in latency benchmarks. It's the same kind of invisible benefit that surfaces in systems like the real-time bus visualizer I built — the interesting part isn't always the one you can easily measure.

They don't show onboarding cost. How many TypeScript developers on your team can debug a problem in the Rust layer? Probably none, or very few. That doesn't appear in any benchmark, but it's real.
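The p99 point is easy to demonstrate with a toy sample — all numbers here are illustrative, not from the project's benchmarks:

```typescript
// Nearest-rank percentile over a latency sample (ms). A handful of
// GC-paused outliers leaves p50 and p95 untouched while p99 explodes.

function percentile(samples: number[], p: number): number {
  const sorted = [...samples].sort((a, b) => a - b);
  const idx = Math.min(
    sorted.length - 1,
    Math.ceil((p / 100) * sorted.length) - 1
  );
  return sorted[idx];
}

// 98 fast requests at 10ms, 2 GC pauses at 900ms:
const latencies = [...Array(98).fill(10), 900, 900];
// percentile(latencies, 50) → 10
// percentile(latencies, 95) → 10
// percentile(latencies, 99) → 900
```

A report that only shows p50 and p95 for this sample would claim a flat 10ms system. The 900ms pauses — the thing a GC-free runtime actually fixes — only show up at p99.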

Why I Still Think This Is Interesting

Despite everything I just said, I think the experiment is valuable. Not because of the numbers — because of the question it asks.

How much of the performance we lose in TypeScript systems is inherent to the language, and how much is the runtime implementation? That question has enormous implications for how we design systems.

If the bottleneck is the runtime's memory model, Rust can help. If the bottleneck is your API design, your unindexed queries, your cache architecture — Rust isn't going to change anything. And that's something I learned in the most concrete way possible in 2022.

In that sense, it connects to what I see in on-device AI projects like what Apple is doing with local models — sometimes a performance constraint forces you into design decisions that turn out to be correct for completely different reasons than you expected.

It also reminds me of the transport data sonifier I built: when you have a real performance constraint (processing thousands of GTFS-RT events in real time), the trade-offs get concrete very fast. Theory evaporates.

FAQ: Rust Runtime TypeScript Performance

Do I need to learn Rust to use a Rust runtime for TypeScript?
Not to use it. Yes to debug it. This is the most common trap: you adopt the technology in production and when something fails in the Rust layer, your team doesn't have the tools to diagnose it. If you're going to adopt this, you need at least one person with Rust knowledge who can read stack traces and understand the memory model.

In what cases does a Rust runtime actually make sense for TypeScript performance?
Cases where the bottleneck is CPU-bound with repetitive low-level operations: protocol parsing, binary data encoding/decoding, cryptography, compression. For typical REST APIs with a database, the bottleneck is almost always in the queries or network I/O — Rust won't help you much there.

What's the difference from Deno or Bun, which also have high-performance components?
Deno uses Rust internally but the programming model is completely TypeScript — you never expose the Rust layer to the developer. Bun uses Zig. What the project in this post does is different: it creates an explicit boundary between TypeScript and Rust that the developer has to manage. More control, more complexity.

Does the FFI (Foreign Function Interface) overhead between TypeScript and Rust cancel out the gains?
Depends on the granularity of the calls. If you cross the boundary once per request with a large payload, the overhead is negligible. If you cross the boundary thousands of times per request with small payloads, it can end up worse than not using Rust at all. The boundary design is probably the single most critical decision in the entire architecture.
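One standard way to shift that trade-off is micro-batching: coalesce calls made in the same event-loop tick into a single crossing. A sketch, assuming a hypothetical `rustBatch` entry point that accepts an array of payloads (not the project's real API):

```typescript
// Micro-batching wrapper: N synchronous calls in one tick become a single
// boundary crossing. Each caller still gets its own promise back.

type Resolve = (out: Uint8Array) => void;

function makeBatcher(
  rustBatch: (batch: Uint8Array[]) => Promise<Uint8Array[]>
) {
  let pending: { input: Uint8Array; resolve: Resolve }[] = [];
  let scheduled = false;

  return function call(input: Uint8Array): Promise<Uint8Array> {
    return new Promise((resolve) => {
      pending.push({ input, resolve });
      if (!scheduled) {
        scheduled = true;
        queueMicrotask(async () => {
          const batch = pending;
          pending = [];
          scheduled = false;
          // one boundary crossing for the whole batch
          const outputs = await rustBatch(batch.map((p) => p.input));
          outputs.forEach((out, i) => batch[i].resolve(out));
        });
      }
    });
  };
}
```

The callers' code doesn't change; the per-call fixed overhead gets amortized across the batch. The latency cost is one microtask of delay, which is usually acceptable exactly in the high-frequency scenarios where the overhead hurt.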

Is this comparable to WASM for TypeScript?
WebAssembly is conceptually similar but with different constraints. WASM can run in the browser and on the server, has a sandboxed security model, and has better tooling support today. Rust-to-native has less overhead and more OS access. For serverless and edge computing, WASM probably wins on operational simplicity.

Is it worth the jump if I already have well-optimized TypeScript?
Probably not, unless you've exhausted the standard optimizations: database indexes, caching, lazy loading, Node's native worker threads. Most TypeScript systems that "need Rust" actually need a DBA to look at the queries or someone to actually read the profiler carefully. I brought 40 seconds down to 80ms without touching the language — just metadata.

The Runtime Is the Wrong Question

After reading this line by line, here's where I land: building a Rust runtime for TypeScript is technically fascinating and probably unsuitable for 95% of the use cases where people will want to apply it.

The decision to keep Rust completely agnostic to the TypeScript type system is brilliant and should influence how we think about abstraction boundaries in general. The decisions around FFI, threading, and developer experience have room for improvement, and I hope future versions address them.

But more than anything: if you're looking at this thinking "this is going to solve my performance problems" — first, run an actual profiler. Look at where your time is going. In 90% of cases, you'll find the problem isn't the runtime. It's an unindexed query, an external API call without a timeout, an array you're iterating twice when you could do it once.

Rust is an extraordinary tool for specific problems. And like every extraordinary tool, the biggest danger isn't using it wrong — it's using it on the wrong problem.

Are you running TypeScript in production at scale? Where did you find your real bottlenecks? I genuinely want to know.
