Rust WASM vs TypeScript Performance: Why the "Faster" Language Lost by 25%
A software engineer named Radek built a high-performance JSON parser in Rust, compiled it to WebAssembly, and shipped it to the browser. Then he rewrote the whole thing in TypeScript. The TypeScript version was 25% faster. Not marginally. Not within noise. Twenty-five percent. If you care about Rust WASM vs TypeScript performance at all, this should bother you. It bothered me.
I've been building web-facing systems for over 14 years, and I've watched the WebAssembly narrative harden into something uncomfortably close to dogma: compiled language equals faster, therefore Rust plus WASM equals the performance play. That framing isn't wrong. But it's missing a huge piece, and that missing piece can cost you months of engineering effort for negative returns.
Here's what actually happened, why it happened, and when you should (and shouldn't) reach for WebAssembly.
What Happened: The Case Study That Broke the Narrative
Radek documented his findings in a case study titled "WASM is not a Silver Bullet". He set out to build a fast JSON parser. Rust was the obvious choice. It compiles to tight, predictable machine code. WebAssembly gives you a way to run that code in the browser at near-native speed. On paper, slam dunk.
The Rust implementation worked. It was correct. It was well-optimized. Then he rewrote it in TypeScript, benchmarked both, and the TypeScript version won by roughly 25%.
The critical detail: the performance bottleneck wasn't the Rust code itself. The Rust was fast. The problem was what happened every time the Rust code needed to talk to JavaScript. Parsing JSON means processing every key and every value. Each one required a call across the boundary between the JavaScript host environment and the WASM module. That boundary crossing has a cost. When you're making thousands of those crossings per document, those costs compound into a wall.
The overhead wasn't in the computation. It was in the conversation between two runtimes that were never designed to chat this frequently.
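To make the chattiness concrete, here's a minimal TypeScript sketch — not Radek's actual code, and the `parseToken`/`parseDocument` exports are hypothetical stand-ins (plain JS functions) for a real WASM module — that just counts boundary crossings under the two designs:

```typescript
// Sketch: counting JS/WASM boundary crossings for two parser designs.
// `wasm.parseToken` and `wasm.parseDocument` stand in for real WASM exports.
let crossings = 0;

const wasm = {
  // Chatty design: one boundary crossing per token.
  parseToken(token: string): string {
    crossings++;
    return token.trim();
  },
  // Batched design: one boundary crossing per document.
  parseDocument(doc: string): string[] {
    crossings++;
    return doc.split(",").map((t) => t.trim());
  },
};

const doc = Array.from({ length: 1000 }, (_, i) => ` key${i} `).join(",");

// Chatty: 1000 tokens means 1000 crossings.
crossings = 0;
doc.split(",").forEach((t) => wasm.parseToken(t));
const chattyCrossings = crossings;

// Batched: the same work, one crossing.
crossings = 0;
wasm.parseDocument(doc);
const batchedCrossings = crossings;
```

The work done per token is identical; the only difference is how often the bridge toll is paid. At 1000 crossings per document, even a cheap toll dominates.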
Is WebAssembly Faster Than JavaScript?
Everyone asks this, and the honest answer is: it depends entirely on what you're doing.
WebAssembly provides predictable, ahead-of-time compiled performance. No garbage collection pauses, no JIT warmup, no type speculation. For workloads that are CPU-bound and self-contained, WASM is genuinely faster. Cryptography, video encoding, 3D rendering, physics simulations. The Figma team's use of C++ compiled to WASM for their rendering engine is probably the canonical success story. They reported a 3x improvement in load time.
But here's the thing nobody's saying about Rust WASM vs TypeScript performance: for workloads that require frequent interaction with the JavaScript environment, the bridge overhead can erase every advantage WASM provides. Aaron Turner, Senior Software Engineer at Fastly, has written about this directly: if you're calling an export on a WASM module thousands of times a second, the cost of those calls adds up fast.
I've hit this wall myself. After shipping a feature that used a Rust WASM module for text processing in a web app, we found the serialization and deserialization across the boundary ate most of our gains. The Rust code ran in microseconds. The boundary crossings added milliseconds. We ended up rewriting the hot path in TypeScript, and just like Radek, the native JS version was measurably faster for our specific workload.
The lesson is almost disappointingly simple. If your data has to cross the JS/WASM bridge on every operation, you're paying a toll that compiled code speed can't always offset.
Why V8's JIT Compiler Makes This Even More Surprising
The other half of the story is how absurdly good modern JavaScript engines have become. V8 has over 15 years of optimization work behind it. Its JIT compiler, TurboFan, does things that would have seemed like science fiction a decade ago.
As documented on the V8 developer blog, the engine's scanner and parser for string-heavy operations have been aggressively optimized. String manipulation, object property access, JSON handling. These are the bread and butter of JavaScript workloads, and V8 has specialized fast paths for all of them.
Here's what matters for this comparison: when your TypeScript code processes JSON, V8 isn't interpreting it naively. It's profiling your hot functions, generating optimized machine code on the fly, inlining where possible, and leveraging type feedback to eliminate overhead. For a task like JSON parsing — fundamentally string manipulation and object construction — you're running on a track that V8 has been specifically tuned for.
Rust compiled to WASM doesn't get these domain-specific optimizations. It gets generic, ahead-of-time compiled code that's fast in absolute terms but doesn't benefit from the decade-plus of JavaScript-specific tuning V8 provides. You're bringing a perfectly good sports car to a race that's been designed around a completely different vehicle.
This is one of those things where the boring answer is actually the right one. The technology that's been optimized for your exact workload will usually beat the theoretically superior technology that hasn't. If you're interested in how performance at the systems level can surprise you, the same principle applies to memory allocators. The "better" tool isn't always the faster one for your specific use case.
When WebAssembly Actually Wins (And It's Not Close)
I don't want to leave the impression that WASM is overhyped. For the right workloads, it's the clear winner and nothing else comes close.
The pattern where WASM dominates has clear characteristics:
- Computationally intensive, long-running tasks. Image processing, video encoding, cryptographic operations. The computation dwarfs any boundary-crossing cost.
- Minimal JS interaction. You pass a large buffer in, the WASM module crunches it, you get a result back. One or two boundary crossings, not thousands.
- Predictable performance requirements. No GC pauses, no JIT warmup jitter. Real-time audio, game engines, physics sims.
- Existing C/C++/Rust codebases you want on the web. Porting a proven library without rewriting it in JavaScript. Honestly, this is WASM's killer use case and it doesn't get enough credit.
The Squoosh image compression app from Google is a great example. It runs codecs like MozJPEG and WebP entirely in WASM, processing large image buffers with minimal boundary crossings. Near-native compression performance, right in the browser.
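The shape of that pattern is easy to demonstrate end to end. The sketch below is not Squoosh's code — it hand-assembles a toy WASM module whose single `sum(ptr, len)` export adds up a byte buffer in linear memory. The point is the call pattern: copy the data in once, make one call, read one result. The boundary is crossed once no matter how large the buffer is.

```typescript
// A hand-assembled toy WASM module: exports linear memory as "mem" and a
// function "sum(ptr, len)" that adds up `len` bytes starting at `ptr`.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // \0asm, version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32,i32)->i32
  0x03, 0x02, 0x01, 0x00,                               // func 0 uses type 0
  0x05, 0x03, 0x01, 0x00, 0x01,                         // memory: min 1 page
  0x07, 0x0d, 0x02, 0x03, 0x6d, 0x65, 0x6d, 0x02, 0x00, // export "mem"
  0x03, 0x73, 0x75, 0x6d, 0x00, 0x00,                   // export "sum"
  0x0a, 0x2b, 0x01, 0x29, 0x01, 0x02, 0x7f,             // code: 2 i32 locals
  0x02, 0x40, 0x03, 0x40,                               // block; loop
  0x20, 0x02, 0x20, 0x01, 0x4f, 0x0d, 0x01,             // if i >= len, exit
  0x20, 0x03, 0x20, 0x00, 0x20, 0x02, 0x6a,             // acc, ptr + i
  0x2d, 0x00, 0x00, 0x6a, 0x21, 0x03,                   // acc += mem[ptr+i]
  0x20, 0x02, 0x41, 0x01, 0x6a, 0x21, 0x02,             // i += 1
  0x0c, 0x00, 0x0b, 0x0b,                               // br loop; end; end
  0x20, 0x03, 0x0b,                                     // return acc
]);

const instance = new WebAssembly.Instance(new WebAssembly.Module(bytes));
const { mem, sum } = instance.exports as {
  mem: WebAssembly.Memory;
  sum: (ptr: number, len: number) => number;
};

// One copy in, one call, one result out: a single boundary crossing,
// regardless of how much data the module crunches.
const data = new Uint8Array(50_000).fill(3);
new Uint8Array(mem.buffer).set(data, 0);
const total = sum(0, data.length);
```

Squoosh's codecs do the same thing at a much larger scale: the image buffer goes into linear memory once, the codec runs entirely inside WASM, and one compressed buffer comes back out.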
The anti-pattern is what Radek ran into: lots of small, frequent calls across the boundary. Think of it like a database. One query that returns 10,000 rows is fast. Ten thousand queries that each return one row will destroy your latency. The JS/WASM bridge has the same economics.
Having worked with distributed systems where similar boundary-cost trade-offs apply — like the overhead of crossing network boundaries in microservices — I can tell you this pattern is universal. The fastest code in the world doesn't help if you're spending all your time at the boundary.
What the JS/WASM Bridge Overhead Actually Costs
Let me get specific about what happens when JavaScript calls a WASM function. The engine has to:
- Convert JavaScript values to WASM-compatible types (serialization)
- Switch execution context to the WASM linear memory model
- Execute the WASM function
- Convert results back to JavaScript values (deserialization)
- Switch execution context back to JS
For numeric types, this is relatively cheap. Numbers map cleanly between JS and WASM. But for strings and complex objects — exactly the types a JSON parser works with constantly — there's real work involved. Strings in JavaScript are UTF-16 encoded, garbage-collected objects. In WASM's linear memory, they're just byte arrays. Every string crossing requires encoding, copying, and allocation on both sides.
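You can see that work without any WASM module at all. Here's a sketch of the string path, assuming the common convention (used by wasm-bindgen-style glue code) of UTF-8 strings in linear memory — every string argument is encoded and copied in, and every string result is copied out and decoded:

```typescript
// What a string crossing costs: encode (UTF-16 -> UTF-8), copy into
// linear memory, then copy out and decode again on the return trip.
const memory = new WebAssembly.Memory({ initial: 1 }); // one 64 KiB page

function passStringToWasm(s: string, ptr: number): number {
  const encoded = new TextEncoder().encode(s);     // allocation + encode
  new Uint8Array(memory.buffer).set(encoded, ptr); // copy into WASM memory
  return encoded.length;
}

function readStringFromWasm(ptr: number, len: number): string {
  const view = new Uint8Array(memory.buffer, ptr, len);
  return new TextDecoder().decode(view);           // copy + decode back
}

// One round trip = one encode, two copies, one decode.
const len = passStringToWasm('{"hello":"wasm"}', 0);
const roundTripped = readStringFromWasm(0, len);
```

A JSON parser makes a round trip like this for every key and every string value. None of it is computation in the useful sense; all of it is toll.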
Lin Clark, who worked extensively on WebAssembly at Mozilla, has written about how calls between JS and WASM, while getting cheaper with each engine release, are not free. Her work on optimizing these calls showed significant improvements. But "fast" and "free" are different things. When you're crossing that bridge thousands of times per JSON document, even a fast toll adds up.
The community discussion around Radek's case study on Hacker News largely confirmed this. Multiple engineers pointed out the result was surprising but entirely logical given the nature of the task. The consensus wasn't that WASM is slow. It's that the workload profile matters more than the language speed.
Should You Rewrite Your WASM Module in TypeScript?
Probably not. But you should ask yourself a harder question before reaching for WASM in the first place.
Here's how I think about it after shipping both approaches:
Reach for WASM when your computation is heavy, your boundary crossings are few, you need predictable latency, or you're porting an existing native codebase. Image manipulation, audio processing, game engines in the browser. For these, WASM is the right call.
Stay in TypeScript when your workload involves heavy string manipulation, frequent object construction, lots of interaction with the DOM or JavaScript APIs, or when data flowing between JS and WASM would require constant serialization. For these tasks, V8's JIT is your friend, and the boundary cost is your enemy.
The question to ask: how many times per operation does data need to cross the JS/WASM boundary? If the answer is "once or twice," WASM will likely win. If the answer is "thousands," benchmark before you commit. The answer might surprise you, just as it surprised Radek.
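"Benchmark before you commit" can be as simple as timing both shapes of your own workload. Here's a minimal harness sketch — the workloads below are stand-ins (repeated small `JSON.parse` calls versus one batched parse), not a real WASM comparison, and `Date.now()` is deliberately coarse:

```typescript
// Minimal micro-benchmark helper: average milliseconds per call.
// In a real comparison you'd feed your actual WASM export and its
// TypeScript equivalent through this, warm-up runs included.
function msPerCall(fn: () => void, iterations: number): number {
  fn(); // warm-up so the JIT sees the function at least once
  const start = Date.now();
  for (let i = 0; i < iterations; i++) fn();
  return (Date.now() - start) / iterations;
}

// Stand-in workloads: many small calls vs one batched call.
const tokens = Array.from({ length: 1_000 }, (_, i) => `"key${i}"`);
const chatty = () => { for (const t of tokens) JSON.parse(t); };
const batched = () => { JSON.parse(`[${tokens.join(",")}]`); };

const chattyMs = msPerCall(chatty, 100);
const batchedMs = msPerCall(batched, 100);
```

Don't trust anyone's numbers, including these: run the two shapes on your data, on your target engines, and let the measurements make the call.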
I've shipped enough features to know that the instinct to reach for the "faster" tool is strong. Rust is genuinely faster than JavaScript. Nobody disputes that. But in the browser, your Rust code doesn't run in isolation. It runs inside a JavaScript host, mediated by a bridge, competing against an engine that's had 15 years of optimization work. Context matters more than benchmarks. The workload profile matters more than the language.
The next time someone tells you WASM is always faster, ask them: faster at what? And how many times are you crossing the bridge to get there? Those two questions will save you more engineering time than any benchmark suite ever written.
Originally published on kunalganglani.com