Tiphis

We Rewrote Our Rust WASM Parser in TypeScript — And It Got 3x Faster

When the OpenUI team decided to port their Rust WebAssembly parser to TypeScript, everyone thought they were crazy. The result? A 3x performance improvement that challenges everything we know about compiled languages vs. interpreted ones.


The Unexpected Journey

In the world of WebAssembly, Rust has long been the gold standard for high-performance code. When the OpenUI team originally built their parser in Rust and compiled it to WASM, they expected blazing-fast performance. Instead, they encountered something unexpected.

"The original Rust implementation was correct, but it had several performance bottlenecks that we struggled to optimize away," the team explained. "After months of trying to squeeze more performance out of the Rust code, we made a radical decision: rewrite everything in TypeScript."

This wasn't a decision made lightly. The development team spent significant time researching and experimenting with different approaches. They benchmarked their Rust implementation, profiled every function, and tried numerous optimization techniques—from manually inlining hot paths to restructuring data layouts for better cache locality. Nothing seemed to work.

What they discovered challenged their assumptions about language performance in the WebAssembly ecosystem.

The Surprising Results

The results were nothing short of remarkable:

  • 3x faster parsing in the TypeScript version
  • Smaller bundle size despite TypeScript's reputation for code bloat
  • Better debugging experience with source maps and familiar tooling
  • Easier maintenance for the team
  • Faster iteration cycles during development

"We went from struggling with cryptic Rust compiler errors to having a codebase that our entire team could contribute to confidently."

This outcome might seem impossible given the traditional wisdom that Rust's compile-time guarantees should produce faster code. But the team identified several key factors that contributed to their success. The performance gains weren't accidental—they came from understanding the nuances of how WebAssembly actually executes in different environments.

Why TypeScript Won

1. Modern JavaScript Engine Optimizations

V8 (Chrome's JavaScript engine) and other modern JS runtimes have received years of optimization specifically for WebAssembly and JavaScript execution. The team leveraged:

  • Inline caching capabilities that Rust's static nature couldn't exploit
  • JIT compilation advantages that adapt to actual runtime patterns
  • Better memory management in real-world usage scenarios
  • SIMD instructions automatically utilized by modern JS engines

The key insight was that while Rust produces efficient WASM code, the surrounding JavaScript ecosystem has evolved to handle common patterns incredibly efficiently. Modern JIT compilers can often outpace statically compiled code for typical workloads because they can make runtime-informed decisions.
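To make the inline-caching point concrete, here is a minimal sketch (hypothetical, not OpenUI's actual code) of the pattern JIT engines reward: every token the parser emits has exactly the same object shape, so property accesses in hot loops stay monomorphic and V8's inline caches can specialize them.

```typescript
// Every token shares one shape ({kind, start, end}), so the JIT sees a
// single hidden class at each property access site. Hypothetical sketch.
interface Token {
  kind: "word" | "number" | "punct";
  start: number;
  end: number;
}

function tokenize(input: string): Token[] {
  const tokens: Token[] = [];
  let i = 0;
  while (i < input.length) {
    const c = input[i];
    if (c === " ") { i++; continue; }
    const start = i;
    if (c >= "0" && c <= "9") {
      while (i < input.length && input[i] >= "0" && input[i] <= "9") i++;
      tokens.push({ kind: "number", start, end: i });
    } else if (/[a-zA-Z]/.test(c)) {
      while (i < input.length && /[a-zA-Z]/.test(input[i])) i++;
      tokens.push({ kind: "word", start, end: i });
    } else {
      i++;
      tokens.push({ kind: "punct", start, end: i });
    }
  }
  return tokens;
}
```

The design choice here is deliberate: mixing token objects with different property sets (say, some with a `value` field and some without) would make call sites polymorphic and defeat the very optimizations the section describes.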

2. Avoiding Common Rust Pitfalls

The original Rust implementation suffered from several issues that the TypeScript rewrite naturally avoided:

  • WASM binding overhead: Rust's foreign function interface had unexpected costs when communicating between WASM and JavaScript
  • Debug vs Release builds: Rust's debug mode was significantly slower, complicating development and testing
  • Complex lifetime management: the borrow checker catches bugs at compile time, but the workarounds it forced (extra clones, reference counting) carried real runtime cost
  • Larger binary sizes: Rust's allocator and panic machinery added noticeable weight to the final WASM output
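The binding-overhead point is easy to underestimate. A JavaScript string cannot be read directly by WASM code: it has to be encoded to UTF-8 and copied into the module's linear memory on every call. The sketch below models that copy with a plain `Uint8Array` standing in for `WebAssembly.Memory` (no real module is loaded, and `passStringToWasm` is a hypothetical name, not a real wasm-bindgen API):

```typescript
// Models the JS -> WASM string handoff: UTF-16 JS string is encoded to
// UTF-8 bytes, then copied into linear memory. Pure-TS code skips this
// entirely, which is part of the overhead the post describes.
const encoder = new TextEncoder();

// Stand-in for WebAssembly.Memory; a real module would export this.
const memory = new Uint8Array(64 * 1024);

function passStringToWasm(s: string, heap: Uint8Array): number {
  const bytes = encoder.encode(s); // allocate + encode on every call
  heap.set(bytes, 0);              // a real allocator would pick the offset
  return bytes.length;             // byte length the WASM side would see
}
```

For a parser that crosses this boundary once per input chunk, the cost is small; for one that crosses it once per token, it can dominate the profile.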

3. Algorithmic Improvements

Perhaps most importantly, the rewrite allowed the team to:

  • Rethink core data structures from first principles
  • Implement streaming parsers that process data incrementally
  • Take advantage of JavaScript's native string handling capabilities
  • Simplify error handling without sacrificing correctness

The fresh perspective enabled by starting over led to fundamental architectural improvements that had more impact than any language-level optimization could provide.
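The streaming idea above can be sketched in a few lines. This is a hypothetical, minimal example (a chunk-fed line splitter, not OpenUI's actual parser): input arrives in arbitrary chunks, and complete units are emitted as soon as they are available rather than after the whole document is buffered.

```typescript
// Minimal streaming sketch: accepts chunks via write(), holds any
// incomplete trailing line in a buffer, and flushes it on end().
class LineStream {
  private buffer = "";
  private lines: string[] = [];

  write(chunk: string): void {
    this.buffer += chunk;
    let nl: number;
    while ((nl = this.buffer.indexOf("\n")) !== -1) {
      this.lines.push(this.buffer.slice(0, nl));
      this.buffer = this.buffer.slice(nl + 1);
    }
  }

  // Flush whatever partial line remains when the input ends.
  end(): string[] {
    if (this.buffer.length > 0) {
      this.lines.push(this.buffer);
      this.buffer = "";
    }
    return this.lines;
  }
}
```

The same shape scales up to a real tokenizer: the only state carried between chunks is the unconsumed tail, so memory stays proportional to one unit of input rather than the whole document.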

What This Means for Developers

This story shouldn't be read as "Rust is bad" or "TypeScript is always better." Instead, it reveals several important lessons for developers making technology choices:

Choose the Right Tool for the Job

The performance landscape is more nuanced than "compiled = faster." Consider:

  • The target runtime's optimization level
  • The specific workload characteristics
  • Development velocity vs. peak performance tradeoffs
  • The size and experience of your team

Measure Before Optimizing

The team initially assumed Rust would be faster based on conventional wisdom. Only through actual benchmarking did they discover the truth. Premature optimization—and premature assumption-making—can lead you down the wrong path.
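"Measure first" can be as simple as a few lines. Below is a hypothetical micro-benchmark helper, not the team's actual setup; it uses `Date.now()` for portability, though `performance.now()` gives finer resolution where available. Treat micro-benchmarks like this with caution: warm-up, input realism, and GC noise all skew results.

```typescript
// Tiny benchmark helper: runs fn once to warm the JIT, then times
// `iterations` runs and reports the mean. Hypothetical sketch.
function bench(label: string, fn: () => void, iterations = 1000): number {
  fn(); // warm-up run so the JIT compiles the code before timing
  const start = Date.now();
  for (let i = 0; i < iterations; i++) fn();
  const ms = Date.now() - start;
  console.log(`${label}: ${(ms / iterations).toFixed(4)} ms/iter`);
  return ms;
}
```

Comparing two implementations is then a matter of calling `bench` on each with identical inputs, and only trusting differences well above run-to-run noise.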

Language Isn't Everything

Algorithm design, data structure choice, and understanding your runtime often matter more than the programming language itself. A well-designed TypeScript application can outperform a poorly optimized Rust application any day.

Challenge Your Assumptions

The tech industry is full of "everyone knows" statements that turn out to be oversimplifications. The OpenUI team's success came from questioning conventional wisdom and running their own experiments.

When to Still Choose Rust

This isn't a blanket endorsement of TypeScript over Rust for all WASM projects. Rust remains the better choice for:

  • Systems programming with tight memory constraints
  • Embedded development where runtime overhead must be minimal
  • Cryptographic operations requiring constant-time execution
  • Projects requiring fearless concurrency without garbage collection pauses

The Takeaway

The OpenUI team's experience demonstrates that the programming world is full of surprises. While Rust remains excellent for many use cases—systems programming, embedded development, and scenarios requiring zero-cost abstractions—it's not a universal performance silver bullet.

Sometimes the "obvious" choice isn't the best one. Sometimes a language considered "slower" can outperform a "faster" one in practice. The key is to stay curious, measure everything, and be willing to challenge your assumptions.

The best developers aren't the ones who know the most—they're the ones who know what they don't know and are willing to find out.


Have you had similar experiences with counterintuitive performance results? Share your stories in the comments below.

