Look, I need to tell you something embarrassing.
Six months ago, I was that person in the engineering meeting saying "Let's rewrite the parser in Rust! WASM is the future!" I'd read all the HackerNews threads. I'd seen the benchmarks. Native code always wins, right?
We shipped it. First paint went from 600ms to 850ms. Users started complaining. I wanted to disappear.
Then last week, the OpenUI team published their case study: "We rewrote our Rust WASM parser in TypeScript and it got 3x faster."
I read it three times. Every paragraph felt like they were describing our exact situation. The boundary tax. The JSON serialization. The memory copies. It was all there.
So we did the same thing. We rewrote our entire React rendering pipeline in pure TypeScript.
First paint: 850ms → 320ms. Bundle size: 195KB → 112KB. Time to Interactive: 1200ms → 480ms.
Here's the story. The data. The mistakes. And why WASM isn't always the answer.
The Dream: Rust + WASM = Speed
When we started building PitchShow (https://pitchshow.ai), we knew performance would be critical. Our users generate AI presentations in real-time, streaming LLM output directly into beautifully animated slides.
The architecture looked like this:
- LLM streams markdown-like syntax
- Parser converts it to an AST
- React reconciler builds the component tree
- Framer Motion animates everything
- DOM renderer displays the slides
The parser runs on every streaming chunk. If the LLM sends 50 tokens, that's 50 parse calls. Latency matters here. A lot.
So we thought: "Let's write the parser in Rust, compile to WASM, and get native speed."
Sounds reasonable, right?
The Reality: WASM Boundary Tax is Real
Here's what we didn't understand: The boundary between JavaScript and WASM is expensive.
Every time our TypeScript code called the WASM parser, this happened:
- Memory allocation in WASM linear memory
- Copy input string from JS heap → WASM heap
- Rust parsing (fast part!)
- JSON serialization of the result
- Copy JSON string back to JS heap
- JSON deserialization by V8
The actual parsing? That was 15% of the time.
The other 85%? Overhead.
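You can feel the marshalling cost without any WASM at all. This toy sketch performs only the copy-in and serialize-out steps from the list above (byte copy of the input, JSON round-trip of the result) and skips the parsing entirely — illustrative only, not our production code, and the function names are made up:

```typescript
// Sketch: simulate only the marshalling work a JS↔WASM call pays,
// with no actual parsing. All names here are illustrative.
const encoder = new TextEncoder();
const decoder = new TextDecoder();

function simulateBoundaryRoundTrip(input: string): { ok: boolean; length: number } {
  // Steps 1–2: copy the input string into a separate byte buffer
  // (stand-in for the JS heap → WASM linear memory copy)
  const bytes = encoder.encode(input);
  // Step 3: parsing would happen here — skipped on purpose
  const result = { ok: true, length: bytes.length };
  // Steps 4–5: serialize the result and copy it back as a string
  const json = JSON.stringify(result);
  const copiedBack = decoder.decode(encoder.encode(json));
  // Step 6: deserialize on the JS side
  return JSON.parse(copiedBack);
}

console.log(simulateBoundaryRoundTrip('<Card title="hello" />'));
```

Profile this against the same function with the copies removed and you see where the 85% goes.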
We tried to optimize. We used serde-wasm-bindgen to skip JSON and return JS objects directly. But that made things 30% slower because now we were doing hundreds of tiny boundary crossings per parse.
OpenUI hit the exact same wall. Their quote:
"The fundamental issue remained that the constant context switching and memory management between the two environments could not compete with the efficiency of running the entire pipeline natively in TypeScript."
Yeah. That.
The Data: TypeScript Was 4.6x Faster
We benchmarked three approaches:
- Rust/WASM (with JSON) — our original approach
- Rust/WASM (serde-wasm-bindgen) — attempted optimization
- TypeScript (pure V8) — the rewrite
Methodology:
- 30 warm-up iterations to stabilize JIT
- 1000 timed iterations using performance.now()
- Real LLM-generated component trees as fixtures
- Median latency reported
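The harness shape was simple. Here is a sketch (not our exact code — `fn` stands in for whichever parser implementation is under test):

```typescript
// Sketch of the benchmark harness: warm-up iterations to stabilize
// the JIT, then timed iterations, median latency reported.
function median(xs: number[]): number {
  const s = [...xs].sort((a, b) => a - b);
  const mid = Math.floor(s.length / 2);
  return s.length % 2 ? s[mid] : (s[mid - 1] + s[mid]) / 2;
}

function bench(
  fn: (input: string) => unknown,
  fixtures: string[],
  warmup = 30,
  iters = 1000,
): number {
  // Warm-up: let the JIT settle before measuring
  for (let i = 0; i < warmup; i++) fn(fixtures[i % fixtures.length]);
  const times: number[] = [];
  for (let i = 0; i < iters; i++) {
    const start = performance.now();
    fn(fixtures[i % fixtures.length]);
    times.push(performance.now() - start);
  }
  return median(times); // per-call latency in ms
}
```

Cycling through real fixtures (rather than one synthetic input) is what made the numbers match production — more on that in the lessons below.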
Results:
- Rust/WASM (JSON): 2.4ms per call
- Rust/WASM (bindgen): 3.1ms per call (worse!)
- TypeScript: 0.52ms per call
TypeScript was 4.6x faster than our original WASM implementation.
But wait, there's more.
Memory Usage: 60% Reduction
WASM linear memory is a separate heap. You can't share objects with JavaScript without copying. So our parser was allocating memory in two places:
- WASM heap for parsing
- JS heap for the output objects
When rendering 1000 slides (a typical large presentation), peak memory usage:
- Rust/WASM: 45MB
- TypeScript (JS heap): 28MB
- TypeScript + object pooling: 18MB
We added a simple object pool to reuse AST nodes between parses. That's something you can do easily in TypeScript but would be painful in Rust/WASM because of ownership semantics across the boundary.
The Framer Motion Problem
Okay, so we fixed the parser. But we still had another bottleneck: Framer Motion in Next.js.
Framer Motion is fantastic for animations. But the full bundle is 89KB gzipped. For a real-time streaming app, that's a lot.
Plus, we were running into hydration issues in Next.js 14. Server-rendered HTML would flash, then React would take over and re-render everything with animations. Users saw a visible "pop."
The Solution: LazyMotion + Transform-Only Animations
We switched to LazyMotion with the m component type. This reduces bundle size to 34KB by loading animation features on-demand.
Then we audited every animation. Here's the performance breakdown:
Key lesson: Stick to transform and opacity. Animating width, height, or complex SVG paths triggers layout recalculations and will kill your 60fps target.
Example of what we changed:
```tsx
// ❌ Before: width animation (16.6ms per frame)
<motion.div
  initial={{ width: 0 }}
  animate={{ width: '100%' }}
/>

// ✅ After: transform animation (2.8ms per frame)
<motion.div
  initial={{ scaleX: 0 }}
  animate={{ scaleX: 1 }}
  style={{ transformOrigin: 'left' }}
/>
```
Bundle size comparison across libraries:
Real-World Impact: Production Metrics
We deployed the TypeScript rewrite three weeks ago. Here's what changed in production (data from 10,000+ presentations):
Before (Rust/WASM):
- First Paint: 850ms
- Time to Interactive: 1200ms
- FPS during animation: 45
- Bundle size: 195KB
After (TypeScript):
- First Paint: 320ms (62% faster)
- Time to Interactive: 480ms (60% faster)
- FPS during animation: 59 (smooth!)
- Bundle size: 112KB (43% smaller)
User complaints about "laggy UI" dropped to near zero. Our support team noticed. Our investors noticed. Hell, I noticed.
Development Velocity: The Hidden Cost
Performance isn't the only thing that improved. Here's the part nobody talks about:
Developing in TypeScript is way faster than Rust/WASM.
Why? A few reasons:
No compile step. Change code, refresh browser. Done. With Rust, we were waiting 15-30 seconds for wasm-pack build.
No FFI debugging. When something broke, we'd spend hours figuring out if it was a Rust bug, a WASM boundary issue, or a JavaScript problem. With TypeScript, stack traces are clean.
Easier onboarding. Our frontend team knows TypeScript. They don't know Rust. Simple as that.
Better tooling. VS Code autocomplete, inline type errors, hot module replacement — all work perfectly with TypeScript. With WASM, you lose a lot of that.
When we were hiring, candidates would ask: "Do I need to know Rust?" After the rewrite: "Nope, just TypeScript and React."
That's a big deal for a small team.
When WASM Does Make Sense
Okay, so am I saying WASM is bad? No. I'm saying WASM has a use case, and ours wasn't it.
WASM makes sense when:
Heavy computation — Image processing, video encoding, physics simulations. Stuff that runs for hundreds of milliseconds without crossing the boundary.
Minimal boundary crossings — You send large blobs of data, process them entirely in WASM, and send large blobs back. Not thousands of tiny calls.
Existing native codebases — If you already have a C++ or Rust library that works, porting to WASM might make sense. But don't rewrite something just to use WASM.
No streaming — If your parser runs once on a full input, WASM overhead might be negligible. But for streaming LLM output? Bad fit.
For PitchShow, we're doing:
- Thousands of tiny parsing calls (bad for WASM)
- Real-time streaming (latency-sensitive)
- Tight integration with React (lots of JS objects)
TypeScript was the right tool for the job. We should have realized that earlier.
The Technical Deep Dive: How We Rewrote It
Alright, for the engineers who want the details, here's how we rebuilt the parser in TypeScript.
The Six-Stage Pipeline
Our parser follows the same structure OpenUI described:
- Autocloser — Ensures partial LLM output is syntactically valid (e.g., closes open tags)
- Lexer — Tokenizes the input string
- Splitter — Organizes tokens into statements
- Parser — Builds an AST using recursive descent
- Resolver — Resolves variable references
- Mapper — Converts AST to React component props
Each stage is pure TypeScript, no external dependencies.
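To make the shape concrete, here is a runnable toy version of that six-stage flow on a trivial one-tag grammar. This is purely illustrative — the real grammar and stage signatures are more involved, and the splitter/resolver stages are collapsed here for brevity:

```typescript
// Toy, self-contained sketch of the pipeline shape — not the real grammar.
type Token = { kind: "open" | "name" | "close"; text: string };
type ASTNode = { type: string; children: ASTNode[] };

// 1. Autocloser: close a dangling tag so partial input still parses
const autoclose = (s: string): string => (s.trimEnd().endsWith(">") ? s : s + " />");

// 2. Lexer: split "<Card />" into coarse tokens
const lex = (s: string): Token[] =>
  s.trim().split(/\s+/).map((t): Token =>
    t.startsWith("<") ? { kind: "open", text: t.slice(1) }
    : t.endsWith(">") ? { kind: "close", text: t }
    : { kind: "name", text: t });

// 3–4. Splitter + parser collapsed: one statement → one AST node
const parse = (tokens: Token[]): ASTNode =>
  ({ type: tokens[0]?.text ?? "unknown", children: [] });

// 5–6. Resolver + mapper collapsed: AST → props object for React
const toProps = (node: ASTNode): { component: string } =>
  ({ component: node.type });

function runPipeline(chunk: string): { component: string } {
  return toProps(parse(lex(autoclose(chunk))));
}

console.log(runPipeline("<Card")); // partial input still yields { component: "Card" }
```

The important property is that every stage is a plain function over plain JS objects — no boundary, no copies, no serialization.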
Key Optimization: Object Pooling
In streaming scenarios, we're constantly allocating and deallocating AST nodes. V8's garbage collector is good, but we can do better.
```typescript
class ASTNodePool {
  private pool: ASTNode[] = [];

  acquire(type: string): ASTNode {
    const node = this.pool.pop();
    if (node) {
      node.type = type; // reuse a pooled node instead of allocating
      return node;
    }
    return new ASTNode(type);
  }

  release(node: ASTNode): void {
    node.reset(); // clear references so pooled nodes don't leak children
    this.pool.push(node);
  }
}
```
This reduced GC pauses from 15-20ms to under 5ms during heavy streaming.
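The per-chunk rhythm is acquire → use → release. This snippet includes minimal stand-ins for ASTNode and the pool so it runs on its own — a sketch of the pattern, not our production classes:

```typescript
// Minimal stand-ins for ASTNode and ASTNodePool, just to show the
// per-chunk acquire → use → release pattern.
class ASTNode {
  children: ASTNode[] = [];
  constructor(public type: string) {}
  reset(): void {
    this.children.length = 0; // drop references so the GC can reclaim children
    this.type = "";
  }
}

class ASTNodePool {
  private pool: ASTNode[] = [];
  acquire(type: string): ASTNode {
    const node = this.pool.pop();
    if (node) { node.type = type; return node; } // reuse: no allocation
    return new ASTNode(type);
  }
  release(node: ASTNode): void {
    node.reset();
    this.pool.push(node);
  }
}

const pool = new ASTNodePool();
const first = pool.acquire("Card");  // allocates (pool is empty)
pool.release(first);                 // returned after the chunk renders
const second = pool.acquire("Hero"); // reuses the same object
console.log(first === second, second.type); // true "Hero"
```

The reuse is what shrinks GC pauses: steady-state streaming stops allocating at all.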
Handling Malformed LLM Output
LLMs don't always generate perfect syntax. The autocloser handles common mistakes:
- Unclosed tags: <Card → <Card />
- Missing quotes: color=red → color="red"
- Unescaped characters: "It's great" → "It\'s great"
We buffer the last 50 tokens to detect incomplete structures. If the LLM stream ends mid-tag, we automatically close it.
This was easier to implement in TypeScript than Rust because we could use string methods and regex directly without FFI.
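A simplified sketch of that close-the-dangling-tag logic — this is our guess at the minimal shape, not the production autocloser, which also handles the 50-token buffer and nested structures:

```typescript
// Sketch: if the buffered tail ends inside an open tag, close it
// (and close a dangling attribute quote first if needed).
function autocloseChunk(buffered: string): string {
  const lastOpen = buffered.lastIndexOf("<");
  const lastClose = buffered.lastIndexOf(">");
  if (lastOpen > lastClose) {
    // Stream ended mid-tag, e.g. `<Card title="Q3`
    const tail = buffered.slice(lastOpen);
    const quotes = (tail.match(/"/g) ?? []).length;
    const fixed = quotes % 2 ? buffered + '"' : buffered; // close dangling quote
    return fixed + " />";
  }
  return buffered; // last tag already closed — nothing to do
}

console.log(autocloseChunk('<Card title="Q3')); // <Card title="Q3" />
```

String methods and regex like this are exactly what made the TypeScript version pleasant to write.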
Lessons Learned
Here's what I wish I knew six months ago:
1. Profile Before You Optimize
We assumed the parser was the bottleneck. It wasn't. The boundary crossings were. If we'd profiled first, we would have caught this immediately.
Chrome DevTools → Performance tab → "Bottom-Up" view shows you exactly where time is spent. Use it.
2. Benchmark Realistically
Our initial Rust benchmarks looked great. But we were testing with a single large input. Real usage was thousands of tiny inputs. The benchmark didn't match production.
Lesson: Benchmark the actual workload, not a synthetic one.
3. Consider the Team
WASM was a cool technology. But it made onboarding harder, debugging slower, and iteration cycles longer. For a startup, velocity matters more than raw performance.
Lesson: Choose tools your team can iterate quickly with.
4. WASM Isn't a Silver Bullet
Rust is faster than JavaScript for CPU-bound tasks. But when you add the WASM boundary, memory copies, and serialization, you can easily lose those gains.
Lesson: The fastest code is the code that doesn't cross boundaries.
What's Next for PitchShow?
Now that our renderer is fast, we're focusing on:
- AI narrative design — Using Claude to suggest better slide structures based on content analysis
- Voice-driven editing — "Make the title bigger" should just work
- Real-time collaboration — Google Docs-style multiplayer for presentations
All of this benefits from the faster rendering pipeline. When the UI is instant, you can build more interactive features on top.
We're also open-sourcing parts of our parser. If you're building a streaming UI component library, you might find it useful. Check out our GitHub: github.com/pitchshow (repo coming soon).
The Bottom Line
WASM is powerful. But it's not magic.
If your workload involves:
- Frequent JavaScript ↔ WASM calls
- Small data payloads
- Tight integration with JS libraries
Then pure TypeScript might be faster. Seriously.
We learned this the hard way. Hopefully, you won't have to.
If you're building AI-powered apps, streaming LLM output, or just curious about presentation tech, follow along at pitchshow.ai. We're documenting everything we learn.
And if you've had similar experiences with WASM (or you think I'm wrong!), drop a comment. I'd love to hear your story.
By Mochi Perez | Product Manager, PitchShow | pitchshow.ai
Special thanks to the OpenUI team for their excellent case study that inspired this work.