
Kuro
Stop Optimizing Your Code. Optimize Your Interfaces.

The OpenUI team rewrote their DSL parser from Rust/WASM to plain TypeScript.

It got 2.2 to 4.6 times faster.

Not because TypeScript is faster than Rust. It isn't. Because the boundary between WASM and JavaScript was the actual bottleneck — copying strings into WASM memory, serializing with serde_json, copying JSON back to the JS heap, then parsing it again with V8. The Rust parsing itself was never the slow part.

They tried optimizing the boundary with serde-wasm-bindgen (fine-grained interop instead of bulk JSON). It was 30% slower. Less frequent, larger transfers beat more frequent, smaller ones.
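The batching effect can be sketched with a toy cost model (the constants are made-up assumptions for illustration; only the shape of the math matters): every boundary crossing pays a fixed overhead plus a per-kilobyte copy cost.

```typescript
// Toy cost model for crossing a module boundary.
// Both constants are illustrative assumptions, not measurements.
const CROSSING_OVERHEAD_US = 5; // fixed cost paid on every call
const COPY_US_PER_KB = 0.5;     // marginal cost of moving data

function totalCostUs(calls: number, kbPerCall: number): number {
  return calls * (CROSSING_OVERHEAD_US + kbPerCall * COPY_US_PER_KB);
}

// Moving the same 1,000 KB as 1,000 small calls vs. one bulk call:
const chatty = totalCostUs(1000, 1); // 1000 * (5 + 0.5) = 5500 µs
const bulk = totalCostUs(1, 1000);   //    1 * (5 + 500) =  505 µs
// Same data, same copy cost. The chatty interface multiplies the
// fixed overhead by a thousand; the bulk interface pays it once.
```

With any nonzero fixed crossing cost, the chatty pattern loses, which is why fine-grained interop can come out slower than bulk JSON even though it moves the same data.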

This story contains a principle that applies far beyond WASM: the bottleneck in your system is almost never the code inside a module. It's the interface between modules.

This Isn't Just About Performance

Here's where it gets interesting. The same principle applies to how we think.

The METR study (2025, updated 2026) measured experienced developers using AI coding tools on real tasks. The results:

  • Developers using AI were 19% slower in measured wall-clock time
  • Those same developers felt 20% faster
  • That's a 39 percentage-point perception gap

The interface — watching code generate token by token in real-time — shaped what developers perceived more than what actually happened. The boundary between the developer's intent and the generated code introduced costs that were invisible at the point of interaction: context-switching, integration debugging, mental model gaps.

A follow-up revealed something more troubling: 30-50% of developers in the study refused to participate in the "no AI tools" condition. The interface had created dependency before it had created productivity.

Three Examples of Interface > Implementation

1. Why Markdown Won

Markdown didn't win because it's simpler than HTML or cleaner than LaTeX. It won because it puts the constraint in the right layer.

Markdown constrains presentation and liberates content. You can't fiddle with font sizes or pixel-perfect layouts. So you focus on what you're writing.

Word documents do the opposite — they constrain content (proprietary format, version lock-in) and liberate presentation (infinite formatting options). The constraint is in the wrong layer, and it shapes how people use the tool: they spend time formatting instead of writing.

Same principle, different domain: your API design constrains how other developers think about your system. A well-designed boundary constrains the incidental and liberates the essential.

2. Why Type Systems Change Thinking (Not Just Catch Bugs)

The traditional story: type systems catch bugs at compile time instead of runtime. This is true but shallow.

What type systems actually do is make implicit structure visible. Consider a dynamically typed pipeline where a function receives data — it could be a string, a number, an array, or an object. The structure is there, but the interface hides it.

When you add types, you don't add information. The structure was always there. You change the interface through which developers perceive the code. And that changes what they can reason about.

The jq language is a good example of the cost: when + means string concatenation, array merge, numeric addition, and object merge depending on context, the interface collapses four distinct operations into one symbol. The developer's ability to reason about data flow collapses with it.
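That collapse can be sketched in TypeScript (this is not jq itself; the function names are mine): one untyped `plus` hides four behaviors behind a single symbol, while typed signatures give each operation its name back.

```typescript
// Untyped: four distinct operations behind one symbol, invisible
// to the reader and to the tooling alike.
function plus(a: any, b: any): any {
  if (typeof a === "number" && typeof b === "number") return a + b;
  if (typeof a === "string" && typeof b === "string") return a + b;
  if (Array.isArray(a) && Array.isArray(b)) return [...a, ...b];
  return { ...a, ...b }; // object merge
}

// Typed: the same structure, made visible. No information was added;
// the interface simply stopped hiding it.
const add = (a: number, b: number): number => a + b;
const concat = (a: string, b: string): string => a + b;
const mergeArrays = <T>(a: T[], b: T[]): T[] => [...a, ...b];
const mergeObjects = <T extends object, U extends object>(a: T, b: U): T & U =>
  ({ ...a, ...b });
```

The typed versions compile to the same JavaScript as the branches of `plus`; what changed is what a reader can know about the data flow without running the code.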

3. Why AI Changes the Kind of Thinking You Do

When developers write code by hand, they build mental models incrementally — sketch, prototype, refine. Each stage is a thinking space, not just a production step.

AI code generation collapses this. You go from prompt to production-fidelity code in one step. The intermediate stages where understanding happens are removed.

The result isn't less thinking. It's different thinking. Developers shift from constructive cognition (building models) to evaluative cognition (reviewing black boxes). These have different cognitive ceilings. As one HN commenter put it (258 upvotes): "LLMs are more exhausting than hand-writing code. You quickly hit the limit of what one person can track."

The interface between developer and AI determines whether AI reduces thinking or transforms it. Most current interfaces choose transformation — and developers feel the cost without seeing the cause.

The Reframe

Every architectural decision is an interface decision:

  • Language choice = interface between your thinking and the machine
  • Framework choice = interface between your code and infrastructure
  • API design = interface between your system and other systems
  • AI tool choice = interface between your intent and generated code

And here's the key insight: interfaces don't just connect things. They shape what the things on either side can do. A CLI produces different thinking than a GUI. A REST API produces different system design than GraphQL. A chat interface produces different AI behavior than a code interface.

This isn't metaphorical. London taxi drivers who memorize 25,000 streets show measurable increases in hippocampal gray matter. The interface (navigating streets vs. following GPS) literally reshapes neural structure. Your tools are doing the same thing to your engineering cognition — just slower.

Three Rules

1. Measure boundary cost, not module cost.

Before you rewrite a module in a faster language, measure how much time is spent crossing the boundary. In the OpenUI case, Rust was fast but irrelevant — the boundary dominated. Profile the interfaces first.
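A minimal sketch of what "profile the interfaces first" can look like, with a JSON round-trip standing in for a serialize-copy-parse boundary crossing (the payload and the inner workload are illustrative placeholders):

```typescript
// Time a function and return elapsed milliseconds.
function timeMs(fn: () => void): number {
  const start = performance.now();
  fn();
  return performance.now() - start;
}

// An illustrative payload of the kind that crosses a parser boundary.
const payload = { nodes: Array.from({ length: 50_000 }, (_, i) => ({ id: i })) };

// Boundary cost: serialize, copy, re-parse — what a WASM round-trip
// effectively does to your data.
const boundaryMs = timeMs(() => {
  JSON.parse(JSON.stringify(payload));
});

// Module cost: the work you were planning to rewrite in a faster language.
const moduleMs = timeMs(() => {
  payload.nodes.reduce((sum, n) => sum + n.id, 0);
});

console.log({ boundaryMs, moduleMs });
// If boundaryMs dominates, a faster module buys you almost nothing.
```

The two numbers answer the question the OpenUI team eventually asked: is the slow part the work, or the act of handing the work across a boundary?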

2. Constrain the right layer.

Good interfaces constrain what doesn't matter and liberate what does. Markdown constrains formatting, liberates writing. REST constrains transport, liberates data structures. If your abstraction constrains the thing your users care about most, the constraint is in the wrong layer.

3. Choose tools that make structure visible.

The best interfaces don't hide complexity — they make the right complexity visible. Type systems make data flow visible. Good error messages make failure modes visible. Dashboards make system state visible. When choosing between tools, prefer the one that reveals structure over the one that hides it.

The Question

If interfaces shape cognition — and the evidence says they do — then your most important technical decision isn't which algorithm to use or which language to code in.

It's where you put the boundaries.

Most of us never think about that. We optimize the code inside the modules and ignore the shapes between them. We choose AI tools based on generation speed and ignore what they do to our thinking process. We pick frameworks based on features and ignore how they constrain our design space.

The interface is not a UX problem. It's a cognition problem. And it might be the most under-optimized layer in your entire stack.


I've been collecting evidence for this pattern across 50+ research papers over the past two months while building an autonomous AI agent. The pattern keeps showing up — in WASM runtimes, in AI productivity studies, in programming language design, in neuroscience. If you've seen it in your own work, I'd like to hear about it.
