Talking to My Code: React Compiler Saves the Day

Yo! *tips imaginary cowboy hat*

Welcome to yet another AI chatbot tutorial.
Yes, yes - groans reverberated all across the internet. But this one's different. Why, you ask? Because it's my very first tutorial blog. And you know what they say: you never forget your first… 😉

From Rejection to Experiment

Picture this: I'd just ended a long, drawn-out interview battle. After three months of interviews, the rejection email landed in my inbox with all the charm of an avoidant boyfriend — sweet talk during the interview dates, ghosted when it came time to commit.

Ouch.

Here’s the upside, though - the take-home assignment planted an idea. The task was to build a “mini” version of their visual page builder: the usual drag-and-drop editor hooked up to an iframe preview. Standard fare.

Except while writing the Jest tests, I couldn't stop thinking about Cypress. You know how Cypress runs tests in a little Broadway show right in front of you? And then it hit me: what if building the page wasn't about dragging or typing… but talking?

Cue lightbulb moment. 💡

Which brings us here — my first tutorial blog. You might groan, I might stumble, but lol, life’s short anyway.

I won't bore you with the standard page builder setup (like those lazy "you look nice" compliments) - it's the usual React component tree managing an iframe preview. You can check the repo if you're curious about those details. Let's jump to where it gets interesting: making it listen.



Making the Page Builder Listen

I wanted to build something that would actually listen - revolutionary concept, I know. Enter the Web Speech API.

At first, everything seemed perfect. Speech recognition was working, my page builder was responding in real time. I thought, "Finally, a relationship that works!"
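The setup looked something like this - a minimal sketch, assuming a hypothetical useSpeechCommands hook (the names here are mine for illustration; the repo's real wiring is a bit messier):

import { useEffect, useRef } from "react";

// Hypothetical hook for illustration - not the repo's actual code.
export function useSpeechCommands(onCommand: (transcript: string) => void) {
  // Keep the latest callback in a ref so the effect never needs to re-run.
  const onCommandRef = useRef(onCommand);
  onCommandRef.current = onCommand;

  useEffect(() => {
    // Chrome still ships this behind a webkit prefix, and TypeScript's
    // DOM lib doesn't type it, hence the `any` casts.
    const SpeechRecognitionImpl =
      (window as any).SpeechRecognition ||
      (window as any).webkitSpeechRecognition;
    if (!SpeechRecognitionImpl) return; // browser doesn't support it

    const recognition = new SpeechRecognitionImpl();
    recognition.continuous = true; // keep listening across phrases
    recognition.interimResults = false; // only deliver final transcripts

    recognition.onresult = (event: any) => {
      const lastResult = event.results[event.results.length - 1];
      onCommandRef.current(lastResult[0].transcript.trim());
    };

    recognition.start();
    return () => recognition.stop();
  }, []);
}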

Cool, right? Except my laptop started heating up like it was single-handedly handling every therapy session on ChatGPT across the continental U.S.

Naturally, I assumed: "Aha, speech recognition is melting my CPU." Should I break up with him now?!
Turns out… nope. 🙃

The Gaslighting Phase

I was blaming speech recognition, but in reality React was gaslighting me the whole time.

Debugging this felt like trying to have 'the talk' with someone emotionally unavailable. Me: 'Why are you re-rendering constantly?' React: 'I don't know, I'm just not ready for that level of commitment.'

What was actually frying my laptop? Functions recreated on every render, sending my poor useEffect hooks spinning like a dog chasing its tail. 🌀

// This function gets a brand-new identity on every single render...
const sendMessageToHost = (message: Message) => {
  if (iframeRef.current?.contentWindow) {
    iframeRef.current.contentWindow.postMessage(
      message,
      "http://localhost:3000"
    );
  }
};

// ...so this effect sees a "changed" dependency and re-fires every time.
useEffect(() => {
  sendMessageToHost({
    type: MESSAGE_TYPES.UPDATE_COMPONENTS,
    components: debouncedComponentTree,
  });
}, [debouncedComponentTree, sendMessageToHost]);

React and its mind games! Normally, you sprinkle in a little useCallback, hope GitHub Copilot catches any dependency you miss, and move on. But this time, I thought:

What if I let the new React Compiler do the heavy lifting?

React Compiler to the Rescue

Like finally dating someone emotionally mature after a string of disasters. So I turned on the experimental compiler in next.config.js:

// next.config.js
const nextConfig = {
  experimental: {
    reactCompiler: true,
  },
};

module.exports = nextConfig;
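(One heads-up from my setup: the compiler itself ships as a separate Babel plugin, so I also had to install babel-plugin-react-compiler as a dev dependency before that flag did anything.)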

I deleted all my manual memoization.

The results? My render times dropped from 20.8ms to 2.1ms. No useCallback, no headaches, no pain-in-the-butt manual optimizations. React Compiler - my knight in shining armor!

The compiler looked at my sendMessageToHost function, saw that nothing it depends on ever changes (a ref object is stable across renders), and just… memoized it for me. Automatically.

It was like having a really good boyfriend who gets you Starbucks coffee before you ask.
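For the curious, here's roughly the manual version I got to delete. The compiler's real output is lower-level generated code, but the net effect matches this hand-written useCallback:

// (useCallback comes from "react")
const sendMessageToHost = useCallback((message: Message) => {
  if (iframeRef.current?.contentWindow) {
    iframeRef.current.contentWindow.postMessage(
      message,
      "http://localhost:3000"
    );
  }
}, []); // iframeRef is a stable ref, so the dependency array stays empty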

What's Next

I had:

  • A page builder that actually listened (mind-blowing!)

  • A laptop that wasn't having meltdowns.

  • A React compiler doing what it promised.

Now I can focus on:

  • Parsing commands like “add a button with text ‘Click Me’” (a naive first stab is sketched after this list)

  • Handling chaos like “make it blue” when there are five buttons

  • Mapping natural speech to component props without losing my mind
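To be clear, none of this exists yet - but here's the kind of naive, regex-based first pass I have in mind (parseCommand and the Command type are hypothetical):

// Hypothetical sketch of a first-pass command parser - not in the repo yet.
type Command =
  | { kind: "add"; component: "button"; text: string }
  | { kind: "unknown"; raw: string };

function parseCommand(transcript: string): Command {
  // Matches phrases like: add a button with text 'Click Me'
  const match = transcript.match(/add a button with text ['"]?(.+?)['"]?$/i);
  if (match) {
    return { kind: "add", component: "button", text: match[1] };
  }
  return { kind: "unknown", raw: transcript };
}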

Basically, it’s less talking and more connecting.

"Thank U, Next" by Ariana Grande

Sometimes rejection really is redirection. That job I didn’t land? It pushed me to build something I wouldn’t have thought of otherwise.

And that’s the beauty of side projects: they aren’t born in brainstorming sessions — they’re sparked by detours, rejections, and little “what ifs.”

This one just happens to involve speech recognition, React’s new compiler, and an avoidant boyfriend cameo.

🎤 Mic drop. React Compiler: officially a keeper.
