Humans still win: why frontend basics matter — even with Gemini, GPT, and Claude

Have you ever had that nagging thought that AI assistants like GPT-4 and Gemini might make our development skills obsolete? I certainly have. Then I spent over a day wrestling with a "simple" bug that put that idea to the test.

This is the story of a bug-hunting marathon that took me through every modern debugging tool I knew, only to be solved by the most basic of frontend fundamentals.


The Bug: The Silent Click Thief 🐛

It started with what seemed like a trivial issue. I was using a Radix UI Dialog component in my application. The problem? After opening and closing the dialog, every single click event on the page would stop working. The UI was frozen, but not in the usual way. I could still highlight text, and the buttons weren't in a disabled state. They just... ignored me.

My first thought was “this must be CSS!” Maybe a transparent overlay or a rogue element with an outrageous z-index was hijacking all my clicks. I opened DevTools, ready to track it down. But—nothing. The page was spotless.
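
If you're ever hunting a similar phantom overlay, one quick sanity check is to ask the browser what actually sits under the cursor. Here's a minimal console sketch; the coordinates are placeholders you'd swap for a spot over a broken button:

```ts
// elementFromPoint returns the topmost hit-testable element at the
// given viewport coordinates. Elements under pointer-events: none
// are skipped entirely, so a visible button that doesn't show up
// here is itself a clue.
const el = document.elementFromPoint(200, 300); // placeholder coordinates
if (el) {
  const style = getComputedStyle(el);
  console.log(el, style.pointerEvents, style.zIndex, style.opacity);
}
```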

This wasn't a simple CSS problem. This was something deeper.


The Human-Powered Debugging Marathon 💨

In an effort to troubleshoot effectively, I decided to run through my entire debugging playbook.

Performance Profiling: I recorded a session of opening and closing the dialog, looking for any unexpected, massive re-renders that might be locking up the main thread. Again, nothing. The performance graph was clean.

Divide and Conquer: I started commenting out chunks of the dialog's content, hoping to isolate the faulty part. I halved it, then quartered it, then broke it down into its smallest units. The bug persisted, mocking my efforts. The issue wasn't inside the dialog content.

Reproducing with E2E Tests: I needed a reliable way to reproduce the bug without manual clicking. I spun up a quick Playwright test:

  • Create three buttons: one to increment a counter, one to open a "simple" dialog, and one to open the "edit" dialog that seemed to be the main culprit.
  • The test would click the counter, open and close a dialog, and then try to click the counter again.
  • The result? The test failed every time. After the dialog closed, Playwright couldn't click the counter button. I finally had a consistent reproduction case, but I was no closer to a solution.
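
For the curious, the test looked roughly like this. It's a sketch, not my exact code: the test IDs (counter, open-edit-dialog, close-dialog) and the dev-server URL are hypothetical stand-ins for my real markup:

```ts
import { test, expect } from "@playwright/test";

test("counter still works after the dialog closes", async ({ page }) => {
  await page.goto("http://localhost:3000"); // hypothetical dev server URL

  // Sanity check: the counter responds before any dialog is opened.
  await page.getByTestId("counter").click();
  await expect(page.getByTestId("counter")).toHaveText("1");

  // Open and close the edit dialog.
  await page.getByTestId("open-edit-dialog").click();
  await page.getByTestId("close-dialog").click();

  // With the bug present, this click times out: Playwright waits for
  // the button to be able to receive pointer events, and it never can.
  await page.getByTestId("counter").click({ timeout: 2000 });
  await expect(page.getByTestId("counter")).toHaveText("2");
});
```

Playwright's actionability checks are what make this reliable: click() refuses to fire until the element can actually receive pointer events, so the leaked style fails the test instead of slipping through.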

Enter the AI Assistants 🤖

Frustrated but armed with a reproducible test case, I turned to my AI co-pilots. I fed the code, the context, and the Playwright test results to Claude 3, GPT-4, and Gemini.

Their responses were a masterclass in confident hallucination.

They suggested changing props that didn't exist, refactoring the code in ways that broke other things, and adding modal flags that weren't part of the library. One AI "fixed" the issue by making the dialog close instantly upon opening. Technically, the clicks worked afterward, but the core functionality was gone!

They were pattern-matching without understanding. They were code parrots, not problem solvers. After hours of this, I was back at square one, feeling more convinced than ever that the human element was missing.


The "Aha!" Moment: Back to Basics 💡

Stepping away from the AI and the complex tools, I decided to do something simple. I just opened and closed the dialog, over and over, with my DevTools inspector glued to the element.

And then I saw it.

Each time the dialog closed, a single CSS property was being injected onto the body: pointer-events: none;

[Video: pointer-events set to none by closing the Radix UI Dialog]

This tiny line of code was the silent click thief. It was telling the entire page to ignore all mouse interactions. The bug wasn't an overlay; it was a global CSS change the dialog had failed to clean up on close.
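
In hindsight, there's also a programmatic way to catch this kind of silent style injection. This is a generic sketch using the standard MutationObserver API, nothing Radix-specific:

```ts
// Log every change to the <body> element's inline style attribute,
// so a library that sets (and fails to remove) a style gets caught
// in the act.
const observer = new MutationObserver((mutations) => {
  for (const m of mutations) {
    if (m.type === "attributes" && m.attributeName === "style") {
      console.log(
        "body style changed:",
        document.body.getAttribute("style"),
        "(was:", m.oldValue, ")"
      );
    }
  }
});

observer.observe(document.body, {
  attributes: true,
  attributeFilter: ["style"],
  attributeOldValue: true,
});
```

Paste that into the console before reproducing the bug, and the offending pointer-events: none shows up in the log the moment the dialog closes.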

A quick search on the library's GitHub repo revealed this was a known, long-standing bug. The solution? Simply updating the package to the latest version.

[Screenshot: Radix UI repo issues relating to pointer-events: none]

In the end, I asked Claude Code to update all Radix UI packages, and the issue vanished. Twenty-eight hours of debugging taught me that sometimes the most complex bugs have the simplest fixes.


My Takeaways for the Modern Developer 🚀

This experience wasn't a waste of time; it was a powerful lesson.

  1. Fundamentals Are Your Superpower. Advanced tools and AI are fantastic, but they are layers of abstraction. The bug wasn't in my React code; it was in the fundamental interaction between CSS and the DOM. A deep understanding of the basics, like what pointer-events does (see the short demo after this list), is what ultimately allows you to solve the unsolvable.
  2. AI is a Co-pilot, Not the Pilot. LLMs are incredible for boilerplate, refactoring, and generating ideas. But for complex, context-specific debugging, they can lead you down rabbit holes. They don't have true "experience." You, the developer, are still the pilot. Use AI to navigate, not to cede control of the cockpit.
  3. Master Your Core Tools. The hero of this story wasn't Playwright or a fancy profiler. It was the humble Browser Inspector. Knowing your browser's dev tools inside and out is the single most valuable debugging skill you can possess.
  4. Consult the Collective Brain: The GitHub Repo. Before you declare war on a bug in a third-party library, check the GitHub Issues tab. The open-source community is a massive, collective brain. My final breakthrough came from searching the repo. A quick search can save you days of frustration, revealing that you're not the first to face the problem and that a solution may already exist.
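
To make that first takeaway concrete, here's a tiny sketch of the exact mechanism that bit me. You can paste it into any page's console and watch one leaked style silence every click:

```ts
// A button with a perfectly healthy click handler.
const button = document.createElement("button");
button.textContent = "Click me";
button.addEventListener("click", () => console.log("clicked!"));
document.body.append(button);

// Simulate what the stale dialog left behind. Children inherit
// pointer-events, so every click on the page is now ignored.
document.body.style.pointerEvents = "none";

// Clicking the button now logs nothing. The "fix" is one line:
// document.body.style.pointerEvents = "";
```

Nothing is disabled and nothing is overlaid; the DOM simply stops being a hit target. That's why the buttons looked perfectly normal but ignored me.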

So, are LLMs a threat? No. They're a tool. And like any tool, their effectiveness is limited by the skill of the person using them. The need for developers who understand the fundamentals has never been greater.

What's your story of a tough bug that AI couldn't crack? Share it in the comments below!
