DEV Community

AI, please stop guessing a.k.a. Chrome DevTools MCP

Intro

We used to think coding was the most difficult part of an engineer’s job. Almost every engineer I know enjoys sharing tales of spending hours debugging, wrestling with stubborn syntax, or churning out countless lines of code to meet a deadline. However, our field has changed drastically in recent months.

Coding is no longer the main challenge. With AI tools like Cursor, Copilot, or ChatGPT, generating code is fast — sometimes too fast. The true bottleneck has shifted elsewhere: to understanding what the application is actually doing, reviewing code, and debugging code generated by AI agents.

Let’s be honest: most of us aren’t working on shiny, clean greenfield demos. We work on real systems used by millions of people. We can’t simply rewrite the entire system overnight, our documentation is rarely perfect, and sometimes we have to maintain that one module built a decade ago. There is always something more pressing on the roadmap than finally replacing legacy components.

This fact remains unchanged by AI: programmers spend significantly more time reading, extending, and fixing existing code than they do generating brand new code. Can AI assist with this, too? Yes, but…

Context is key

AI is great at generating code but struggles to interpret reality accurately. Without specific context, it often offers generalized, sometimes irrelevant advice, like an uncle at a family dinner who knows everything about politics, the economy, and every other aspect of life. For instance, when asked about performance issues, AI correctly suggests theoretical fixes like caching or checking for slow requests, but what engineers usually need is detailed, actionable data, such as which specific request is slow or which image is oversized, not general knowledge we all already have.

To make AI truly valuable in daily engineering work, it must be grounded in reality by having access to real-time data:

  • The view from the browser.
  • The actual network responses.
  • Errors reported in the console.
  • Real-world user application behavior.

Fortunately, a tool designed for exactly this exists: the Chrome DevTools MCP.

Definitions

The Model Context Protocol (MCP) acts as a “universal connector,” similar to USB-C for AI. It allows AI agents to “plug into” actual tools and retrieve structured data. This represents a significant shift: instead of speculating, “hallucinating,” or requiring numerous follow-up questions, the AI can fetch concrete data, inspect systems, and validate its own assumptions, shortening the feedback loop.
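Under the hood, MCP is JSON-RPC 2.0: the agent lists the tools a server exposes and then calls them by name. A minimal sketch of a tool call might look like this (the tool name `take_screenshot` is illustrative here; check the server's repository for the actual tool list):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "take_screenshot",
    "arguments": {}
  }
}
```

The server replies with a structured result (for a screenshot, image content), which the agent can inspect directly instead of guessing what the page looks like.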

Chrome DevTools is a well-known set of programming tools available in the Chrome browser. It’s totally free and developed by Google engineers. And what’s great — since September 2025 it has an MCP server, which means you can connect your AI agent to those powerful features and give it a way to get real data about your app. All you have to do is add the snippet below to your mcp.json file:

{
  "mcpServers": {
    "chrome-devtools": {
      "command": "npx",
      "args": ["-y", "chrome-devtools-mcp@latest"]
    }
  }
}

What can it actually do?

The full list of tools is available in the GitHub repository, but they can be grouped into the following sections:

  • Input automation
  • Navigation automation
  • Emulation
  • Performance
  • Network
  • Debugging

So it can click around the page, fill in forms, emulate a different color scheme, CPU throttling, network conditions, and viewport sizes, run performance analyses (including Lighthouse audits), take memory snapshots, list all network requests, read console errors, take screenshots and accessibility snapshots, and much, much more!

Here are a few practical examples (aka: things that save your sanity):

  • Take screenshots — AI can capture the current UI state. Why does it matter? No more “it looks broken” descriptions, easier debugging of visual issues, and it is useful for regression checks. Especially helpful when something is misaligned, invisible, or “just slightly off” (the worst kind of bug).
  • Navigate the app — AI can open pages, click things, and simulate user flows. You don’t have to manually reproduce every bug; you can test user journeys faster, or even ask an AI agent to check all user flows without specifying every possible path.
  • Inspect network requests — AI can analyze real status codes, payloads, and timing of API calls to, for example, instantly spot failed requests, detect slow endpoints, or debug CORS issues without guessing. As Chrome DevTools can also read console messages and take snapshots, it’s really useful when something is not visible in the UI or it’s presented incorrectly.
  • Analyze performance — AI can look into performance traces of slow renders, blocking scripts, and heavy components, so you can easily find real bottlenecks and improve performance based on real data, not theoretical assumptions. It can also run a Lighthouse audit before and after a fix to confirm that the proposed change really helped. Personally, it’s my favorite Chrome DevTools feature: tuning performance is always tricky, and letting AI agents self-heal based on immediate feedback is much more efficient and less annoying than checking things manually and feeding the results back into the agent chat.
  • Connect to a running Chrome instance — What’s more, you can also connect it to a running Chrome instance (by default it opens a new one), so you can, for example, examine a tricky case you started debugging manually, or avoid the hassle of automating login. You can find instructions here: https://github.com/ChromeDevTools/chrome-devtools-mcp/?tab=readme-ov-file#connecting-to-a-running-chrome-instance
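For the last point, a rough sketch of the setup: start Chrome yourself with remote debugging enabled (e.g. `google-chrome --remote-debugging-port=9222`) and then point the MCP server at that instance. The flag name below follows the README linked above; double-check it against the current docs:

```json
{
  "mcpServers": {
    "chrome-devtools": {
      "command": "npx",
      "args": [
        "-y",
        "chrome-devtools-mcp@latest",
        "--browser-url=http://127.0.0.1:9222"
      ]
    }
  }
}
```

With this, the agent works against your existing session, so any login state or reproduction steps you set up manually are already there.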

Of course, there are many more cases where it’s useful, but the pattern and idea are always the same. By connecting AI to DevTools via MCP, we give it eyes (so it can see what’s happening) and hands (so it can interact with the app). And that changes everything.

Why do we need it?

Instead of playing ping-pong with an agent and manually verifying its changes, you can delegate the problem to the agent, let it automatically check whether the problem was solved, and self-heal if needed. This means much shorter feedback loops and much better results. What’s nice is that it also reduces the problem of starting the agent, going for a coffee, and… discovering an hour later that nothing happened because the agent asked for additional data and I forgot about the request (which, unfortunately, happens to me often).

So what changes for us? With smarter and less needy agents, we spend less time chasing typos, manually debugging requests, reproducing bugs, or switching between tools. And we get more time for the things that really matter, like architecture decisions, improving user experience, and solving real users’ problems.

Or in simpler terms:

Let AI take our jobs (but only the boring parts). So we have time for things that really matter.
