DEV Community

Phil Rentier Digital

Posted on • Originally published at rentierdigital.xyz

Chrome DevTools Protocol + Claude Code: The Pattern Open Source Teams Spent Years On

I'll admit it, curiosity is both a flaw and a feature with me.

Every app you use daily can do more than what it shows you. There are often beta features hidden behind a flag, and undocumented endpoints running, responding quietly.

I pointed Claude Code at a local Ghost instance with Chrome DevTools as an MCP server. In one afternoon, the agent found 27 endpoints the official documentation mentions nowhere. Detailed member stats, a full audit log, a database export in a single call. All of that was there. From the start.

I'm talking about Ghost here, but what matters is the method works on any app (including commercial ones...). And what used to take weeks for the obsessives behind yt-dlp, a solo dev with an agent now does between coffee and lunch.

TL;DR: AI agents + Chrome DevTools turn internal API reverse engineering into a reproducible one-shot. 27 undocumented endpoints found on Ghost in one afternoon, typed wrapper + tests included. The method works on any tool. Here's how.

Fair warning: experiments on open-source software I run locally. Proprietary tools? Check the TOS first. I'm sharing a method, not legal advice.

The Documentation Is the Children's Menu

You're at a restaurant and they hand you the children's menu. Six items, big fonts, pictures of happy chickens. Meanwhile the kitchen runs a full 40-item carte with stuff you'd actually want to order. Nobody's hiding it from you. They just figured you wouldn't ask.

That's what application documentation is. A curated selection, not a technical inventory.

Three forces keep it that way. Features that aren't "ready" for public consumption but already work internally (the admin UI uses them, you just can't). Capabilities gated behind premium tiers that technically respond to anyone who hits the right endpoint. And stuff the team built for their own operations and never bothered to document because it wasn't meant for you.

To be fair to vendors, there's a solid reason for this. Every endpoint you put in the docs becomes an implicit stability contract. Break it, and a thousand developers open a GitHub issue before you finish your morning coffee. So teams document the minimum viable surface and move on. That's not malicious. It's just expensive to do otherwise.

The docs show you what they want you to use. Not what the app can do.

The Obsessives Who Came Before Us

Before agents entered the picture, reverse engineering internal APIs was already a thing. A glorious, painful, time-consuming thing.

Consider yt-dlp. Hundreds of contributors maintaining one piece of software whose entire purpose is to understand YouTube's internal API. Every time Google changes something (which is constantly, sometimes seemingly out of spite), someone has to figure out the new flow, patch it, ship it. It works. But it's also a full-time project for a small army of volunteers.

Then there was Nitter. A beautiful alternative Twitter frontend built entirely on reverse-engineered endpoints. Worked great, until Elon locked the APIs and it was finished. Years of work, gone in a policy change. Remember that one, it comes back later.

These projects proved something important: the undocumented capabilities are real, useful, and people will build remarkable things on top of them. But the cost was absurd. Weeks of manual traffic inspection. Deep protocol expertise. Constant maintenance against moving targets. It was a sport for the obsessive (I remember debugging games in ASM on Amstrad CPC, so I get the appeal, but still).

yt-dlp has hundreds of contributors to maintain a single reverse engineering effort. I needed zero.

An AI Agent Just Collapsed Weeks Into Minutes

The fundamental shift is not about better tooling. It's about who does the exploration work.

Here's the technical setup. Chrome DevTools Protocol (CDP) exposes everything the browser knows: DOM tree, network requests, console output, performance metrics. Normally you interact with it through the DevTools GUI or via Puppeteer-style automation. An MCP server wraps CDP into a protocol that AI agents speak natively. The agent gets three capabilities that matter here: javascript_tool (execute arbitrary JS in the page context, including fetch() calls with the active session cookies), computer (wait, click, navigate), and access to the full network waterfall.

That combination is what changes the game. The agent doesn't just read about APIs. It makes live calls inside an authenticated session, inspects the responses, and iterates. All the things you'd do manually with the Network tab open, except the agent does it systematically and doesn't get bored after endpoint number seven.
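As a minimal sketch (the function names here are illustrative, not the MCP server's actual tool API), the kind of snippet an agent injects via javascript_tool is just an ordinary fetch that rides the page's session cookies:

```typescript
// Runs inside the authenticated admin page context, so the request
// automatically carries the active session cookies. No token juggling.
async function probe(path: string): Promise<number> {
  const res = await fetch(path, {
    credentials: "include",
    headers: { accept: "application/json" },
  });
  return res.status; // 200 = live endpoint, 404/501 = dead end
}

// Tiny helper so probes stay consistent across the whole exploration.
function adminUrl(path: string): string {
  return "/ghost/api/admin/" + path.replace(/^\/+/, "");
}
```

The agent loops `probe(adminUrl(...))` over candidate paths and records status plus response shape, which is exactly the manual Network-tab workflow, minus the boredom.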

Google shipping Chrome DevTools as an official MCP server in March 2026 is what makes this not a hack but a supported workflow. The company that builds the browser decided that giving AI agents live access to the DOM, the network tab, and the console was worth maintaining. That's an industry signal, not a community experiment.

Before this, agents were essentially blind to runtime behavior. They could read documentation, generate code, call known APIs. But they couldn't watch what an application actually does on the wire. Now they can. The agent reads the traffic, not the docs. On open source, that's your fundamental right. On proprietary software, your TOS mileage varies, which is why the disclaimer up there exists.

Google just made the pattern official. Agents read network traffic now, not documentation.

Ghost, 27 Endpoints, One Afternoon

The setup: Ghost v6.22.0 running locally, Claude Code with Chrome DevTools MCP connected to the admin panel at localhost:2368/ghost/.

The first prompt wasn't "go explore" (that would be vibe coding). It was structured: intercept all admin panel requests, catalog unique endpoints by path and HTTP method, record response shapes, then systematically probe adjacent URL patterns. The agent used javascript_tool to inject fetch() calls directly in the admin page context, which meant it inherited the active session cookies and admin-level permissions. No separate authentication dance needed.

Phase 1: passive interception. While I navigated through the Ghost admin (dashboard, posts, members, settings), the agent recorded every API call the frontend made. Thirteen live endpoints surfaced immediately. These are the ones the admin UI actually uses but that the official API docs don't mention.

Phase 2: active probing. This is where it gets interesting. The agent took the URL patterns it had already seen (/ghost/api/admin/stats/..., /ghost/api/admin/actions/...) and started probing variations. It tried adjacent routes, different query parameters, the plural and singular forms of what it already knew. It fetched the official Ghost Admin API docs and the Content API docs in parallel, then computed the delta between what's documented and what actually responds with a 200. By the end: 27 endpoints total, all returning valid data.
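A toy version of that variation step, assuming nothing beyond what's described above (the real agent also mutates query parameters and cross-references the official docs), could look like:

```typescript
// Given an endpoint path the agent has already seen respond, derive
// adjacent candidates to probe: toggle the last segment between its
// plural and singular forms.
function candidateVariants(path: string): string[] {
  const trimmed = path.replace(/\/+$/, "");
  const parts = trimmed.split("/");
  const last = parts[parts.length - 1];
  const variants = new Set<string>();
  if (last.endsWith("s")) {
    // actions/ -> action/
    variants.add([...parts.slice(0, -1), last.slice(0, -1)].join("/") + "/");
  } else {
    // db/ -> dbs/
    variants.add([...parts.slice(0, -1), last + "s"].join("/") + "/");
  }
  return [...variants];
}
```

Each candidate gets probed live; anything that answers 200 with valid JSON joins the catalog, and the documented/undocumented delta falls out at the end.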

Phase 3: wrapper construction. The agent generated an 830-line TypeScript wrapper (ghost-enhanced-api.ts) with two clients. A GhostOfficialClient that wraps the documented Admin API (your baseline), and a GhostEnhancedClient that adds every undocumented endpoint found. Strict TypeScript interfaces for every response shape, because when you're working with endpoints that have no documentation, types are your documentation.
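A condensed sketch of that shape (the interface fields and method names here are illustrative; the generated 830-line file is obviously richer):

```typescript
// Hypothetical response shape for an undocumented stats endpoint.
// With no docs, the interface IS the contract.
interface MemberCountStat {
  date: string;
  paid: number;
  free: number;
  comped: number;
}

class GhostEnhancedClient {
  constructor(
    private base: string,
    private getToken: () => Promise<string>
  ) {}

  private async get<T>(path: string): Promise<T> {
    const token = await this.getToken();
    const res = await fetch(this.base + path, {
      headers: { Authorization: `Ghost ${token}` },
    });
    if (!res.ok) throw new Error(`${path} -> ${res.status}`);
    return res.json() as Promise<T>;
  }

  // Undocumented endpoint: typed, but carrying zero stability contract.
  memberCountHistory(): Promise<{ stats: MemberCountStat[] }> {
    return this.get<{ stats: MemberCountStat[] }>(
      "/ghost/api/admin/stats/member_count/"
    );
  }
}
```

The two-client split matters: if an undocumented endpoint vanishes in a Ghost update, the official baseline keeps working while the enhanced layer degrades in isolation.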

Authentication was interesting too. Ghost's Admin API uses JWT signed with HMAC-SHA256, derived from a hex-encoded API key split at position 24 (the first half is the key ID, the second is the secret). The agent figured this out from observing the admin panel's own auth headers and implemented it with crypto.subtle in the wrapper. No documentation consulted for that part.

What the agent found, in concrete terms:

Stats endpoints (8 total) — stats/member_count/, stats/mrr/, stats/subscriptions/, stats/referrers/ with conversion tracking, stats/top-posts-views/. Ghost runs an entire analytics backend that the official docs pretend doesn't exist. MRR broken down by currency, referrer attribution with conversion rates, daily member growth. This is the kind of data you'd normally need a third-party analytics tool to get.

Audit log — actions/ endpoint. Complete journal of every admin operation: who changed what setting, who published which post, when. Full action_type, resource_type, actor fields. The sort of feature that's usually "Enterprise tier, contact sales."

Email system — three separate endpoint groups: emails/ (delivery stats per email), links/ (click tracking), automated_emails/ (newsletter automation metrics). Independent from post endpoints, meaning you can query email performance without going through the posts API.

Database export — GET /ghost/api/admin/db/ returns a full JSON backup. One call. (And its mirror, POST /ghost/api/admin/db/, does a destructive import. That one goes in the "don't touch" category for obvious reasons.)

Also discovered: mentions/ (Webmentions/ActivityPub), recommendations/ and incoming_recommendations/ (the recommendation engine), snippets/, labels/, roles/, and full server config.

The test suite (40 tests) passed 39/40 on first run. The one failure was a response key mismatch: incoming_recommendations/ returns its data under a recommendations key, not incoming_recommendations. Exactly the kind of inconsistency that only shows up when you actually hit the endpoint and look at what comes back. Fix was one line. 40/40.
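One way to harden against that whole class of mismatch (a generalization of the one-line fix, not the wrapper's actual code) is to tolerate the unexpected key instead of failing on it:

```typescript
// If the expected key isn't there, fall back to the first array-valued
// key in the response body. Undocumented endpoints earn this paranoia.
function unwrapCollection<T>(body: Record<string, unknown>, expectedKey: string): T[] {
  const direct = body[expectedKey];
  if (Array.isArray(direct)) return direct as T[];
  for (const value of Object.values(body)) {
    if (Array.isArray(value)) return value as T[];
  }
  throw new Error(`no collection found under or near "${expectedKey}"`);
}
```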

I've already seen Claude Code absorb an entire open-source tool and become more competent than its own documentation. Same energy here, applied to API surfaces nobody had mapped.

Classification: 22 endpoints safe (read-only), 9 use-with-caution (write operations), 1 don't-touch (POST /db/, the destructive import). And the non-negotiable part: undocumented endpoints carry zero stability contract. They change between versions without a changelog entry. A health check on every endpoint is the first thing you build, before anything else.
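A minimal sketch of that health-check layer, assuming the three-tier classification above (the probe function is injected, so write and destructive endpoints are never exercised, only read-only ones get called):

```typescript
type Risk = "safe" | "caution" | "dont-touch";

interface EndpointCheck {
  path: string;
  risk: Risk;
  ok: boolean;
}

// Only "safe" (read-only) endpoints are actually probed; anything that
// writes is reported as skipped-OK rather than fired at the server.
async function healthCheck(
  endpoints: { path: string; risk: Risk }[],
  probe: (path: string) => Promise<boolean>
): Promise<EndpointCheck[]> {
  const results: EndpointCheck[] = [];
  for (const e of endpoints) {
    const ok = e.risk === "safe" ? await probe(e.path) : true;
    results.push({ ...e, ok });
  }
  return results;
}

function broken(results: EndpointCheck[]): string[] {
  return results.filter((r) => !r.ok).map((r) => r.path);
}
```

Run it on startup and on a schedule: the moment a Ghost update removes or reshapes an endpoint, `broken()` tells you before your automation corrupts anything.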

Total agent time: under 17 minutes. The rest of the afternoon was me reading the report and deciding what to build on top of it.

27 endpoints. Zero documentation. One afternoon. Reproducible on any tool you run.

"iceberg" schema — visible part above waterline = Ghost official Admin API endpo...


"iceberg" schema — visible part above waterline = Ghost official Admin API endpo...

What This Unlocks (And Why It Matters Now)

Custom MCP servers on any tool. You discover the endpoints, wrap them in typed clients, expose them to your agents via MCP (or a CLI, your call). Your agent can now operate inside apps that have zero official agent support. The MCP ecosystem has thousands of community servers already, but most of them build on documented endpoints only. This goes one layer deeper.

Agent pipelines that don't wait for the vendor. Need Ghost to push member stats into your monitoring dashboard every morning? The official API doesn't support it. The undocumented stats/ endpoints do. You write a cron job that calls getStatsReferrers() and pipes the data wherever you want. You're no longer blocked by what someone else decided to prioritize this quarter.
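A sketch of the data-shaping half of that cron job (the field names here are assumptions based on typical referrer stats, not a documented schema; the fetch-and-push plumbing is omitted):

```typescript
// Hypothetical shape of one entry from the undocumented stats/referrers/
// endpoint, as observed in responses rather than in any docs.
interface ReferrerStat {
  source: string;
  signups: number;
  paid_conversions: number;
}

// Pure transform: turn the raw payload into rows a monitoring
// dashboard can ingest, with a derived conversion rate.
function toDashboardRows(
  stats: ReferrerStat[]
): { label: string; value: number; rate: number }[] {
  return stats.map((s) => ({
    label: s.source,
    value: s.signups,
    rate: s.signups === 0 ? 0 : s.paid_conversions / s.signups,
  }));
}
```

Keeping the transform pure means the only untestable part of the pipeline is the two network calls at either end.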

Custom extensions the UI never imagined. Combine the audit log with the email tracking endpoints to build an internal compliance dashboard Ghost will probably never ship. Bridge two tools through their internal endpoints to automate a workflow that would require three browser tabs and manual copy-paste otherwise. The sort of thing that used to require "enterprise tier, please contact sales."

Now, the choice between CLI and MCP for connecting agents to tools is an active debate with real performance tradeoffs. Both work. The point is you need something to expose first. Discovery comes before packaging.

The vendor roadmap is someone else's priority list. Your agent doesn't need to be on it.

The Rules of Engagement

Open source first. Always. On open-source software you're reading code that's publicly available; there is no gray zone, full stop. On proprietary tools the situation gets murkier fast, and the TOS might have very strong opinions about automated access. Start with open source, get comfortable with the method, then make informed decisions about where else you take it.

Health checks are not optional, and I mean structurally not optional. Undocumented endpoints have no stability guarantee. Version 5.92 might expose an endpoint that 5.93 removes without even a changelog entry. Your wrapper needs to detect breakage before it corrupts anything. Every endpoint gets a health check. Every wrapper gets a test suite. This is the boring part, and also the part without which nothing holds.

And the one rule I'd tattoo somewhere visible: never build a SaaS on internal endpoints. Personal use, internal tooling, automations for your own stack, go wild. But the moment you sell a product that depends on an undocumented endpoint, you're building on sand. Nitter learned this the hard way 🫠. One upstream policy change and the project was dead. Keep the exploration for yourself.

The approach itself demands structure too. Pointing an agent at a network tab without clear constraints technically works, but produces garbage at scale. Exploring internal APIs with an agent demands the same rigor as any production work. Prompt contracts, explicit boundaries, defined output formats. Not vibe coding.

Explore everything. Build what you need. But never sell a product on an endpoint without a contract.


For years, we used our tools like good students. The docs said "you can do this," and we said OK. Period.

That silent agreement just broke. An agent + DevTools explores the real capabilities of any application in 30 minutes. The reverse engineering that took yt-dlp hundreds of contributors and years of maintenance became a one-shot for any solo dev on a random Tuesday afternoon.

The official documentation is the brochure. The source code is the contract. And now you have an agent that reads both ;-)


Sources & links:

Google Chrome DevTools MCP — official release, March 2026

If you're a dev shipping real things with AI agents, this is what I write about. Subscribe and you'll get the methods before they become Medium trends.

(*) The cover is AI-generated. The 27 endpoints, however, are very much real and slightly offended they were never documented.
