Playwright MCP burns 1.5M tokens. CLI does it in 27k. So I built the skill that splits the phases.

I wanted to test my web app. That's it. A Next.js portfolio and a SaaS chat — run some accessibility checks, catch console errors, verify nothing's broken on mobile. The kind of thing you do before pushing to production.

I opened Claude Code, connected Playwright MCP, typed "test the app" and watched it burn through tokens like there was no tomorrow. Then /compact fired with my text context only 18% used. Then I discovered the invisible image budget. Then I spent three days building the tool I wished existed.

This is the story of how a routine testing session turned into webtest-orch — an open-source Claude Code skill that does e2e testing without bankrupting your token budget or hitting invisible context limits.

The problem: MCP is great at exploring, terrible at replaying

In November 2025, Pramod Dutta published an analysis that went around the AI-testing corner of the internet: Playwright MCP burns ~114k tokens per single test. The Özal benchmark on Microsoft's own issue tracker shows e-commerce verify workflows hitting ~1.5M tokens on MCP. The Playwright CLI? Still ~27k.

That's a 50–60× asymmetry. The cause is architectural: MCP keeps the LLM in the browser loop for every action — navigate, click, wait, screenshot, reason, repeat. Great for discovering a UI you've never seen. Catastrophically expensive for replaying the same flow a second time.

Microsoft's own README has since been updated to recommend CLI + Skills over MCP for coding agents. The official Test Agents documentation now ships a Planner / Generator / Healer triplet as the supported architecture — not "agent stays inside MCP for the whole session."

The fix isn't "use less Playwright MCP." It's split exploration from replay.

The second problem: the invisible image budget

While debugging my token usage, I found something worse. Claude Code has a second context limit that nobody documents — an inline-image budget of roughly 50–100 blocks per session. No counter. No warning. No --show-image-budget flag.

Every Playwright:browser_take_screenshot returns one image block. Fifty screenshots in, you've used 0.4% of your text budget and 100% of your image budget. Then /compact fires while your text context is 80% empty. The agent loses everything that wasn't disk-backed.

I tried three fixes:

1. "Just take fewer screenshots" — discipline drifts by turn 30 in any real exploratory session.

2. A CLAUDE.md rule, "never take screenshots" — soft rules survive about 30 turns under pressure. The agent reaches for a screenshot when stuck on a modal and rationalises the rule violation in the same turn.

3. context: fork in skill frontmatter — the documented official field, which silently failed to register the skill on my Claude Code 2.1.x Windows build. 90 minutes of debugging before I gave up and put the protection in the skill body instead.

What actually works: Task subagents have isolated image budgets. Whatever a subagent reads doesn't count against the parent. I verified empirically — dispatched a subagent to read 6 PNGs, return 6 text descriptions, parent counter stayed at zero.
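
For illustration, the dispatch amounts to something like this (the path and wording are hypothetical, not the skill's literal prompt):

Read reports/run-07/home-mobile.png. Reply with exactly ONE line of plain text:
PASS, or FAIL: <one-sentence reason>. Do not quote or describe the image beyond that.

The image block lands in the subagent's context and dies with it; only the one-line verdict crosses back to the parent.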

Three days later: webtest-orch

By day three I had a working skill built around one architectural invariant: the parent chat never receives an image, ever. Four patterns enforce it:

Pattern A (90% of work): ARIA-tree exploration via Playwright:browser_snapshot — returns the page as text, not images. Same locator information as a screenshot, except the agent can grep and diff it. Zero image-budget cost.

Pattern B (3–5 per run): When vision is genuinely needed (pixel-diff fired, layout check on a zero-baseline page), a Task subagent reads ONE image and returns ONE text line. The subagent burns its own budget; the parent stays clean.

Pattern C: Playwright's built-in toHaveScreenshot() returns diff% as JSON when run through npx playwright test. Text the whole way down. Vision tokens only burn when the diff genuinely fires — and even then it's Pattern B.

Pattern D: Screenshots on disk (failure artifacts, MCP cache) cost zero unless explicitly Read. Don't conflate "file exists" with "file costs context."
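
To ground Patterns A and C, here's a minimal sketch in plain Playwright. locator.ariaSnapshot() is, as far as I can tell, the CLI-side analogue of the MCP snapshot tool; the assertion and threshold are illustrative:

import { test, expect } from '@playwright/test';

test('text-first smoke', async ({ page }) => {
  await page.goto('/');

  // Pattern A: the page as text. ariaSnapshot() (Playwright 1.49+) returns
  // the accessibility tree as a string: greppable, diffable, zero
  // image-budget cost.
  const tree = await page.locator('body').ariaSnapshot();
  expect(tree).toContain('heading');

  // Pattern C: built-in visual regression. On mismatch the failure output
  // carries the pixel-diff numbers as text; vision tokens burn only if you
  // then escalate to Pattern B.
  await expect(page).toHaveScreenshot('home.png', { maxDiffPixelRatio: 0.01 });
});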

The skill flow: first run → Playwright MCP walks the app via ARIA snapshots (in a subagent), generates *.spec.ts. Every run after → npx playwright test directly — deterministic, ~zero ongoing LLM cost. Bug fingerprinting with SHA-256 composite keys, run-diff that classifies bugs as new / regression / persisting / fixed.
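
The run-diff itself is set logic over fingerprints. A sketch, with a record shape that is my assumption rather than the skill's actual schema:

type RunDiffStatus = 'new' | 'regression' | 'persisting' | 'fixed';

function classify(
  fp: string,
  current: Set<string>,   // fingerprints failing in this run
  previous: Set<string>,  // fingerprints failing in the last run
  everGreen: Set<string>, // fingerprints that were fixed at some point
): RunDiffStatus {
  if (current.has(fp)) {
    if (previous.has(fp)) return 'persisting';
    return everGreen.has(fp) ? 'regression' : 'new';
  }
  return 'fixed'; // failed before, absent now
}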

The issues[] collector pattern

Every generated spec collects all soft checks into one array:

import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

test('home page baseline', async ({ page }) => {
  const consoleErrors: string[] = [];
  const failedRequests: string[] = [];
  const issues: string[] = [];

  // Listeners go on BEFORE goto() so nothing during page load is missed.
  page.on('pageerror', (e) => consoleErrors.push(`pageerror: ${e.message}`));
  page.on('console', (m) => m.type() === 'error' && consoleErrors.push(`console: ${m.text()}`));
  page.on('response', (r) => r.status() >= 400 && failedRequests.push(`${r.status()} ${r.url()}`));

  await page.goto('/');

  const a11y = await new AxeBuilder({ page })
    .withTags(['wcag2a', 'wcag2aa', 'wcag21aa', 'wcag22aa']).analyze();
  a11y.violations.forEach((v) =>
    issues.push(`a11y[${v.impact}] ${v.id}: ${v.help} (${v.nodes.length}x nodes)`));

  // overflow, heading hierarchy, touch targets, html-lang — all push into issues[]

  // One assertion, every soft failure in one message.
  expect(issues, `${issues.length} issues found:\n  - ${issues.join('\n  - ')}`).toEqual([]);
  expect(consoleErrors).toEqual([]);
  expect(failedRequests).toEqual([]);
});

When a test fails, you get every problem in one message. A post-run script parses the output and emits one bug record per issue, with a stable fingerprint for cross-run diffing.
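
The fingerprint is a SHA-256 over a composite key. The exact fields hashed below are my guess at the idea, not the verbatim implementation; what matters is hashing stable coordinates of the bug, never volatile ones like timestamps or run IDs:

import { createHash } from 'node:crypto';

// Hash "where + what": the same issue in the same place produces the same
// key on every run, so records join across runs.
function fingerprint(specFile: string, checkId: string, target: string): string {
  return createHash('sha256')
    .update(`${specFile}\n${checkId}\n${target}`)
    .digest('hex');
}

// fingerprint('home.spec.ts', 'a11y/color-contrast', 'nav > a.cta')
// -> identical hex string until the bug moves or is fixed.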

What gets tested out of the box

Every generated spec enforces: console error listeners (attached before page.goto(), with a noise-filter for GTM/Stripe/Sentry/Next.js/Supabase/ResizeObserver), axe-core WCAG audit (wcag2a through wcag22aa), heading hierarchy (no h1 → h3 jumps), touch-target sizing (WCAG 2.5.8 AA = 24×24 CSS px), horizontal overflow detection, and html lang presence. Visual regression uses Playwright's built-in toHaveScreenshot() — zero external dependencies.
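
For concreteness, two of those checks in the issues[] style. This is a sketch meant to slot into the test body above; selectors and messages are illustrative, not the generated code:

// Heading hierarchy: flag level jumps such as h1 -> h3.
const levels = await page.$$eval('h1,h2,h3,h4,h5,h6', (els) =>
  els.map((el) => Number(el.tagName[1])));
for (let i = 1; i < levels.length; i++) {
  if (levels[i] > levels[i - 1] + 1)
    issues.push(`heading-jump: h${levels[i - 1]} -> h${levels[i]}`);
}

// Touch targets: WCAG 2.5.8 AA floor of 24x24 CSS px for interactive elements.
const tooSmall = await page.$$eval('a, button', (els) =>
  els.filter((el) => {
    const r = el.getBoundingClientRect();
    return r.width > 0 && (r.width < 24 || r.height < 24);
  }).length);
if (tooSmall > 0) issues.push(`touch-target: ${tooSmall} element(s) under 24x24 px`);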

Severity is auto-assigned from axe impact fields and error class, with three override mechanisms: [severity:S0] inline in the collector, in the test name, or // @severity: S0 as a comment before test(). Tracker mappings for Linear, GitHub, and Jira ship out of the box.
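
The three override forms side by side, in an illustrative fragment:

// @severity: S0   <- comment form, placed directly before test()
test('checkout total renders [severity:S0]', async ({ page }) => { // <- test-name form
  const issues: string[] = [];
  // ...
  issues.push('[severity:S0] order total mismatch'); // <- inline collector form
});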

The honest competitive picture

Octomind posted a farewell letter on April 30, 2026. The paid names still standing — QA Wolf ($60–250k/year per Bug0's analysis), Mabl, BrowserStack AI — sell real value: cloud parallelism, human triage, SOC 2, SLAs.

webtest-orch does not compete with any of those at scale. No managed cloud, no human review layer, no compliance certification. For solo devs and small teams already on Claude Code with no QA budget, it's a credible $0/mo option. For a 50-person team running cross-browser nightly regression — it isn't, and pretending otherwise would be dishonest.

The honest peer group is the free/OSS tier: Microsoft's native playwright init-agents --loop=claude (Planner/Generator/Healer triplet) and Magnitude (Apache/MIT, vision-AI framework). webtest-orch's deltas: out-of-the-box axe-core + console + network audit, bug fingerprinting with run-diff, and tracker mappings. None of those are in the free alternatives.

What I deliberately did not build

No self-healing. The QA community has been pushing back on self-healing — the failure mode is well-documented: healer picks a visually-similar-but-wrong element, test goes green, bug ships. webtest-orch prefers red over false-green.

No vendor cloud. Tests stay in your repo. Reports on your filesystem. If the npm package disappears tomorrow, your suite still runs.

No "AI writes all your tests" pitch. This is a complement, not a replacement, for engineering judgment. It's especially good at the boring 80%: a11y, console, network, responsive, regression diffs.

Two real validation runs

Both apps are public on GitHub — not synthetic benchmarks.

CreatmanCEO/portfolio (creatman.site) — static Next.js portfolio, mobile viewport. 4 real bugs found, 0 false positives: axe-core color-contrast failure on 8 elements (S1), two touch-targets under 24×24 px (S2), heading-jump h1→h3 on /projects (S2). Image budget burned in parent: zero.

CreatmanCEO/lingua-companion — voice-first AI language-learning SaaS in private beta (Next.js 16 + FastAPI + Supabase + WebSocket). 11 specs across login, chat, translation, TTS, settings, phrase library, scenario mode, stats, logout. 10/10 green after 4 iterations, ~12 min wall-clock. The dogfood round produced 6 fixes for v0.2.0.

Quick start (3 minutes)

npx webtest-orch@beta install

# If MCPs are missing:
claude mcp add --scope user playwright npx @playwright/mcp@latest
claude mcp add --scope user chrome-devtools npx chrome-devtools-mcp@latest

Drop a .env.test in your project with TEST_BASE_URL, restart Claude Code, say "test the app." The skill auto-detects authed vs public, scaffolds Playwright + axe-core, runs the exploratory pass, writes reports/<run-id>/index.html.
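
A minimal .env.test (TEST_BASE_URL is the documented key; the value is an example):

# .env.test
TEST_BASE_URL=http://localhost:3000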

Status

0.3.1-beta — 113 tests, full CI on Linux/macOS/Windows. MIT license. Looking for early users on Linux/macOS — there's a 5-minute OS-compatibility report template in the issues.


Repo: github.com/CreatmanCEO/webtest-orch. License MIT.

The companion piece on the other invisible token drain in agent loops — hierarchical project context — is here: The context problem nobody talks about.

Nick (Creatman). Full-stack dev, working with Claude Code daily on 15+ web apps. Open to remote opportunities — creatmanick@gmail.com
