
Posted on • Originally published at pagebolt.dev


MCP Server Security — Why Hosted Browser Automation Creates a Safer Audit Trail Than Self-Hosted Puppeteer

You're building AI agents that automate browser tasks. You have two options: self-hosted Puppeteer MCP, or a hosted API.

The security implications are not obvious. Let's break them down.

The self-hosted Puppeteer problem

Self-hosted Puppeteer gives your AI agent direct browser access:

// Your agent code
const puppeteer = require('puppeteer');

const browser = await puppeteer.launch(); // launch() returns a Promise
const page = await browser.newPage();
await page.goto('https://app.example.com');

This looks clean. But here's what's actually happening:

Direct access: Your agent has full browser control. Every cookie it touches, every page it visits, every screenshot it takes is handled locally on your infrastructure.

Credential exposure: If your agent runs in an untrusted environment (cloud, shared hardware, compromised container), anyone with access can intercept credentials, session tokens, and data the browser touches.

Infrastructure burden: You manage Chromium, memory, crashes, timeouts, cleanup. At scale, this becomes a DevOps nightmare.

Audit trail: None. When something goes wrong, you have local logs. You don't have centralized, tamper-proof records of what happened.

The security data

Security researchers evaluated 10,631 MCP tools across the ecosystem. Of those, 5,877 scored poorly on security criteria:

  • No rate limiting
  • No request logging
  • No access control boundaries
  • Direct filesystem exposure
  • Credential handling in plaintext

Self-hosted Puppeteer MCP has all of these problems by default.

The hosted alternative

A hosted browser automation API inverts the model:

// Your agent code
const response = await fetch('https://api.pagebolt.dev/screenshot', {
  method: 'POST',
  headers: { 'x-api-key': apiKey },
  body: JSON.stringify({ url: 'https://app.example.com' })
});
const screenshot = await response.blob();

Your agent doesn't get a browser. It gets an API call.

What changes:

  1. Rate limiting is built in. The API enforces 10–100 calls/min per key. Brute force attacks fail immediately. Runaway agents get throttled.

  2. Audit trails are automatic. Every call is logged with timestamp, user, action, success/failure. You can query: "What did this API key do on March 2?" Compliance teams can audit it.

  3. Credentials stay isolated. Your agent passes cookies via headers, but the hosted service never logs or stores them. Session tokens don't leak into your infrastructure.

  4. Access boundaries are enforced. Your agent can't read local files. It can only call the screenshot API. A compromised agent is limited to taking screenshots, not pivoting into your infrastructure.
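On the client side, built-in rate limiting means your agent should expect 429 responses and back off rather than retry blindly. A sketch of a retry wrapper (the helper name and backoff numbers are mine, not part of any API):

```javascript
// Hypothetical helper: retry a request on HTTP 429 with exponential
// backoff, up to `maxRetries` additional attempts.
async function withRetry(doRequest, maxRetries = 3, baseDelayMs = 500) {
  for (let attempt = 0; ; attempt++) {
    const res = await doRequest();
    if (res.status !== 429 || attempt >= maxRetries) return res;
    // back off: 500ms, 1000ms, 2000ms, ...
    await new Promise(r => setTimeout(r, baseDelayMs * 2 ** attempt));
  }
}

// Usage with the screenshot call from the example above:
// const res = await withRetry(() =>
//   fetch('https://api.pagebolt.dev/screenshot', {
//     method: 'POST',
//     headers: { 'x-api-key': apiKey },
//     body: JSON.stringify({ url: 'https://app.example.com' })
//   })
// );
```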

Real comparison: Puppeteer vs Hosted

Scenario: Your AI agent processes customer data and takes screenshots for compliance.

Self-hosted Puppeteer:

Agent runs with direct browser access
Agent navigates to /customer/123/dashboard
Agent takes screenshot
Agent navigates to /admin/settings  ← oops, it can reach this
Agent extracts API keys from page
Attacker now has credentials

You have local logs. You discover the breach three days later.

Hosted API:

Agent calls POST /screenshot?url=/customer/123/dashboard
API enforces rate limit: OK (under 100/min)
API logs call with timestamp, agent ID, URL
API returns screenshot
Agent calls POST /screenshot?url=/admin/settings
API logs call
You review audit trail 5 minutes later
You see the suspicious call immediately
You revoke the API key
Attacker's access terminated

The tradeoff is real

Self-hosted Puppeteer gives you code visibility and full control. You can audit the Puppeteer source code. You own all the data locally.

Hosted APIs trade some control for:

  • Instant attack mitigation (revoke API key, not a compromised agent)
  • Rate limiting (brute force protection out of the box)
  • Audit trails (compliance, incident response, forensics)
  • Zero infrastructure management (no Chromium crashes, no memory leaks)

When self-hosted makes sense

If your AI agents run in a fully trusted environment (your own machines, your own data center, air-gapped network), self-hosted Puppeteer is fine.

If your agents run anywhere else — cloud, shared infrastructure, customer devices, untrusted containers — a hosted API with audit trails is safer.

The numbers

  • 5,877/10,631 MCP tools score poorly on security
  • Self-hosted Puppeteer: 0 audit trails by default
  • Hosted API: Automatic logging, rate limiting, access boundaries

The security model isn't a nice-to-have. It's fundamental.

Getting started

PageBolt's hosted model is simple: call an API, get a screenshot (or PDF, or video). All calls are logged, rate-limited, and auditable.

Free tier: 100 requests/month.

Get started at https://pagebolt.dev


If you're evaluating MCP tools for production AI workflows, the security model matters as much as the capability. Hosted with audit trails beats self-hosted without them.
