
Custodia-Admin

Posted on • Originally published at pagebolt.dev

8,000 MCP Servers Are Exposed. Who's Watching What They Do?


In February 2026, the security community discovered that 8,000+ Model Context Protocol (MCP) servers were operating without proper access controls. Agents could connect to unknown MCPs. MCPs could request sensitive permissions. And there was no meaningful auditing layer.

This wasn't a bug. It was architecture.

MCP is how AI agents access tools — Claude can call Slack, GitHub, SQL databases, anything exposed as an MCP server. Cursor agents can automate your codebase. Enterprise teams are building custom MCPs for internal systems: HR workflows, financial approvals, compliance documents.

The problem: you can't see what your agents are doing with those tools.

The Exposure: 8,000+ MCPs, Unknown Permissions

The Feb 2026 security research showed:

  • Agents connect to MCPs without explicit user approval
  • MCPs can request broad permissions (read files, execute scripts, access APIs)
  • Audit logs show that an agent called an MCP — they don't show what the agent saw or did
  • No visual proof of agent behavior

An agent connected to a malicious MCP might:

  • Extract your codebase
  • Scrape database records
  • Submit false transactions
  • Modify compliance documents
  • Escalate privileges

Your logs would show: agent → mcp_call → success. That's it.
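To make the gap concrete, here is a minimal sketch of what a typical text-only audit record for an MCP call contains. The field names are illustrative, not from any real MCP server implementation:

```python
# A text-only audit record proves a call happened -- not what the agent
# saw, read, or extracted along the way. Field names are illustrative.
import json
import datetime

def log_mcp_call(agent_id: str, tool: str, status: str) -> str:
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "agent": agent_id,
        "event": "mcp_call",
        "tool": tool,
        "status": status,
        # Missing: what the agent rendered or returned. A "success" here
        # could be an approved invoice or an exfiltrated vendor table.
    }
    return json.dumps(record)

print(log_mcp_call("agent-42", "finance.approve_invoice", "success"))
```

Everything in that record can be true while the agent does something entirely different from what the record implies.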

The Risk: Execution Without Visibility

This is the core vulnerability. MCP created a new attack surface: agent-to-tool communication that's both powerful and opaque.

Example scenario:

  • Your agent is authorized to approve invoices (a SOC 2 control: designated invoice approver)
  • Agent connects to Finance MCP
  • Finance MCP connects to a custom SQL tool
  • Agent runs a query that extracts vendor payment history
  • Audit log shows: "Agent approved invoice 12345"
  • Actual event: Agent exfiltrated vendor data without leaving visual evidence

Another scenario:

  • Cursor agent automates a code review workflow
  • Agent connects to GitHub MCP
  • Attacker hijacks GitHub MCP, injects malicious code
  • Agent merges the code
  • Logs show: "Code merged via automation"
  • No screenshot proof of what the agent reviewed or saw

The Governance Gap: Text Logs Are Insufficient

Compliance frameworks assume human oversight. SOC 2, HIPAA, the EU AI Act, and the NIST AI RMF all require:

  1. Evidence of intent — Did the agent mean to do this?
  2. Proof of action — What actually happened on screen?
  3. Immutable record — Can we replay it for auditors?

Text logs provide #3. They fail at #1 and #2.

SOC 2 auditor question: "Show me what happened when your agent accessed the financial system."
Your answer: "Here's the API log: transaction_approved."
Auditor response: "That doesn't prove the agent saw the form. It doesn't prove the agent reviewed the amount. It doesn't prove the agent didn't execute unauthorized transactions."

EU AI Act transparency requirement: "Demonstrate human oversight of high-risk automated decisions."
Your answer: "The agent ran these API calls."
Regulator response: "Show me what the agent saw when it made the decision."

You can't. Headless automation provides no visual proof.

The Solution: Visual Audit Trails

This is where PageBolt enters the picture.

While MCP handles execution (agent calling tools), you need a separate observation layer that captures what agents actually saw and did:

  1. Before action — Screenshot of the page/form the agent is about to interact with
  2. After action — Screenshot of the result (success page, error, state change)
  3. Video replay — Full recording of the agent's workflow with narrated explanation
  4. Inspect proof — Structured snapshot of the page elements the agent considered

For agents connected to MCPs, this means:

  • Screenshot the form before the agent submits data
  • Screenshot the confirmation after the agent completes the action
  • Record video of the full workflow for compliance playback
  • Capture inspect output to prove the agent saw specific fields/options
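The bullets above amount to a thin observation wrapper around each agent action. Here is a hedged sketch of what that wrapper could look like. `capture_screenshot` stands in for whatever screenshot API you use (a PageBolt endpoint, a headless browser, etc.); its name and signature are assumptions for illustration, not a real SDK:

```python
# Sketch: capture visual proof before and after an agent action,
# then bundle both with the action's result into one audit record.
import time
from typing import Any, Callable

def with_visual_audit(action_name: str,
                      action: Callable[[], Any],
                      capture_screenshot: Callable[[str], str]) -> dict:
    """Wrap an agent action with before/after screenshot capture."""
    before = capture_screenshot(f"{action_name}:before")  # form as the agent sees it
    result = action()                                     # the MCP-driven action itself
    after = capture_screenshot(f"{action_name}:after")    # confirmation / state change
    return {
        "action": action_name,
        "before": before,
        "after": after,
        "result": result,
        "captured_at": time.time(),
    }

# Stub capture for illustration; a real implementation would call out
# to a screenshot service and return the stored artifact's location.
trail = with_visual_audit(
    "approve_invoice_12345",
    action=lambda: "approved",
    capture_screenshot=lambda label: f"s3://audit/{label}.png",
)
```

The design point is that the capture layer sits outside the MCP call, so it works the same whether the agent is talking to a trusted internal MCP or a third-party one.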

Example workflow with visual audit:

```
Agent → Finance MCP → Invoice Approval Form
[SCREENSHOT: Approve button visible, invoice amount visible]
Agent clicks "Approve"
[SCREENSHOT: Confirmation page shows approved_at, approved_by, amount]
[VIDEO: Full replay with narration: "Agent reviewed invoice #12345 for
$10,000. Clicked approve. System confirmed approval."]
```

Auditors can replay the entire workflow. Regulators can verify the agent's behavior. You have immutable proof.

Why This Matters Now

MCP adoption is accelerating. Claude 3.5, Cursor, Windsurf, and custom agent frameworks all support MCP natively. Enterprise teams are:

  • Building internal MCPs for HR, finance, compliance
  • Chaining multiple MCPs (agent → MCP A → MCP B → database)
  • Running agents continuously without human-in-the-loop
  • Connecting to third-party MCPs they don't fully trust

Without visual audit trails, you're running blind.

The Feb 2026 exposure proved: MCP servers will be compromised. Agents will be tricked. Unauthorized actions will happen. Your only defense is proof.

Implementing Visual Audit Trails for MCP Agents

The workflow is simple:

  1. Wrap MCP calls with PageBolt screenshots — before and after each agent action
  2. Record video for critical workflows — financial approvals, data exports, system access
  3. Store the audit trail — screenshots + video + timestamp + agent ID
  4. Replay for auditors — "Here's visual proof of what happened"
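Step 3 is where immutability comes in. One simple way to make the stored trail tamper-evident is to hash each entry together with its predecessor, so any later edit breaks the chain on replay. This is a sketch of that idea, not a prescribed PageBolt feature, and the field names are illustrative:

```python
# Sketch: an append-only audit trail where each entry's hash covers the
# previous entry's hash, making after-the-fact tampering detectable.
import hashlib
import json
import time
from typing import Optional

def append_audit_entry(trail: list,
                       agent_id: str,
                       screenshots: list,
                       video: Optional[str]) -> dict:
    prev_hash = trail[-1]["hash"] if trail else "0" * 64
    entry = {
        "agent_id": agent_id,
        "screenshots": screenshots,   # e.g. before/after capture locations
        "video": video,               # optional workflow recording
        "timestamp": time.time(),
        "prev_hash": prev_hash,
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    trail.append(entry)
    return entry
```

Replaying for auditors then means walking the chain and re-verifying each hash alongside the screenshots and video.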

This is now table stakes for regulated AI agent workflows.

The Bottom Line

Text audit logs prove that something happened. Visual audit trails prove what the agent actually did.

For SOC 2, HIPAA, EU AI Act, and enterprise governance: you need both.

MCP is the future of agent tooling. Visual proof is the future of agent governance.

Start capturing it now. PageBolt offers 100 screenshots per month free — no credit card required. Scale from there.

The auditors will ask. Make sure you have the proof.
