This is a submission for the Notion MCP Challenge
What I Built
I spend most of my day inside Claude Desktop, Cursor, and VS Code. Each of those tools runs MCP servers in the background. Those servers have full access to my filesystem, environment variables, and network. I'd never actually audited what they were doing.
So I built a two-part system. First: mcp-scan, a CLI that scans every MCP server config on your machine and reports what it finds. Second: a Notion MCP integration that takes those findings and pushes them into a structured Notion database, turning a one-time terminal output into a tracked security backlog.
Running it on my own machine found 3 HIGH and 9 MEDIUM severity issues across 10 servers. One server had both filesystem access and outbound network calls (textbook exfiltration setup). Another was pulling from an unverified npm package outside the @modelcontextprotocol org. Neither of those was obvious from reading config files.
The pipeline:
- npx mcp-scan@latest --json reads all your AI tool configs and outputs structured JSON
- A Node.js bridge script reads that JSON from stdin and calls the Notion API
- Each finding becomes a database row: server name, severity, finding type, which AI tool is affected, the config path, and a fix recommendation
- You end up with a Notion page where you can filter by severity, assign findings to yourself, and mark things fixed
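To make the handoff between the two steps concrete, here's a minimal sketch of the JSON the bridge script expects on stdin, inferred from the fields it destructures. The real scanner output may carry extra fields, and the configPath value here is a made-up placeholder:

```javascript
// Sketch of the scanner's JSON report, matching the fields the bridge
// script destructures (serverName, toolName, configPath, findings[]).
const sample = {
  results: [
    {
      serverName: "playwright",
      toolName: "Claude Code",
      configPath: "/path/to/your/config.json", // hypothetical path
      findings: [
        {
          id: "unverified-source",
          severity: "HIGH",
          fixRecommendation: "Pin to a package from a verified publisher.",
        },
      ],
    },
  ],
};

console.log(sample.results[0].findings.length + " finding(s) for " + sample.results[0].serverName);
// prints "1 finding(s) for playwright"
```

Anything that emits this shape (another scanner, a hand-written report) can be piped into the same bridge script unchanged.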
The Notion part matters because security findings in a terminal vanish. You close the window and forget. A Notion database doesn't forget, and it works with the rest of your project docs instead of living in some separate security tool.
Video Demo
Here's what the terminal output looks like when you run the full pipeline:
npx mcp-scan@latest --json | node notion-integration/push-to-notion.js
+ thynkq-router (Claude Code): exfiltration-vector [HIGH] -- created
+ thynkq-router (Claude Code): exfiltration-vector [MEDIUM] -- created
+ vercel (Claude Code): network-egress-unknown [MEDIUM] -- created
+ vercel (Claude Code): data-controls-sharing [LOW] -- created
+ playwright (Claude Code): unverified-source [HIGH] -- created
+ vercel (Gemini CLI): duplicate-server [MEDIUM] -- created
+ playwright (Gemini CLI): duplicate-server [MEDIUM] -- created
+ sequentialthinking (Claude Code): unverified-source [HIGH] -- created
Done: 8 created, 0 updated, 4 clean servers skipped.
Total findings tracked in Notion: 8
Each of those lines is a real finding from scanning my own machine right now.
Show us the code
GitHub repo: github.com/rodolfboctor/mcp-scan
The integration is in notion-integration/push-to-notion.js. Core logic:
import { Client } from "@notionhq/client";

async function main() {
  const notion = new Client({ auth: process.env.NOTION_API_KEY });
  const databaseId = process.env.NOTION_DATABASE_ID;

  // Read the scanner's JSON report from stdin
  let raw = "";
  for await (const chunk of process.stdin) raw += chunk;
  const { results } = JSON.parse(raw);

  for (const server of results) {
    const { serverName, toolName, configPath, findings = [] } = server;
    for (const finding of findings) {
      const { id, severity, fixRecommendation } = finding;

      // Idempotent: don't create duplicates on re-scan
      const existing = await notion.databases.query({
        database_id: databaseId,
        filter: {
          and: [
            { property: "Server", rich_text: { equals: serverName } },
            { property: "Finding ID", rich_text: { equals: id } },
            { property: "AI Tool", rich_text: { equals: toolName } },
          ],
        },
      });

      const props = {
        Name: { title: [{ text: { content: serverName + ": " + id } }] },
        Server: { rich_text: [{ text: { content: serverName } }] },
        "AI Tool": { rich_text: [{ text: { content: toolName } }] },
        Severity: { select: { name: severity } },
        "Finding ID": { rich_text: [{ text: { content: id } }] },
        "Config Path": { rich_text: [{ text: { content: configPath || "" } }] },
        Fix: { rich_text: [{ text: { content: fixRecommendation || "" } }] },
        "Scan Date": { date: { start: new Date().toISOString() } },
      };

      if (existing.results.length > 0) {
        // Known finding: refresh its properties but leave Status untouched
        await notion.pages.update({ page_id: existing.results[0].id, properties: props });
        console.log("  ~ " + serverName + " (" + toolName + "): " + id + " [" + severity + "] -- updated");
      } else {
        // New finding: create it with an initial "Open" status
        await notion.pages.create({
          parent: { database_id: databaseId },
          properties: { ...props, Status: { select: { name: "Open" } } },
        });
        console.log("  + " + serverName + " (" + toolName + "): " + id + " [" + severity + "] -- created");
      }
    }
  }
}

main().catch(console.error);
The idempotency check is the part I care about most. Run the scan on Monday, mark three findings as "Fixed" in Notion, run it again on Friday. The Friday run doesn't reset your status flags or create duplicate rows. It updates the scan date on existing findings and creates new ones if any appeared.
You can also skip the script entirely and use Notion MCP directly from Claude. Add this to your .mcp.json:
{
"mcpServers": {
"notion": {
"command": "npx",
"args": ["-y", "@notionhq/notion-mcp-server"],
"env": {
"OPENAPI_MCP_HEADERS": "{\"Authorization\": \"Bearer YOUR_NOTION_TOKEN\"}"
}
}
}
}
Then ask Claude: "Run mcp-scan with --json, then use Notion MCP to create a database with those findings. Each row needs server name, severity, finding description, which AI tool is affected, and the fix recommendation."
Claude handles both tools in one shot.
How I Used Notion MCP
The Notion MCP server gives you tools to create pages, query databases, and update properties. That's exactly what this workflow needs.
The three things it actually unlocks here:
Persistence. mcp-scan output is ephemeral. You run it, you see it, you close the terminal. Notion MCP writes it to a database that doesn't go away. Each finding has typed properties (severity as a select with color, dates as date fields, paths as text) so you can filter and sort without any extra work.
Idempotency. Before creating a new page, the script queries Notion for any existing entry with the same server name, finding ID, and AI tool. If it exists, it updates instead of creating. This means you can run the scan daily without polluting the database.
Remediation tracking. New findings start as "Open." Your team moves them to "In Progress" or "Fixed" as you work through them. The re-scan doesn't touch the status field on existing entries, so your progress doesn't get wiped.
That last point is why I wanted Notion specifically rather than just a CSV. The database is a working backlog, not a report you run once and forget.
The setup takes about 10 minutes: create a Notion integration, create the database with the right properties, share it with the integration, set two env vars, done.
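If you'd rather script the "create the database with the right properties" step, a sketch like the following can set up the schema via the Notion API. The property names mirror what push-to-notion.js writes; the NOTION_PARENT_PAGE_ID env var (the page that should hold the new database) and the specific select colors are my assumptions, not part of the project:

```javascript
// Sketch: create the findings database with the property schema that
// push-to-notion.js expects. Select colors and the parent-page env var
// are assumptions; adjust to taste.
const schema = {
  Name: { title: {} },
  Server: { rich_text: {} },
  "AI Tool": { rich_text: {} },
  Severity: {
    select: {
      options: [
        { name: "HIGH", color: "red" },
        { name: "MEDIUM", color: "yellow" },
        { name: "LOW", color: "gray" },
      ],
    },
  },
  "Finding ID": { rich_text: {} },
  "Config Path": { rich_text: {} },
  Fix: { rich_text: {} },
  "Scan Date": { date: {} },
  Status: {
    select: {
      options: [
        { name: "Open", color: "red" },
        { name: "In Progress", color: "yellow" },
        { name: "Fixed", color: "green" },
      ],
    },
  },
};

async function createFindingsDatabase() {
  const { Client } = await import("@notionhq/client");
  const notion = new Client({ auth: process.env.NOTION_API_KEY });
  return notion.databases.create({
    parent: { type: "page_id", page_id: process.env.NOTION_PARENT_PAGE_ID },
    title: [{ type: "text", text: { content: "MCP Scan Findings" } }],
    properties: schema,
  });
}

// Only hit the API when credentials are actually configured.
if (process.env.NOTION_API_KEY && process.env.NOTION_PARENT_PAGE_ID) {
  createFindingsDatabase().then((db) => console.log("Created database:", db.id));
}
```

Creating the schema in code also documents it: anyone cloning the repo can see exactly which properties the bridge script depends on.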
Try it:
# Scan your MCP servers
npx mcp-scan@latest
# Push findings to Notion
npx mcp-scan@latest --json | NOTION_API_KEY=xxx NOTION_DATABASE_ID=yyy node notion-integration/push-to-notion.js
Full setup instructions in the notion-integration/README.md.