A few weeks ago I wrote about hardening a Notion MCP workflow with seven small utility MCP servers. The pattern works just as well for the Algolia MCP Server. The risk surface is different in detail, identical in shape.
The risk shape, restated
Whenever you give an LLM read or write access to a system of record through MCP, three things become possible:
- Outbound leakage: the record contains a secret, a key, or PII that you did not realize was there, and now it travels to whatever model the user pointed at the MCP.
- Prompt injection through data: a record contains instructions for the model. The model follows them.
- Unbounded spend: the model reads thousands of records before you notice, racking up tokens against your provider and operations against the source system.
Algolia indexes are particularly prone to (1) because they ingest from primary stores via webhooks, and "send the whole user record to the index" is a common shortcut. Indexes are particularly prone to (2) because end-user-generated content (reviews, comments, descriptions) is exactly what teams put into them. And (3) is generic.
The wrap
{
"mcpServers": {
"algolia": {
"command": "npx",
"args": ["-y", "@algolia/mcp-node"],
"env": {
"ALGOLIA_APPLICATION_ID": "...",
"ALGOLIA_API_KEY": "..."
}
},
"secretsniff": {
"command": "npx",
"args": ["-y", "@mukundakatta/secretsniff-mcp"]
},
"pii-sentry": {
"command": "npx",
"args": ["-y", "@mukundakatta/pii-sentry-mcp"]
},
"maskprompt": {
"command": "npx",
"args": ["-y", "@mukundakatta/maskprompt-mcp"]
},
"prompt-injection-shield": {
"command": "npx",
"args": ["-y", "@mukundakatta/prompt-injection-shield-mcp"]
},
"textsanity": {
"command": "npx",
"args": ["-y", "@mukundakatta/textsanity-mcp"]
},
"llm-output-sanitizer": {
"command": "npx",
"args": ["-y", "@mukundakatta/llm-output-sanitizer-mcp"]
},
"agentbudget": {
"command": "npx",
"args": ["-y", "@mukundakatta/agentbudget-mcp"],
"env": {
"AGENTBUDGET_MAX_TOKENS_PER_SESSION": "150000",
"AGENTBUDGET_MAX_DOLLARS_PER_SESSION": "5",
"AGENTBUDGET_MAX_TOOL_CALLS": "200"
}
}
}
}
Drop into ~/Library/Application Support/Claude/claude_desktop_config.json (macOS) or the platform equivalent. Restart Claude Desktop.
What changes for Algolia specifically
The Algolia MCP server exposes index operations, not just search. That makes the write path more important than in the Notion case. Three additions to the standard hardening recipe:
1. Run llm-output-sanitizer on every save
Before calling algolia.save_objects, run llm-output-sanitizer over the
objects you are about to save. If it reports any HTML, SQL, or shell-style
content in fields that should be plain text, refuse the save and tell me
which field tripped.
This catches "the model is writing a stored XSS payload into a product description" by accident. Algolia's record format doesn't enforce content rules. Your sanitizer does.
2. Cap tool calls per session, not just tokens
agentbudget's AGENTBUDGET_MAX_TOOL_CALLS is the one I had to set highest for Algolia because the model loves to follow up a search with twenty getObject calls. 200 is a reasonable per-session cap for exploration; bring it down to 20 for any automation that runs unattended.
3. Gate delete_index and delete_object with a confirmation hook
NEVER call algolia.delete_index without first echoing the index name back to
me in plain text and waiting for me to reply "confirm delete <indexname>".
The same rule applies to delete_object with batches of 10 or more.
This is a system-prompt-level guardrail, not a tool. Phrase it imperatively. Claude obeys reliably. The same pattern works on Cursor with their system-prompt customization.
What Algolia MCP does NOT need from the stack
Some of the Notion-side servers are less relevant for Algolia:
- textsanity: Algolia records come from your own ingestion pipeline, not user paste. Unicode noise is rare. Keep textsanity in the config but don't bother prompting Claude to run it on every read.
- prompt-injection-shield: Necessary if your index includes user-generated content (reviews, search-suggest data scraped from logs). Skip if your records are only product catalog from a trusted ingest path.
The rest stay. They cost a few hundred milliseconds of stdio overhead per filtered call, and they make the Algolia MCP safe enough to run in Claude Desktop against a production index without losing sleep.
Why "seven small servers" beats "one Algolia super-server"
Same argument as in the Notion piece. A single bundled "Algolia AI Guard" would couple your release cycle to upstream Algolia, hide which check fired when a chain failed, and lose stdio-level isolation. Seven small servers, each under 300 LOC of TypeScript, are independently versioned, independently auditable, and independently disable-able.
If you only have time to add ONE of them in front of your Algolia MCP today, add secretsniff. The risk of a leaked Stripe key in a Notion page is real but bounded; the risk of a leaked test-environment Algolia admin key (which exposes write to ALL your indexes) is unbounded.
Reproduce
# Prereqs
npm install -g @algolia/mcp-node
# Wire the config above into Claude Desktop or Cursor, restart, done.
Every utility server above ships on npm under @mukundakatta, MIT-licensed, CI-green on Linux and macOS. The Algolia MCP server is at @algolia/mcp-node.
Related work
- The Notion MCP version of this recipe.
-
mcpchecklints your full Claude Desktop / Cursor / Cline / Windsurf / Zed MCP config in CI. -
mcp-pulsewatches a fleet of MCP servers and reports health. - The full list of 47 MCP-related repos: github.com/MukundaKatta?tab=repositories&q=mcp.
If your Algolia MCP config catches something interesting in the wild after you add these, comment with what your sanitizer flagged. The boring stuff (PII, stray keys) is exactly the boring stuff worth knowing about.
Top comments (0)