We Shipped Per-Tool Toggle Controls for Our MCP Server — Here's Why It Matters More Than It Sounds
If you've been building AI agent integrations with Model Context Protocol, you've probably hit this wall: you connect an LLM to a data source via an MCP server, and suddenly the agent has access to everything that server exposes. Every tool. Every endpoint. Every table.
That's fine in a sandbox. It's a security incident waiting to happen in production.
DreamFactory 7.4.3 (shipping alongside df-mcp-server v1.2.0 and df-admin-interface v1.7.1) adds per-tool toggle controls directly in the MCP Server configuration page. It's a small UI change with a non-trivial impact on how you govern AI agent access to enterprise data.
Let me walk through the problem we were solving and what the implementation actually looks like.
The Real Problem With MCP in Enterprise Environments
MCP is genuinely useful. It gives LLMs a standardized way to call tools — query a database, read a file, hit an API — without you having to bake those integrations into the model itself. The protocol is clean. The ecosystem is moving fast.
But here's the friction point we kept running into with enterprise teams:
DreamFactory's MCP server auto-generates tools from your connected services. Connect a SQL Server database and your MCP server exposes tools for every table it discovers. Connect a REST service and you get tools for every endpoint. That's the whole value proposition — instant, schema-driven tool generation without writing boilerplate.
The problem is that "all or nothing" at the server level is too coarse for real governance. Your enterprise has:
- Tables that contain PII that shouldn't be reachable from an LLM context window
- Write endpoints that an agent absolutely should not be calling autonomously
- Internal admin APIs that have no business being exposed to an AI assistant
- Staging vs. production services where you want different tool availability
Before 7.4.3, your options were: expose everything the server sees, or create entirely separate MCP server instances scoped to different subsets of tools. That second option works, but it's operational overhead — you're managing multiple server configs, multiple connection strings, multiple auth contexts just to enforce "don't let the agent call DELETE /users."
What We Shipped
The change is in three coordinated releases:
- dreamfactory v7.4.3 — core platform update wiring everything together
- df-mcp-server v1.2.0 — adds the toggle state to the tool registry; disabled tools are excluded from the tools/list response the LLM receives
- df-admin-interface v1.7.1 — surfaces the per-tool enable/disable controls in the MCP Server configuration page
The UX is straightforward: navigate to your MCP Server config, and each discovered tool now has a toggle. Flip it off, and that tool disappears from the MCP server's advertised capability list. The LLM doesn't see it. It can't call it. It doesn't know it exists.
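To make the mechanism concrete, here's a minimal sketch of that filtering logic. This is illustrative only, not DreamFactory's actual implementation: the `Tool` and `ToolRegistry` names and the `enabled` flag are assumptions standing in for whatever df-mcp-server does internally.

```python
# Illustrative sketch only -- not DreamFactory's actual code.
# Models a tool registry where each tool carries an "enabled" flag
# and the tools/list response advertises only the enabled subset.

from dataclasses import dataclass, field

@dataclass
class Tool:
    name: str
    description: str
    enabled: bool = True  # toggle state set from the admin UI

@dataclass
class ToolRegistry:
    tools: list = field(default_factory=list)

    def set_enabled(self, name: str, enabled: bool) -> None:
        for tool in self.tools:
            if tool.name == name:
                tool.enabled = enabled

    def list_tools(self) -> list:
        # The tools/list response: disabled tools are simply absent,
        # so the model never sees them.
        return [
            {"name": t.name, "description": t.description}
            for t in self.tools
            if t.enabled
        ]

registry = ToolRegistry([
    Tool("df_sqlserver_customers_list", "List records from the customers table"),
    Tool("df_sqlserver_users_delete", "Delete a user record"),
])
registry.set_enabled("df_sqlserver_users_delete", False)
print([t["name"] for t in registry.list_tools()])
# ['df_sqlserver_customers_list']
```

The key design point is that the toggle is applied before the manifest is built, not at call time: a disabled tool is absent from the response rather than present but rejected.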
This is the right level of control for a few reasons.
Why Tool-Level Granularity Is the Right Abstraction
When an LLM queries your MCP server via the tools/list method, it gets back a manifest of what it can do. The model uses this to reason about available actions. If a tool isn't in the manifest, the model won't attempt to invoke it — there's nothing to attempt.
Disabling at the tool level (rather than, say, at the API key level or the role level) means you're controlling the capability surface the model sees, not just whether requests get rejected after the fact. Rejection after the fact still means the model tried. Tool exclusion means the model never tries.
For security teams, that distinction matters. "The agent can't do X because we block it at call time" is a weaker guarantee than "the agent doesn't know X is possible."
Here's roughly what the tool manifest looks like for a connected database service with some tools disabled:
```json
{
  "tools": [
    {
      "name": "df_sqlserver_customers_list",
      "description": "List records from the customers table",
      "inputSchema": { ... }
    },
    {
      "name": "df_sqlserver_orders_get",
      "description": "Retrieve a specific order by ID",
      "inputSchema": { ... }
    }
  ]
}
```
Tools for users, employee_records, and any write operations? Toggled off in the admin interface. They don't appear here. The agent reasoning about what to do next starts from this manifest — it'll never reason its way to calling a tool that isn't listed.
Compare that to a world where the tools are visible but blocked by RBAC at call time:
```json
{
  "tools": [
    { "name": "df_sqlserver_users_delete", ... },
    { "name": "df_sqlserver_employee_records_list", ... },
    ...
  ]
}
```
The model sees these. It might attempt them. Your RBAC blocks the call. But you've now introduced model behavior you have to reason about — why is the agent trying to call users_delete? What prompt injection or reasoning path got it there? Reducing the attack surface at the manifest level is cleaner.
Practical Governance Scenarios This Unlocks
Read-only AI assistants against production data
Enable SELECT-equivalent tools, disable everything that writes. Your agent can answer questions about your data without any risk of mutation. This was technically possible before by scoping the DreamFactory API role — but now you have an additional enforcement layer at the MCP tool manifest level, and you can manage it without touching the underlying API role config.
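One way to reason about a read-only toggle plan is by naming convention, since generated tool names encode the operation. A rough sketch, assuming write operations end in suffixes like `_create` and `_delete` (the real names depend on what your MCP server generates, so treat the suffix list as an assumption to verify against your own manifest):

```python
# Illustrative sketch: derive a read-only toggle plan from tool names.
# The suffix convention is an assumption mirroring the manifest
# examples above; verify it against your actual generated tools.

WRITE_SUFFIXES = ("_create", "_update", "_delete", "_patch")

def read_only_toggles(tool_names):
    """Map each tool name to its desired enabled state:
    True for read-style tools, False for write-style tools."""
    return {
        name: not name.endswith(WRITE_SUFFIXES)
        for name in tool_names
    }

tools = [
    "df_sqlserver_customers_list",
    "df_sqlserver_orders_get",
    "df_sqlserver_orders_create",
    "df_sqlserver_users_delete",
]
print(read_only_toggles(tools))
```

You'd then apply that plan by hand in the admin UI; the point of the sketch is just that "read-only agent" reduces to a mechanical pass over the tool list.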
Staged rollouts of AI capabilities
You want to give your internal copilot access to the orders service but you're not ready to expose the inventory service yet. Toggle off the inventory tools. No new server instance, no new credentials, no config duplication. When you're ready, flip the toggle.
Different tool surfaces per agent
If you're running multiple agents (a customer-facing assistant, an internal ops agent, a data analyst agent), each with different access requirements, you can maintain separate MCP server instances in DreamFactory — each with its own tool toggle configuration — without duplicating the underlying service connections. The connections are shared; the exposed capability surface is per-instance.
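Conceptually, that's one pool of generated tools with a per-instance enabled subset layered on top. A toy model, with hypothetical instance and tool names:

```python
# Illustrative sketch: several MCP server instances share one set of
# generated tools but each advertises its own enabled subset.
# Instance names and tool names here are hypothetical.

ALL_TOOLS = {
    "df_sqlserver_customers_list",
    "df_sqlserver_orders_get",
    "df_sqlserver_orders_create",
    "df_sqlserver_employee_records_list",
}

# Per-instance toggle configuration: the tools each agent may see.
INSTANCE_TOOLS = {
    "customer-assistant": {"df_sqlserver_orders_get"},
    "internal-ops-agent": ALL_TOOLS - {"df_sqlserver_employee_records_list"},
    "data-analyst-agent": {"df_sqlserver_customers_list",
                           "df_sqlserver_orders_get"},
}

def manifest_for(instance: str) -> list:
    """The tools/list an instance advertises: its enabled subset."""
    return sorted(ALL_TOOLS & INSTANCE_TOOLS[instance])

print(manifest_for("customer-assistant"))
# ['df_sqlserver_orders_get']
```

The underlying service connections correspond to the shared `ALL_TOOLS` pool; only the per-instance subset differs.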
Incident response
A tool is behaving unexpectedly in production. You want to pull it immediately without redeploying or modifying role configs. Toggle it off. The next tools/list call returns without it. You've bought yourself time to investigate without a full incident response cycle.
What This Looks Like in the Admin Interface
The configuration is in Services → [Your MCP Server] → Config. Once you're on the MCP Server configuration page, you'll see the list of discovered tools with toggle controls next to each one.
There's no CLI flag for this yet — it's admin UI driven. If you're managing DreamFactory config as code (which you should be doing via the platform's exportable config), the tool toggle state is included in the service configuration export, so you can version-control your tool surface definition alongside your other infrastructure config.
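Since the toggle state rides along in the service config export, a version-controlled export might look roughly like the fragment below. The field names here are assumptions for illustration, not DreamFactory's documented export schema; check an actual export for the exact shape.

```json
{
  "name": "my_mcp_server",
  "type": "mcp",
  "config": {
    "tools": [
      { "name": "df_sqlserver_customers_list", "enabled": true },
      { "name": "df_sqlserver_users_delete", "enabled": false }
    ]
  }
}
```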
Honest Assessment: What This Doesn't Solve
Per-tool toggles are a blunt instrument compared to full attribute-based access control. You're making a binary decision per tool — enabled or disabled globally for that MCP server instance. You can't say "this tool is available to agents running in the context of user group A but not group B" at the MCP layer. For that kind of context-aware control, you're still relying on DreamFactory's underlying RBAC and field-level security, which feeds into what the tool can return, not whether the tool is advertised.
Also worth noting: this is a server-side control. If you're running df-mcp-server and an agent has already cached the tools/list response, there may be a window between toggling off a tool and the agent's next refresh. For most use cases this is fine. For high-sensitivity scenarios, you probably want to pair this with the underlying API role restrictions as a belt-and-suspenders approach.
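That staleness window is easy to picture with a client that caches tools/list on a TTL. This is a generic sketch of client-side caching behavior, not any specific MCP client; all names are hypothetical.

```python
# Illustrative sketch of the staleness window: an agent that caches
# tools/list with a TTL keeps seeing a disabled tool until its next
# refresh. All names here are hypothetical.

import time

class CachingClient:
    def __init__(self, fetch, ttl_seconds: float):
        self._fetch = fetch        # callable returning the live tools/list
        self._ttl = ttl_seconds
        self._cached = None
        self._fetched_at = 0.0

    def tools(self):
        now = time.monotonic()
        if self._cached is None or now - self._fetched_at > self._ttl:
            self._cached = self._fetch()
            self._fetched_at = now
        return self._cached

server_tools = ["df_sqlserver_customers_list", "df_sqlserver_users_delete"]
client = CachingClient(lambda: list(server_tools), ttl_seconds=60.0)

client.tools()                                    # warm the cache
server_tools.remove("df_sqlserver_users_delete")  # admin toggles the tool off

# Within the TTL, the agent still sees the stale entry:
print("df_sqlserver_users_delete" in client.tools())
# True
```

Pairing the toggle with API role restrictions closes exactly this gap: even a stale client that attempts the call gets rejected server-side.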
Written with Pressroom HQ