Last week, we watched an agent do something technically correct and completely wrong.
It had access to an MCP server with docs, tickets, code search, and deployment tools. The task sounded simple: “find the bug, patch it, and open a PR.” Instead, the agent pulled half the repo into context, mixed stale ticket history with current code, and started proposing fixes for the wrong service.
Nothing was “broken” in the protocol. The problem was context overload.
That’s the trap with MCP right now: once you connect enough tools, your agent stops suffering from lack of context and starts drowning in it.
The real problem: more tools != better decisions
A lot of MCP setups grow like this:
- add GitHub tools
- add docs search
- add tickets
- add Slack
- add logs
- add deployment APIs
At first it feels powerful. Then the agent starts doing what all overloaded systems do: grabbing too much, ranking poorly, and stitching together irrelevant facts.
The failure mode isn’t just token cost. It’s bad action selection.
If your agent can’t tell:
- which repo relates to which service
- which ticket is current vs resolved
- which API belongs to which environment
- which human approved what
- which tool output should be trusted
…then “just give it more context” becomes a reliability bug.
Why a knowledge graph helps
The fix isn’t “stuff less data into prompts.”
The fix is to give the agent structure before context.
A knowledge graph lets you model relationships explicitly:
Service -> owned_by -> Team
PR -> fixes -> Ticket
Runbook -> applies_to -> Service
Agent -> approved_for -> Action
MCP Tool -> exposes -> Resource
Resource -> environment -> Production
So instead of asking the agent to infer relationships from giant blobs of text, you let it query the graph first and only pull the relevant context second.
Think of it like this:
Without graph:
Prompt = docs + tickets + code + logs + hope
With graph:
Query graph -> identify relevant entities -> fetch only connected context
That changes the agent’s job from “understand everything” to “follow the map.”
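Here is a minimal sketch of "graph query first, context second" using a plain adjacency list. The entity names and the `fetchContext` stub are illustrative, not real MCP calls:

```javascript
// A tiny edge list: [source, relation, target]. Names are made up.
const edges = [
  ["repo:payments-api", "implements", "svc:billing"],
  ["ticket:1234", "affects", "svc:billing"],
  ["svc:billing", "deployed_to", "env:prod"],
  ["repo:web-frontend", "implements", "svc:storefront"],
];

// Step 1: query the graph for entities connected to the target.
function connectedEntities(target) {
  const found = new Set();
  for (const [src, , dst] of edges) {
    if (src === target) found.add(dst);
    if (dst === target) found.add(src);
  }
  return [...found];
}

// Step 2: fetch context only for those entities (stubbed MCP fetch).
function buildPromptContext(target, fetchContext) {
  return connectedEntities(target).map(fetchContext);
}

console.log(buildPromptContext("svc:billing", (id) => `mcp-context-for:${id}`));
// repo:web-frontend never enters the prompt: it has no edge to svc:billing
```

The agent's prompt is now bounded by the graph, not by whatever full-text search happened to match.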
A simple architecture
Here’s the pattern that works well:
+------------------+
| MCP Servers |
| docs / git / ops |
+--------+---------+
|
v
+---------------------+
| Entity extraction |
| services, tickets, |
| repos, owners, envs |
+----------+----------+
|
v
+---------------------+
| Knowledge Graph |
| nodes + relations |
+----------+----------+
|
graph query first
|
v
+---------------------+
| Agent prompt builder|
| only relevant ctx |
+---------------------+
The key idea: MCP remains your execution layer, but the graph becomes your retrieval and routing layer.
What goes in the graph?
You do not need a perfect enterprise ontology.
Start with the entities your agents already trip over:
- repositories
- services
- APIs
- environments
- tickets
- PRs
- humans/teams
- agents
- tools
- approvals
And a few practical relationships:
- depends_on
- owned_by
- deployed_to
- fixes
- approved_by
- can_access
- related_to
That’s enough to cut a lot of noisy retrieval.
Runnable example: build a tiny graph in Node.js
This isn’t a production graph database, but it shows the pattern.
npm install graphology
const Graph = require("graphology");

const graph = new Graph();

// Nodes: the entities your agent keeps confusing
graph.addNode("svc:billing", { type: "service" });
graph.addNode("repo:payments-api", { type: "repo" });
graph.addNode("ticket:1234", { type: "ticket" });
graph.addNode("env:prod", { type: "env" });

// Edges: the relationships, made explicit instead of inferred from text
graph.addEdge("repo:payments-api", "svc:billing", { rel: "implements" });
graph.addEdge("ticket:1234", "svc:billing", { rel: "affects" });
graph.addEdge("svc:billing", "env:prod", { rel: "deployed_to" });

// Everything directly connected to the billing service
console.log("Neighbors of billing:", graph.neighbors("svc:billing"));
Output:
Neighbors of billing: [ 'repo:payments-api', 'ticket:1234', 'env:prod' ]
That tiny step already gives you a better retrieval strategy:
- identify the target entity (svc:billing)
- pull connected nodes
- fetch MCP context only for those nodes
Instead of asking the agent to search everything, you constrain the blast radius.
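You can tighten the blast radius further by filtering on relation type, so a bug-fixing task only follows the edges that matter for bug-fixing. A sketch using a plain edge list as a stand-in for the graphology graph above (the `resolved_against` relation is a hypothetical addition):

```javascript
// Edges with typed relations. Names and relations are illustrative.
const relations = [
  { from: "ticket:1234", rel: "affects", to: "svc:billing" },
  { from: "ticket:0042", rel: "resolved_against", to: "svc:billing" },
  { from: "repo:payments-api", rel: "implements", to: "svc:billing" },
];

// Only follow the relations relevant to the task at hand.
function neighborsByRel(target, allowedRels) {
  return relations
    .filter((e) => (e.from === target || e.to === target) && allowedRels.includes(e.rel))
    .map((e) => (e.from === target ? e.to : e.from));
}

// A bug-fix task wants current tickets and the implementing repo,
// not tickets that were already resolved.
console.log(neighborsByRel("svc:billing", ["affects", "implements"]));
```

This is how "which ticket is current vs resolved" stops being a guess buried in prompt text.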
Where people get this wrong
A few common mistakes:
1. They build a vector search pipeline and call it solved
Embeddings are useful, but semantic similarity is not the same as operational relevance.
A runbook for “billing retries” might look similar to “payment failures” while still being the wrong system.
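One way to combine the two: let embeddings propose candidates, then let the graph veto anything not actually connected to the target system. A sketch with hypothetical similarity scores and a hand-built edge set:

```javascript
// Hypothetical vector-search results: both score high, only one applies.
const candidates = [
  { doc: "runbook:billing-retries", score: 0.91 },
  { doc: "runbook:payment-failures", score: 0.89 },
];

// The graph records which runbooks actually apply to which service.
const appliesTo = new Set(["runbook:billing-retries->svc:billing"]);

// Keep only candidates with a real applies_to edge to the target.
function relevantDocs(target) {
  return candidates
    .filter((c) => appliesTo.has(`${c.doc}->${target}`))
    .map((c) => c.doc);
}

console.log(relevantDocs("svc:billing")); // the similar-but-wrong runbook is dropped
```

Similarity ranks the candidates; the graph decides eligibility.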
2. They skip authorization edges
This one matters a lot for MCP. Your graph shouldn’t just model knowledge. It should model who or what is allowed to act.
If OPA or another policy engine is already working for you, use it. The point is not to replace good authorization systems. The point is to stop leaving access decisions implicit in prompt text.
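At its simplest, an authorization edge is just a tuple the agent runtime checks before executing a tool call. The agent and action names below are made up for illustration:

```javascript
// approved_for edges, encoded as "agent->action" pairs.
const approvedFor = new Set([
  "agent:fix-bot->action:open_pr",
  "agent:fix-bot->action:deploy:staging",
]);

// Check the edge before executing; no edge, no action.
function canPerform(agent, action) {
  return approvedFor.has(`${agent}->${action}`);
}

console.log(canPerform("agent:fix-bot", "action:open_pr"));     // true
console.log(canPerform("agent:fix-bot", "action:deploy:prod")); // false
```

A real deployment would delegate this check to a policy engine, but the shape is the same: the decision lives in data, not in the prompt.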
3. They try to model everything on day one
Don’t. Start with the relationships behind your highest-cost failures.
Usually that means:
- wrong repo
- wrong environment
- wrong ticket
- wrong approver
- wrong tool
Why this matters more as MCP grows
MCP makes tool integration easier, which is great. But easier integration means more context sources, more actions, and more chances for agents to connect the wrong dots.
Knowledge graph architecture gives you a way to scale relevance and control together.
That’s the real win:
- fewer useless tokens
- fewer wrong actions
- better auditability
- clearer authorization boundaries
Not because the agent got “smarter,” but because your system stopped making it guess.
Try it yourself
If you want to test your MCP setup and see what your server is exposing:
- Want to check your MCP server? Try https://tools.authora.dev
- Run npx @authora/agent-audit to scan your codebase
- Add a verified badge to your agent: https://passport.authora.dev
- Check out https://github.com/authora-dev/awesome-agent-security for more resources
If you’re already using a graph or another way to control MCP context, I’d love to hear how you’re doing it.
How are you handling agent context selection today — vector search, hand-written routing, knowledge graphs, or something else? Drop your approach below.
-- Authora team
This post was created with AI assistance.