Last week, a coding agent on a shared repo did something weirdly familiar: it opened the right files, read the right docs, and still made the wrong change.
Not because the model was bad.
Not because the prompt was weak.
Because it had documents, but not context.
That’s the gap a lot of “AI workspace” features still miss. They’re good at bundling files, notes, and chats into a place the model can search. But when your agent needs to answer questions like:
- Which service owns this endpoint?
- What policy applies to this tool call?
- Which secrets are allowed in staging but not prod?
- Who delegated permission to this agent?
- What changed since the last sprint?
…a folder full of text chunks stops being enough.
You don’t just need retrieval. You need relationships.
## The actual problem: context is not a pile of files
A lot of current AI tooling treats context like this:
```
context = docs + code + chat history + search results
```
That works for “summarize this file” or “find where this function is used.”
It breaks down when context is structural.
In real systems, meaning lives in edges:
- service depends on database
- agent acts on behalf of user
- tool requires approval
- API key belongs to environment
- PR implements ticket
- policy applies to action
A Copilot-style space can collect the nouns. A knowledge graph helps the agent reason over the verbs.
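To make "reason over the verbs" concrete, here's a minimal in-memory sketch in plain JavaScript. Every entity and predicate name below is a hypothetical example; the point is that once edges are stored as (subject, predicate, object) triples, relationships become something you can query rather than hope the model infers:

```javascript
// Minimal in-memory edge store: each fact is a (subject, predicate, object) triple.
// All entity and predicate names here are hypothetical examples.
const edges = [
  ["payments-api", "depends_on", "ledger-db"],
  ["agent-a", "acts_on_behalf_of", "user-b"],
  ["deploy-staging", "requires", "ops-approval"],
  ["PR-1842", "implements", "BILL-932"],
];

// Query by any combination of subject / predicate / object (null = wildcard).
function query(subject, predicate, object) {
  return edges.filter(
    ([s, p, o]) =>
      (subject === null || s === subject) &&
      (predicate === null || p === predicate) &&
      (object === null || o === object)
  );
}

// "What does payments-api depend on?"
console.log(query("payments-api", "depends_on", null));
// "What implements ticket BILL-932?"
console.log(query(null, "implements", "BILL-932"));
```

A real system would use a graph database instead of an array, but the query shape is the same: you ask about edges, not keywords.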
## What a knowledge graph gives an agent
A knowledge graph isn’t magic. It’s just a way to store entities and relationships so context becomes queryable instead of fuzzy.
Here’s the difference:
Files:
- payments.md
- auth.md
- staging.env
- sprint-24-notes.md
Knowledge graph:

```
[Agent A] --delegated_by--> [User B]
[Agent A] --allowed_to_use--> [Tool: deploy-staging]
[deploy-staging] --requires--> [Approval: ops]
[Service: payments-api] --depends_on--> [DB: ledger]
[PR-1842] --implements--> [Ticket: BILL-932]
```
Now the agent can answer:
- “Can I run this tool?”
- “What service will this migration affect?”
- “Which approval path applies here?”
- “What changed that might explain this failure?”
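Sketched in plain JavaScript (an in-memory edge list standing in for a real graph store, with hypothetical names), the first question, "Can I run this tool?", becomes a two-hop graph walk rather than a text search:

```javascript
// In-memory stand-in for a graph store; all entity names are hypothetical.
const edges = [
  ["agent-a", "allowed_to_use", "deploy-staging"],
  ["deploy-staging", "requires", "ops-approval"],
  ["ops-approval", "granted_to", "agent-a"],
];

const has = (s, p, o) =>
  edges.some(([es, ep, eo]) => es === s && ep === p && eo === o);

// "Can this agent run this tool?" = an allowed_to_use edge exists AND every
// approval the tool requires has been granted to the agent.
function canRun(agent, tool) {
  if (!has(agent, "allowed_to_use", tool)) return false;
  const approvals = edges
    .filter(([s, p]) => s === tool && p === "requires")
    .map(([, , o]) => o);
  return approvals.every((ap) => has(ap, "granted_to", agent));
}

console.log(canRun("agent-a", "deploy-staging")); // true
console.log(canRun("agent-a", "drop-database")); // false
```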
That’s much closer to how senior engineers actually reason.
## A simple mental model
Think of it like this:
```
+-------------------+
|    Docs / Code    |
|   Notes / Chats   |
+---------+---------+
          |
       extract
          v
+---------+  relates_to  +---------+   requires   +----------+
|  Agent  |------------->|  Tool   |------------->| Approval |
+---------+              +---------+              +----------+
     | owns                   |
     |                        | affects
     v                        v
+---------+  depends_on  +----------+
| Service |------------->| Database |
+---------+              +----------+
```
Search finds text.
Graphs preserve meaning.
You usually want both.
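Here's one hedged sketch of "both" (the doc snippets and edges below are hypothetical; a real setup would pair a vector index with a graph database): keyword search surfaces the relevant text, and the graph supplies the structure around the entities it mentions.

```javascript
// Hypothetical doc snippets and graph edges.
const docs = [
  { id: "payments.md", text: "payments-api writes invoices to the ledger database." },
  { id: "auth.md", text: "Login tokens are issued by auth-service." },
];
const edges = [
  ["payments-api", "depends_on", "ledger-db"],
  ["ledger-db", "owned_by", "billing-team"],
];

// Step 1: plain search finds the text that mentions the entity.
const hit = docs.find((d) => d.text.includes("payments-api"));

// Step 2: the graph adds facts the prose never states outright,
// like who owns the database the service writes to.
function expandContext(entity, depth = 2) {
  const frontier = new Set([entity]);
  const found = [];
  for (let i = 0; i < depth; i++) {
    for (const [s, p, o] of edges) {
      if (frontier.has(s) && !found.includes(`${s} ${p} ${o}`)) {
        found.push(`${s} ${p} ${o}`);
        frontier.add(o);
      }
    }
  }
  return found;
}

console.log(hit.id);
console.log(expandContext("payments-api"));
// ["payments-api depends_on ledger-db", "ledger-db owned_by billing-team"]
```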
## A tiny example with Neo4j
If you want to feel the difference, here’s a minimal runnable example with Neo4j.
```shell
npm install neo4j-driver
```
```javascript
const neo4j = require("neo4j-driver");

const driver = neo4j.driver(
  "bolt://localhost:7687",
  neo4j.auth.basic("neo4j", "password")
);

async function run() {
  const session = driver.session();
  try {
    // Create the entities and relationships (MERGE makes this idempotent).
    await session.run(`
      MERGE (a:Agent {name: "release-bot"})
      MERGE (t:Tool {name: "deploy-staging"})
      MERGE (ap:Approval {name: "ops-approval"})
      MERGE (a)-[:ALLOWED_TO_USE]->(t)
      MERGE (t)-[:REQUIRES]->(ap)
    `);

    // Ask the graph: which tool can this agent use, and what does it require?
    const result = await session.run(`
      MATCH (a:Agent {name: "release-bot"})-[:ALLOWED_TO_USE]->(t)-[:REQUIRES]->(ap)
      RETURN a.name AS agent, t.name AS tool, ap.name AS approval
    `);
    console.log(result.records[0].toObject());
  } finally {
    await session.close();
    await driver.close();
  }
}

run();
```
Output:

```
{ agent: 'release-bot', tool: 'deploy-staging', approval: 'ops-approval' }
```
That’s obviously tiny, but the pattern scales:
- ingest code metadata
- ingest docs and ownership data
- ingest identity and policy relationships
- query the graph before the agent acts
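As one hedged sketch of the first ingestion step (the manifest shapes below are hypothetical; in practice you'd read real `package.json` files), even a package manifest already contains edges waiting to be extracted:

```javascript
// Hypothetical package manifests; in practice, read these from package.json files.
const manifests = [
  { name: "payments-api", dependencies: { "ledger-client": "^2.1.0" } },
  { name: "auth-service", dependencies: { "token-lib": "^1.0.0", "ledger-client": "^2.1.0" } },
];

// Turn each dependency entry into a depends_on edge.
function toEdges(manifest) {
  return Object.keys(manifest.dependencies).map((dep) => [
    manifest.name,
    "depends_on",
    dep,
  ]);
}

const edges = manifests.flatMap(toEdges);
console.log(edges);

// Now reverse questions are cheap: "who depends on ledger-client?"
const consumers = edges
  .filter(([, , o]) => o === "ledger-client")
  .map(([s]) => s);
console.log(consumers); // ["payments-api", "auth-service"]
```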
If your need is mostly authorization, a policy engine like OPA may be the right primary tool. But if your agent also needs to understand ownership, dependencies, delegation, and task history together, a graph becomes incredibly useful.
## Where this matters most
I’ve seen this show up in four places:
1. Tool use
Agents need more than “here are 20 tools.” They need to know which tools are safe, who approved access, and what each action touches.
2. Shared codebases
When multiple agents work in parallel, context isn’t just code. It’s locks, sprint boundaries, ownership, and what another agent already changed.
3. Identity and delegation
“Why was this agent allowed to do that?” is a graph question. User → delegation chain → role → tool → action.
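A hedged sketch of that chain walk in plain JavaScript (the delegation edges below are hypothetical): following `delegated_by` edges transitively turns "why was this allowed?" into a path, not a grep.

```javascript
// Hypothetical delegation edges: key was delegated authority by value.
const delegatedBy = {
  "deploy-subagent": "release-bot",
  "release-bot": "ops-role",
  "ops-role": "user-b",
};

// Walk the chain upward until we reach a principal with no delegator.
function delegationChain(agent) {
  const chain = [agent];
  const seen = new Set([agent]); // guard against delegation cycles
  let current = agent;
  while (delegatedBy[current] && !seen.has(delegatedBy[current])) {
    current = delegatedBy[current];
    seen.add(current);
    chain.push(current);
  }
  return chain;
}

console.log(delegationChain("deploy-subagent"));
// ["deploy-subagent", "release-bot", "ops-role", "user-b"]
```

In Cypher this would be a variable-length path match; the in-memory version just makes the shape of the question visible.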
4. Security investigations
When something goes wrong, you want connected evidence, not scattered logs.
## The practical takeaway
If your current setup is “RAG over docs plus a long system prompt,” you’re not doing it wrong.
You’re just handling one kind of context.
The missing layer is a model of relationships your agent can query:
- who
- can do what
- to which resource
- under which policy
- with whose approval
- based on what prior state
That’s what knowledge graphs are good at.
Not as a replacement for search. As the thing that stops search from being your only hammer.
## Try it yourself
If you’re working on agent security, identity, or MCP tooling, these free tools are useful:
- Want to check your MCP server? Try https://tools.authora.dev
- Run `npx @authora/agent-audit` to scan your codebase
- Add a verified badge to your agent: https://passport.authora.dev
- Check out https://github.com/authora-dev/awesome-agent-security for more resources
If you’re already building agent context layers, I’d love to know: are you still using plain retrieval, or have you started modeling relationships too?
-- Authora team
This post was created with AI assistance.