Hey devs — if you're building with AI agents, LLMs, or tools that talk to other tools, this one's for you.
🧩 What we just built:
I put together a public repo that wraps a Model Context Protocol (MCP) server with something a bit... smarter.
It’s called MindsEye — and it turns a standard MCP tool server into an observing, logging, reasoning, and eventually self-adapting platform.
Here’s the repo:
👉 https://github.com/PEACEBINFLOW/mindseye-mcp-server
🚀 What this repo actually does
This starter gives you a full backend for MCP agents — but with cognitive instrumentation baked in. That means:
✅ Every tool is auto-wrapped with tracing + span logic
✅ Errors are caught and logged with Sentry (if you want)
✅ Cognitive paths and params are tracked in memory
✅ There's a built-in feedback loop system for agents to learn/adapt
✅ Works with both Cloudflare Workers AND stdio mode (local)
Basically:
If an agent calls a tool and it fails, MindsEye doesn’t just log it — it thinks about it.
🛠 What's inside the repo?
mindsEye/ – memory store, trace system, feedback loop, analyzer
tools/ – where your agent tools live (sample: echo, divide)
transports/ – Cloudflare + stdio support (pick your runtime)
Auto-wrapping registerTool() system to give your tools context-aware execution
You don’t have to think about spans or logs — MindsEye’s doing that for you.
🧠 What MindsEye is supposed to do
This repo is step 1 of a bigger system I’m building:
Let agents reflect on their behavior
Create a reputation/memory model per agent
Adapt how tools respond based on past behavior
Open the door to true cognitive loops inside agent backends
We’re not there yet, but the structure is here. MindsEye is set up to be your agent’s executive function:
memory, context awareness, learning, and system-level feedback.
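One way to picture that feedback loop: track how each tool behaves over time, so the system can later adapt based on its history. This is a toy sketch under my own assumptions, not the repo's actual feedback implementation.

```typescript
// Hypothetical per-tool feedback store: records outcomes and exposes
// a success rate that downstream logic could adapt on.
class FeedbackLoop {
  private stats = new Map<string, { ok: number; fail: number }>();

  record(tool: string, ok: boolean): void {
    const s = this.stats.get(tool) ?? { ok: 0, fail: 0 };
    if (ok) {
      s.ok++;
    } else {
      s.fail++;
    }
    this.stats.set(tool, s);
  }

  // No history yet? Assume the tool is healthy (rate 1).
  successRate(tool: string): number {
    const s = this.stats.get(tool);
    if (!s || s.ok + s.fail === 0) return 1;
    return s.ok / (s.ok + s.fail);
  }
}
```

A real version would persist this across runs and feed it back into routing or retry decisions, but even this shape shows where "reputation per agent" could plug in.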
🔍 Why this matters
Right now, most AI tools just… run.
No introspection. No context. No trace. No accountability.
If something breaks? You dig through logs and guess.
With this setup:
You get full cognitive flow tracing
You can surface why a tool call failed
You can replay behavior with full parameter context
You can start building agent-aware observability (beyond "console.log")
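"Surfacing why a tool call failed" can be as simple as a query over the stored trace events. A sketch, assuming a trace record shape like the one below (illustrative, not the repo's schema):

```typescript
// Hypothetical trace record plus a helper that turns stored failures
// into human-readable explanations, with full parameter context.
interface TraceEvent {
  tool: string;
  args: Record<string, unknown>;
  ok: boolean;
  error?: string;
}

function explainFailures(log: TraceEvent[]): string[] {
  return log
    .filter((e) => !e.ok)
    .map((e) => `${e.tool}(${JSON.stringify(e.args)}) failed: ${e.error}`);
}
```

Because the args are stored with each event, the same log is enough to replay a failing call exactly as it happened.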
🧪 What you can do with the repo
Use it as your base MCP server for any agent system
Plug in new tools using registerTool() (auto-instrumented)
Run in local stdio mode or deploy on Cloudflare Workers
Extend the MindsEye core to:
Store historical memory
Analyze failures
Suggest retries
Add evals or self-healing
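For the "suggest retries" direction, one simple heuristic is to classify failures as transient or deterministic. Everything here (names, patterns) is a hypothetical sketch of what an analyzer extension could look like:

```typescript
// Hypothetical failure analyzer: transient errors (timeouts, rate limits,
// dropped connections) are worth retrying; deterministic errors like
// division by zero will fail the same way every time.
interface FailureEvent {
  tool: string;
  error: string;
}

function suggestRetry(event: FailureEvent): boolean {
  const transientPatterns = [/timeout/i, /rate limit/i, /ECONNRESET/];
  return transientPatterns.some((re) => re.test(event.error));
}
```

A fuller version might combine this with the per-tool history above, backing off tools that keep failing transiently.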
📦 Try it locally:
git clone https://github.com/PEACEBINFLOW/mindseye-mcp-server
cd mindseye-mcp-server
npm install
npm run dev
Then send a tool call like this to the MCP server:
{
  "tool": "divide",
  "args": { "a": 10, "b": 0 }
}
MindsEye will:
Catch the failure
Trace the params
Store the event
Return an error with an optional trace ID
🤖 Where this is going
This isn’t just an MCP server — it’s a sandbox for:
Agent cognition
Adaptive tool calls
Semantic memory
Agent reputation scoring
Traceable agent workflows
Real feedback loops between perception → action → memory
If you’re building:
Multi-agent systems
OpenAgents-style architectures
Self-aware workflows
LLM toolchains with dynamic routing
…you should clone this repo, plug it in, and start building on top.
💬 Let’s build this together
Check out the code, fork it, drop an issue, add tools.
Let’s push what agents can do when they remember what just happened.