Most AI agent demos follow the same pattern: pre-loaded dataset, clever prompt, nice answer. The infrastructure is invisible and static.
I wanted to show something different — an agent that operates the data infrastructure itself. Creates pipelines, ingests live data, manages the lifecycle, then answers questions against what it just built. No pre-loaded anything.
This is a working example on top of Datris, an agent-native data platform I've been building (with Claude helping, full transparency). The platform exposes data pipeline operations as MCP tools. The agent discovers what's available and figures out the rest.
Video walkthrough: https://datris.ai/videos/market-intelligence-agent-mcp
What It Does
The Market Intelligence Agent is a FastAPI + vanilla JS app (no Node) that connects to the Datris MCP server and handles a full financial data workflow end-to-end — live, in the browser.
Two example queries that show the pattern:
"What's the current macro picture?"
The agent doesn't query a database. It:
- Discovers available MCP tools via tools/list
- Decides it needs FRED macro data (yields, VIX, CPI, credit spreads) plus equity OHLCV
- Creates pipelines for each source
- Ingests live data from the public APIs
- Queries the results and returns a cited answer
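The steps above boil down to a discover/decide/create/ingest/query loop. Here is a minimal, runnable sketch of that loop; the tool names, argument shapes, and the stubbed session are hypothetical stand-ins so it runs without a server, not the actual Datris tool surface:

```python
class StubMCPSession:
    """Stands in for a real MCP client session (e.g. the `mcp` Python SDK)."""
    TOOLS = ["pipeline_create", "pipeline_ingest", "pipeline_query"]

    def list_tools(self):
        # A real client would issue a JSON-RPC `tools/list` request here.
        return self.TOOLS

    def call_tool(self, name, arguments):
        # A real client would issue `tools/call`; we echo for illustration.
        return {"tool": name, "arguments": arguments, "status": "ok"}

def answer_macro_question(session, question):
    tools = session.list_tools()       # 1. discover capabilities at runtime
    assert "pipeline_create" in tools  # the agent checks, it doesn't hardcode

    sources = ["fred_yields", "fred_vix", "equity_ohlcv"]  # 2. decide what it needs
    for src in sources:
        session.call_tool("pipeline_create", {"source": src})  # 3. create pipelines
        session.call_tool("pipeline_ingest", {"source": src})  # 4. ingest live data

    # 5. query what was just built and cite the pipelines used
    result = session.call_tool("pipeline_query", {"question": question})
    return {"answer": result, "cited_sources": sources}

out = answer_macro_question(StubMCPSession(), "What's the current macro picture?")
```

The real app does the "decide" step with the model in the loop; the stub just hardcodes the decision to keep the control flow visible.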
"Is crypto confirming the risk-on trade in equities?"
Cross-asset reasoning across BTC/ETH/SOL (CoinGecko) vs SPY/QQQ/TLT/GLD/XLE/IWM — pipelines created and ingested on the fly, same pattern.
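At its core, a "is crypto confirming risk-on?" check correlates daily returns across assets. A small illustrative sketch with made-up closing prices (the real agent pulls these series from the pipelines it just ingested):

```python
from math import sqrt

def daily_returns(prices):
    """Simple percent change between consecutive closes."""
    return [(b - a) / a for a, b in zip(prices, prices[1:])]

def pearson(xs, ys):
    """Pearson correlation of two equal-length return series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

btc = [60000, 61200, 60100, 62500, 63900]  # hypothetical sample closes
spy = [500, 504, 499, 507, 512]

r = pearson(daily_returns(btc), daily_returns(spy))
# A strongly positive r suggests crypto is moving with the risk-on equity trade.
```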
All data sources are free: FRED, yfinance, CoinGecko, SEC EDGAR (10-K/10-Q).
The Architecture
```
Browser (vanilla JS)
│
├── Chat panel ──→ FastAPI ──→ Anthropic API
│                     │
│                 MCP Client ──→ Datris MCP Server
│                                     │
│                             Pipeline lifecycle
│                             (create, ingest, query)
│
└── Pipeline status panel ←── SSE stream (live tiles + activity feed)
```
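The status panel is fed by plain Server-Sent Events: each update is one frame of `event:`/`data:` fields terminated by a blank line. A sketch of the framing (the event name and payload keys are illustrative, not Datris's actual schema):

```python
import json

def sse_event(event, payload):
    """Serialize one Server-Sent Event frame per the SSE wire format."""
    return f"event: {event}\ndata: {json.dumps(payload)}\n\n"

frame = sse_event("pipeline_status", {"pipeline": "fred_vix", "state": "ingesting"})
# The browser side would consume this with something like:
#   new EventSource("/events").addEventListener("pipeline_status", handler)
```

In FastAPI, yielding frames like this from a generator behind a `text/event-stream` response is all the server side needs.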
The agent connects to the Datris MCP server and calls tools/list at startup to discover what operations are available — it's not hardcoded to any specific tool names. Background refresh keeps active pipelines current via SSE.
Why MCP for This
The interesting design choice here is using MCP as the control plane between the agent and the data platform rather than building a custom API layer.
The agent doesn't need to know ahead of time what pipelines exist or what data sources are configured. It discovers capabilities at runtime, decides what it needs, and operates accordingly. Add a new tool to the MCP server and the agent picks it up automatically.
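Server-side, this works because `tools/list` is answered from a registry rather than a hardcoded list, so registering a tool is the only change needed. A toy sketch (the decorator, handler names, and descriptions are hypothetical, not the Datris server code):

```python
TOOLS = {}

def tool(name, description):
    """Register a handler so it appears in tools/list with no client changes."""
    def wrap(fn):
        TOOLS[name] = {"description": description, "handler": fn}
        return fn
    return wrap

def handle_tools_list():
    # JSON-RPC `tools/list` response body: one entry per registered tool.
    return [{"name": n, "description": t["description"]} for n, t in TOOLS.items()]

@tool("pipeline_create", "Create a data pipeline for a source")
def pipeline_create(source): ...

# Added later: zero client-side changes, the agent just re-discovers.
@tool("pipeline_backfill", "Backfill historical data into a pipeline")
def pipeline_backfill(source, days): ...

names = [t["name"] for t in handle_tools_list()]
```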
This is the pattern I think is underexplored: agents that manage infrastructure, not just agents that query infrastructure someone else built.
Run It Yourself
```shell
# Clone the platform
git clone https://github.com/datris/datris-platform-oss
```
Example code: https://github.com/datris/datris-platform-oss/tree/main/examples/market-macro-agent
Platform (AGPL, open core): https://github.com/datris/datris-platform-oss
Docs: https://docs.datris.ai
Site: https://datris.ai
Happy to answer questions on the MCP server design, SSE streaming approach, or pipeline architecture in the comments.