Anmol Sharma
I built ORAG - an organizational RAG and MCP platform in TypeScript

I spent the last couple of days building something I kept wishing existed.

A platform that takes your organization's internal data, such as docs, wikis, and databases, and makes it actually usable by AI agents.

The result is ORAG, an organizational RAG and MCP server platform built entirely in TypeScript.

Live at: https://orag.theanmolsharma.com/

Here's what I built, why, and the technical decisions that mattered.

The problem

Every team trying to build AI features on internal data hits the same wall.

The LLM doesn't know what your data means. It doesn't know who owns it, whether it's trustworthy, or whether a given agent should even have access to it. You end up with AI that gives confident, wrong answers, which is worse than no answer at all.

This is a context problem. Not a model problem. The model is fine. The context layer is missing.


What ORAG does

ORAG solves this in three layers:

1. RAG pipeline

Connect Notion, Confluence, S3, GitHub, or any custom source. ORAG handles chunking, embedding, and vector retrieval with a retrieval latency target of under 50ms.
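To make the ingestion step concrete, here's a minimal sketch of fixed-size chunking with overlap, the kind of preprocessing a RAG pipeline does before embedding. This is an illustration, not ORAG's actual implementation; the function name and parameters are hypothetical, and a production pipeline (e.g., LangChain.js text splitters) would split on semantic boundaries rather than raw character offsets.

```typescript
// Illustrative chunker: fixed-size windows with overlap so that context
// spanning a chunk boundary still appears intact in at least one chunk.
// Not ORAG's real code -- names and defaults are hypothetical.
function chunkText(text: string, chunkSize = 500, overlap = 100): string[] {
  if (overlap >= chunkSize) throw new Error("overlap must be < chunkSize");
  const chunks: string[] = [];
  let start = 0;
  while (start < text.length) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break;
    start += chunkSize - overlap; // step forward, keeping `overlap` chars shared
  }
  return chunks;
}

const doc = "a".repeat(1200);
console.log(chunkText(doc).length); // → 3
```

Each chunk would then be embedded and written to the vector store; the overlap is what keeps retrieval from silently dropping sentences that straddle a boundary.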

2. MCP server

The retrieval layer is exposed as a Model Context Protocol server. One config file gives any AI agent structured, permissioned access to your org's knowledge base.
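For context, a typical MCP client config (this is the common `mcpServers` shape used by MCP-aware clients, not ORAG's documented config; the server name and package are placeholders) looks roughly like:

```json
{
  "mcpServers": {
    "orag": {
      "command": "npx",
      "args": ["-y", "your-orag-mcp-package"]
    }
  }
}
```

Once the client has that entry, the agent discovers the server's tools over the protocol; no per-agent connector code is needed.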

3. Access control

Role-based permissions across every knowledge base and MCP server. Audit logs, team workspaces, and SSO. The stuff that makes enterprise AI actually deployable.
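To illustrate the shape of such a check (this is a minimal sketch with hypothetical role and action names, not ORAG's actual authorization code):

```typescript
// Minimal RBAC sketch: map roles to allowed actions and enforce
// workspace isolation before any role check. Names are illustrative.
type Role = "admin" | "editor" | "viewer";
type Action = "read" | "write" | "manage";

const rolePermissions: Record<Role, Action[]> = {
  admin: ["read", "write", "manage"],
  editor: ["read", "write"],
  viewer: ["read"],
};

interface AgentContext {
  agentId: string;
  role: Role;
  workspaceId: string;
}

function canAccess(ctx: AgentContext, kbWorkspaceId: string, action: Action): boolean {
  // Workspace isolation: agents never see knowledge bases outside their workspace,
  // regardless of role.
  if (ctx.workspaceId !== kbWorkspaceId) return false;
  return rolePermissions[ctx.role].includes(action);
}

const agent: AgentContext = { agentId: "bot-1", role: "viewer", workspaceId: "ws-1" };
console.log(canAccess(agent, "ws-1", "read"));  // → true
console.log(canAccess(agent, "ws-1", "write")); // → false
console.log(canAccess(agent, "ws-2", "read"));  // → false (wrong workspace)
```

The key design point is that the workspace check runs before the role check: a misconfigured role can never leak data across workspaces. Every allow/deny decision is also what you'd write to the audit log.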


The technical stack

Everything is TypeScript. Here's what each layer uses:

  • LangChain.js for the RAG pipeline: document loading, chunking strategy, embedding models, and vector store integrations
  • MCP protocol for the agent interface: typed, streaming, authenticated
  • pgvector / Pinecone for vector retrieval
  • Role-based access control built in from day one, not bolted on after

Why MCP?

The alternative is writing bespoke glue code for every integration. Every new agent, every new data source: custom connector, custom auth, custom error handling.

MCP gives AI agents a standard interface. One config, and your agent can call your knowledge base like any other typed API, with streaming, auth, and observability included.

This is what makes ORAG composable. You add a source once. Every agent that needs it just points at the MCP server.

The hard part: retrieval quality in production

RAG that works in a notebook is easy. RAG that works in production is not.

The gap is in the details: chunking strategy matters more than people think, retrieval scoring needs to be observable, and latency has to be predictable under load.

I spent more time on the observability layer than anything else: full request tracing across retrievals, tool calls, and completions, with latency breakdowns and retrieval quality scores in one view. Without this, you're flying blind when something degrades.
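A sketch of what per-stage tracing looks like, reduced to its core (the stage names and handler here are illustrative stand-ins, not ORAG's internals):

```typescript
// Wrap each pipeline stage so its latency is recorded even on failure.
// With one span per stage, a degraded retrieval step shows up immediately
// in the latency breakdown instead of hiding inside total request time.
interface Span {
  stage: string;
  ms: number;
}

async function traced<T>(stage: string, spans: Span[], fn: () => Promise<T>): Promise<T> {
  const start = Date.now();
  try {
    return await fn();
  } finally {
    spans.push({ stage, ms: Date.now() - start });
  }
}

// Hypothetical query handler: each stage is stubbed out for illustration.
async function handleQuery(query: string) {
  const spans: Span[] = [];
  const docs = await traced("retrieval", spans, async () => [`doc for ${query}`]);
  const answer = await traced("completion", spans, async () => `answer from ${docs.length} doc(s)`);
  return { answer, spans }; // spans = per-stage latency breakdown for this request
}

handleQuery("What is our refund policy?").then(({ spans }) => {
  for (const s of spans) console.log(`${s.stage}: ${s.ms}ms`);
});
```

The `finally` block is the important bit: a stage that throws still records its span, so failures and timeouts appear in the same breakdown as successes.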


What I learned

Access control is where enterprise AI actually breaks.

It's not the model. It's not the retrieval. It's "can this agent see this data?"

Getting that right, with proper audit trails and workspace isolation, is what separates a demo from something you'd trust with real company data.

The context layer is the missing infrastructure.

Most AI tooling focuses on the model layer. The harder, less glamorous problem is making sure the model has the right context: trustworthy, governed, and relevant. That's the layer I wanted to build.


Try it

ORAG is live at: https://orag.theanmolsharma.com/

GitHub: github.com/Anmol202005/ORAG

If you're building AI systems on top of internal data and want to talk about the retrieval or MCP layer, reach out. Always happy to discuss what works and what doesn't.

Follow me on X (@javanmol) for shorter takes on TypeScript and AI engineering.
