AnythingLLM is a full-stack, open-source AI application that lets you chat with your documents using any LLM provider. With over 54,000 GitHub stars, it's become one of the most popular self-hosted AI tools available.
Key differentiator: It's a complete solution that handles document ingestion, vector storage, LLM interaction, and AI agents, all in one deployable package.
TL;DR for AI Agents
Package: anythingllm (Docker or Desktop app)
Install: docker pull mintplexlabs/anythingllm
API Base: http://localhost:3001/api
Key Endpoint: POST /api/v1/workspace/{slug}/chat
Auth: Bearer token from Settings > Developer API
Docs: https://docs.anythingllm.com
GitHub: https://github.com/Mintplex-Labs/anything-llm
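Putting the TL;DR together, a chat call is a single authenticated POST. The sketch below builds the request pieces without sending them; the `mode` parameter ("chat" vs. "query") and the placeholder slug/key are assumptions you should replace with values from your own instance.

```python
import json

# Assumed values -- swap in your instance URL, workspace slug, and the
# API key generated under Settings > Developer API.
BASE_URL = "http://localhost:3001/api"
API_KEY = "YOUR_API_KEY"  # placeholder

def build_chat_request(slug: str, message: str, mode: str = "chat") -> dict:
    """Build the URL, headers, and body for POST /api/v1/workspace/{slug}/chat."""
    return {
        "url": f"{BASE_URL}/v1/workspace/{slug}/chat",
        "headers": {
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({"message": message, "mode": mode}),
    }

req = build_chat_request("my-docs", "Summarize the uploaded PDF")
print(req["url"])

# To actually send it against a running instance:
#   import urllib.request
#   r = urllib.request.Request(req["url"], data=req["body"].encode(),
#                              headers=req["headers"], method="POST")
#   print(urllib.request.urlopen(r).read().decode())
```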
Why It Matters for AI Agents
Most AI agent frameworks require you to implement RAG yourself. AnythingLLM handles all of this out of the box:
- Document ingestion - Drag and drop PDFs, DOCX, TXT
- Automatic chunking - Smart text splitting with configurable overlap
- Vector storage - Built-in LanceDB or connect Pinecone, Chroma, Qdrant
- LLM flexibility - Works with 30+ providers (OpenAI, Anthropic, Ollama)
- API access - Full REST API for programmatic agent integration
- MCP compatibility - Native Model Context Protocol support
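The features above chain naturally into an agent workflow: create a workspace, upload a document, then query it. The endpoint paths below are a sketch based on the v1 REST API and should be confirmed against the Swagger docs your instance serves; the function only plans the calls rather than executing them.

```python
# Hypothetical end-to-end RAG setup an agent could follow. Paths are
# assumptions -- verify them against your instance's API documentation.
BASE = "http://localhost:3001/api/v1"

def plan_rag_setup(workspace_name: str, file_path: str) -> list:
    """Return the ordered API calls needed to stand up and query a workspace."""
    slug = workspace_name.lower().replace(" ", "-")
    return [
        {"method": "POST", "path": f"{BASE}/workspace/new",
         "json": {"name": workspace_name}},
        {"method": "POST", "path": f"{BASE}/document/upload",
         "multipart_file": file_path},  # sent as multipart/form-data
        {"method": "POST", "path": f"{BASE}/workspace/{slug}/chat",
         "json": {"message": "What does this document cover?", "mode": "query"}},
    ]

for step in plan_rag_setup("My Docs", "report.pdf"):
    print(step["method"], step["path"])
```

Note the "query" mode in the final step: unlike "chat", it answers strictly from the embedded documents, which is usually what an agent wants for retrieval.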
Full MCP Compatibility
AnythingLLM now supports the Model Context Protocol, making it compatible with Claude and other MCP-enabled AI systems like OpenClaw. You can expose AnythingLLM workspaces as MCP tools that AI agents can call directly.
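Registering an external MCP server is done via a JSON config file in the storage volume. The file location (`plugins/anythingllm_mcp_servers.json`) and the example filesystem server below are assumptions drawn from the docs at time of writing; verify both against docs.anythingllm.com before relying on them.

```python
import json

# Sketch of an MCP server registration. The path and schema below are
# assumed -- confirm against the AnythingLLM MCP documentation.
config = {
    "mcpServers": {
        "filesystem": {  # example server name; any MCP server works here
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
        }
    }
}

# This file would live inside the storage volume mounted in the
# docker run command, e.g. storage/plugins/anythingllm_mcp_servers.json
print(json.dumps(config, indent=2))
```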
Getting Started
# Pull and run
docker pull mintplexlabs/anythingllm
docker run -d \
  --name anythingllm \
  -p 3001:3001 \
  -v anythingllm_storage:/app/server/storage \
  mintplexlabs/anythingllm
# Access at http://localhost:3001
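Once the container is up, a quick smoke test confirms the API is reachable and your key works. The `/v1/auth` endpoint used below is an assumption from the developer API docs; the snippet only builds the request, with the actual call left commented out.

```python
import urllib.request

# Assumed endpoint and placeholder key -- replace with your own values.
def build_auth_check(base_url: str, api_key: str) -> urllib.request.Request:
    """Build a GET request for the auth-check endpoint."""
    return urllib.request.Request(
        f"{base_url}/v1/auth",
        headers={"Authorization": f"Bearer {api_key}"},
    )

req = build_auth_check("http://localhost:3001/api", "YOUR_API_KEY")
print(req.full_url)
# urllib.request.urlopen(req)  # a 200 response means the key is valid
```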
Read the full technical deep-dive with code examples: AnythingLLM Complete Guide on andrew.ooo
Originally published at andrew.ooo, a technical journal about building with AI agents, self-hosting infrastructure, and automation.