nanobot just crossed 39K stars on GitHub. It's from HKU's Data Science lab and it positions itself as the "ultra-lightweight personal AI agent." I spent a day with it to figure out if that claim holds up and when you should use it over Claude Code, Goose, or pi-mono.
What nanobot Actually Is
It's a coding agent — like Claude Code or Cursor's agent mode — but built with roughly 99% fewer lines of code than those tools. The entire agent core fits in a few hundred lines of Python. It connects to any LLM provider, reads your codebase, runs terminal commands, and edits files.
The philosophy: strip everything except what matters. No plugin system, no marketplace, no GUI. Just an agent that reads, thinks, and acts.
Install (30 seconds)
# Fastest
uv tool install nanobot-ai
# Or pip
pip install nanobot-ai
# Or from source (latest)
git clone https://github.com/HKUDS/nanobot.git
cd nanobot && pip install -e .
Run it:
nanobot
That's it. No config files. No API key setup wizard. It prompts you for your key on first run.
When to Use nanobot vs Full-Featured Agents
Use nanobot when:
- You want something that starts in <1 second (Claude Code takes 3-5s)
- You want to read and modify the agent's source code (it's small enough to understand in an afternoon)
- You're doing research on AI agents and need a clean base to experiment with
- You want a minimal agent on a low-resource machine (CI server, Raspberry Pi, cheap VPS)
- You don't need MCP servers, skills, or hooks
Use a full-featured agent (Claude Code, Cursor, Goose) when:
- You need the MCP ecosystem (5,600+ servers on Protodex)
- You need hooks, plugins, or automated workflows
- You want deep model integration with context management
- You're working on large codebases where tooling matters
Use Goose when:
- You want extensibility with a plugin system
- You need browser automation built in
- You want community extensions
How nanobot Works Under the Hood
The architecture is dead simple:
User prompt → LLM call → Tool use (file read/write, shell) → LLM response → Loop
No routing, no planners, no multi-agent orchestration. One loop. The LLM decides what to do, does it, reports back. This is why it's fast — there's nothing between you and the model except the tool execution layer.
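The loop above can be sketched in a few lines of Python. This is an illustrative reconstruction, not nanobot's actual code — the tool names, message format, and the pluggable `llm` callable are my own assumptions about what such a loop looks like:

```python
import subprocess

# Illustrative sketch of the "prompt -> LLM -> tool -> loop" agent core.
# The tool registry and message schema here are assumptions, not nanobot's API.
TOOLS = {
    "read_file":  lambda path: open(path).read(),
    "write_file": lambda path, text: open(path, "w").write(text),
    "shell":      lambda cmd: subprocess.run(
        cmd, shell=True, capture_output=True, text=True
    ).stdout,
}

def run_agent(prompt, llm, max_steps=10):
    """Send the conversation to the LLM; if it requests a tool, run it,
    append the result, and loop. Stop when the LLM gives a final answer."""
    messages = [{"role": "user", "content": prompt}]
    for _ in range(max_steps):
        reply = llm(messages)            # one LLM call per step
        if reply.get("tool") is None:    # no tool requested: final answer
            return reply["content"]
        result = TOOLS[reply["tool"]](*reply["args"])
        messages.append({"role": "tool", "content": str(result)})
    return "step limit reached"
```

Swap in any provider's chat API as `llm` and this is, conceptually, the whole agent: the model drives, the loop just executes and reports back.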
Limitations (Be Honest)
- No MCP support — can't connect to databases, APIs, or browsers through MCP. If you need that, check Protodex for servers that work with any MCP-compatible editor.
- No memory across sessions — each run starts fresh. No persistent context.
- Small community — compared to full-featured agents' ecosystems, there are fewer templates, guides, and pre-built workflows.
- Research project — it's from a university lab. Updates come when researchers have time, not on a product roadmap.
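The no-memory limitation is easy to work around if you're scripting around the agent: persist the transcript yourself and prepend it on the next run. A minimal sketch — the file name and message format are hypothetical, not anything nanobot provides:

```python
import json
import pathlib

# Hypothetical cross-session memory: save the conversation to disk
# and reload it at the start of the next run.
HISTORY = pathlib.Path("agent_history.json")

def load_history():
    """Return the saved transcript, or an empty list on first run."""
    return json.loads(HISTORY.read_text()) if HISTORY.exists() else []

def save_history(messages):
    """Persist the transcript for the next session."""
    HISTORY.write_text(json.dumps(messages))

# Usage: prepend old context, append the new prompt, save afterwards.
messages = load_history() + [{"role": "user", "content": "continue where we left off"}]
save_history(messages)
```

It's a crude form of memory (the transcript grows without summarization), but it shows how little glue code a small agent core needs around it.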
The Verdict
nanobot is the right tool if you value simplicity and transparency over features. You can read every line of its code in a sitting. That makes it perfect for learning how AI agents work, for building custom agents on top of it, or for environments where you need something tiny and fast.
If you need the ecosystem, use a full-featured MCP-compatible agent (Claude Code, Cursor, or Goose) with MCP servers. If you want to understand what's happening under the hood, nanobot is the best teacher.
Building with AI agents? Browse 5,618 MCP servers with one-click install for Claude Desktop, Cursor, Goose, and more at protodex.io