
John Nichev

Why I Built Selectools (and What I Learned Along the Way)

Every AI agent framework makes the same promise: "connect your LLM to tools and go." Then you start building.

You discover that LangChain needs 5 packages to do what should take 1. That LCEL's | operator hides a Runnable protocol that breaks your debugger. That LangSmith costs money to see what your own code is doing. That when your agent graph pauses for human input, LangGraph restarts the entire node from scratch.

I hit every one of these at work. We were building AI agents for real users: not demos, not prototypes, but production systems handling actual customer requests. The existing frameworks weren't built for this.

So I built selectools.

What I actually needed

  1. Tool calling that just works. Define a function, the LLM calls it. No adapter layers, no schema gymnastics. Works the same across OpenAI, Anthropic, Gemini, and Ollama.

  2. Traces without a SaaS. Every run() should tell me exactly what happened: which tools were called, why, how long each step took, and what it cost. Not "sign up for our platform to see your own logs."

  3. Guardrails that ship with the agent. PII detection, injection defense, and topic blocking: configured once, enforced everywhere. Not a separate package to evaluate.

  4. Multi-agent orchestration in plain Python. When I need 3 agents to collaborate, I want Python routing functions. Not a state graph DSL, not a compile step, not Pregel channels.

  5. One command to deploy. selectools serve agent.yaml gives me HTTP endpoints, SSE streaming, and a chat playground. Not "install FastAPI, create an app, add routes, configure CORS, handle SSE..."
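None of the code below is selectools source, but the idea behind point 1 is mechanical enough to sketch: derive a provider-neutral tool schema from an ordinary Python function using only the standard library. The `tool_schema` helper and the type mapping here are my illustration, not the library's API.

```python
import inspect
from typing import get_type_hints

def tool_schema(fn):
    """Derive a JSON-schema-style tool description from a plain function."""
    hints = get_type_hints(fn)
    hints.pop("return", None)
    type_map = {int: "integer", float: "number", str: "string", bool: "boolean"}
    properties = {name: {"type": type_map.get(tp, "string")} for name, tp in hints.items()}
    # Parameters without defaults are required.
    required = [
        name for name, p in inspect.signature(fn).parameters.items()
        if p.default is inspect.Parameter.empty
    ]
    return {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "parameters": {"type": "object", "properties": properties, "required": required},
    }

def get_weather(city: str, unit: str = "celsius") -> str:
    """Return the current weather for a city."""
    return f"Sunny in {city}"

schema = tool_schema(get_weather)
# schema["name"] == "get_weather"; only "city" is required
```

Once every provider receives the same schema shape, "works the same across OpenAI, Anthropic, Gemini, and Ollama" reduces to a per-provider serialization step.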

What selectools looks like today

A 3-agent pipeline is one line:

result = AgentGraph.chain(planner, writer, reviewer).run("Write a blog post")

A composable pipeline uses | on plain functions:

pipeline = summarize | translate | format
result = pipeline.run("Long article text...")
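There is no Runnable protocol hiding behind that |; it is ordinary operator overloading on a small wrapper type. As a rough illustration of the mechanism (Step is a name I made up for this sketch; selectools' own pipeline type is richer):

```python
class Step:
    """Minimal composable pipeline step: chains plain functions with |."""

    def __init__(self, *fns):
        self.fns = list(fns)

    def __or__(self, other):
        # Accept either another Step or a bare callable on the right.
        other = other if isinstance(other, Step) else Step(other)
        return Step(*self.fns, *other.fns)

    def run(self, value):
        # Feed each step's output into the next, left to right.
        for fn in self.fns:
            value = fn(value)
        return value

summarize = Step(lambda text: text[:20])
shout = Step(str.upper)
exclaim = Step(lambda text: text + "!")

pipeline = summarize | shout | exclaim
result = pipeline.run("long article text about agents")
# "LONG ARTICLE TEXT AB!"
```

Because each step is a plain function, stepping through `run` in a debugger walks the calls in order, and an exception surfaces as a normal Python traceback pointing at the failing step.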

Human-in-the-loop pauses at the yield point and resumes there:

async def review(state):
    analysis = await expensive_work(state)  # runs once, not twice
    decision = yield InterruptRequest(prompt="Approve?")
    state.data["approved"] = decision == "yes"
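The resume behavior falls out of Python's async generator protocol: `asend` runs the node up to the yield, and a second `asend` delivers the human's answer into the suspended frame. Here is a self-contained toy driver showing why expensive_work runs exactly once; the InterruptRequest, State, and drive definitions are stand-ins I wrote for this sketch, not selectools' API.

```python
import asyncio
from dataclasses import dataclass, field

@dataclass
class InterruptRequest:        # stand-in for the library's interrupt type
    prompt: str

@dataclass
class State:
    data: dict = field(default_factory=dict)

calls = {"expensive": 0}

async def expensive_work(state):
    calls["expensive"] += 1
    return "analysis"

async def review(state):
    analysis = await expensive_work(state)  # runs once, not twice
    decision = yield InterruptRequest(prompt="Approve?")
    state.data["approved"] = decision == "yes"

async def drive(node, state, answer):
    """Run the node to its yield point, deliver the answer, resume in place."""
    gen = node(state)
    request = await gen.asend(None)   # executes up to the yield
    print(request.prompt)             # a real system would ask the human here
    try:
        await gen.asend(answer)       # resumes *after* the yield
    except StopAsyncIteration:
        pass
    return state

state = asyncio.run(drive(review, State(), "yes"))
# state.data["approved"] is True; expensive_work ran exactly once
```

The generator frame holds all local variables (including `analysis`) while suspended, so nothing before the yield re-executes on resume.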

Deploy with one command:

selectools serve agent.yaml

The numbers

  • 4,612 tests at 95% coverage across Python 3.9-3.13
  • 9 critical security bugs fixed in a pre-launch audit (5-agent parallel bug hunt, 56 total findings)
  • 44 interactive module docs with runnable examples, stability badges, and Copy Markdown buttons
  • 40 real-API evaluations against OpenAI, Anthropic, and Gemini
  • 76 runnable examples
  • 50 built-in evaluators (no paid service needed)
  • 152 model definitions with pricing data
  • Apache-2.0 license

The latest milestone: a visual agent builder

The newest addition is a visual agent builder that runs entirely in your browser. Drag and drop nodes, wire up edges, configure models and tools, then export to YAML or Python. It's deployed on GitHub Pages at https://selectools.dev/builder/ with zero install required. No paid desktop app, no subscription. Just open the URL and start building.

What I'd tell you honestly

selectools is smaller than LangChain. The community is young. If you need 50 integrations and a managed platform today, LangChain is the safer bet.

But if you want a library that stays out of your way, where routing is a Python function, errors are Python tracebacks, and you don't need a paid service to see what your agent did, give it a try.

pip install selectools
