Miki Makhlevich
We built a runtime for background AI agents

Most LLM frameworks help you talk to agents, but the most useful agents should be treated like services.

They run in the background.
They wake up on a schedule.
They react to webhooks.
They shouldn't wait for a human to manually trigger them.

That gap is why we built langchain-runner, a small runtime for deploying always-on, trigger-driven AI agents using LangChain or LangGraph.

The mental model is simple:
langchain-runner is a daemon for AI agents. You give it an agent, define what wakes it up, and it handles execution and tracking.

Here's what that looks like in practice:

from langchain_runner import Runner
from my_agent import agent  # any LangChain or LangGraph agent

runner = Runner(agent)

# Wake the agent at 09:00 every day; the returned string is the task
# handed to the agent.
@runner.cron("0 9 * * *")
async def daily_review():
    return "Review yesterday's activity"

runner.serve()

That's it.

You now have an agent that wakes up every morning and runs in the background. No chat UI, no blocking requests, no FastAPI glue.
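The schedule string is a standard five-field crontab expression; nothing here is specific to langchain-runner, it's just the usual crontab convention. A quick breakdown of "0 9 * * *":

```python
# Standard crontab field order: minute, hour, day-of-month, month, day-of-week.
fields = dict(zip(
    ["minute", "hour", "day_of_month", "month", "day_of_week"],
    "0 9 * * *".split(),
))
# minute "0" and hour "9", with "*" everywhere else: fires at 09:00 every day.
```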

Using webhooks as triggers is just as easy:

@runner.webhook("/github")
async def on_github_event(payload: dict):
    return f"Handle GitHub event: {payload['action']}"


This creates a webhook endpoint at /webhook/github.
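Because a handler just returns an instruction string for the agent, routing logic stays plain Python. A sketch of what a richer handler body could do — `instruction_for` is a hypothetical helper, and the field names follow GitHub's webhook payload shapes:

```python
# Hypothetical routing helper: map a GitHub webhook payload to an
# instruction string for the agent. Not part of langchain-runner.
def instruction_for(payload: dict) -> str:
    action = payload.get("action", "unknown")
    if action == "opened" and "pull_request" in payload:
        # pull_request events carry the PR under the "pull_request" key
        return f"Review pull request #{payload['pull_request']['number']}"
    if action == "opened" and "issue" in payload:
        # issues events carry the issue under the "issue" key
        return f"Triage issue #{payload['issue']['number']}"
    return f"Handle GitHub event: {action}"
```

A handler like `on_github_event` above could simply `return instruction_for(payload)`.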

What makes this different from existing tools is the focus on activation, not interaction.

  • LangServe is great when you want an API.
  • LangGraph is great when you want structured workflows.
  • Celery and Airflow are great when you want heavy orchestration.

langchain-runner is for the middle ground: small, autonomous agents that wake up because something happened.

Stop treating every agent like a chatbot.
pip install langchain-runner

👉 GitHub: https://github.com/tadata-org/langchain-runner

👉 PyPI: https://pypi.org/project/langchain-runner/
