Building an AI agent takes an afternoon. Deploying it to production takes a week.
Whether you're using OpenClaw, a custom LangChain setup, or your own agent code — the infrastructure story is the same. Docker, networking, process management, isolation, scaling. All that work just to get an agent running with an API endpoint.
## The Current State of Agent Deployment
Here's what it looks like today:
- **Compute:** set up a VPS or Docker container
- **Networking:** configure ports, domains, SSL
- **Process management:** keep the agent alive; handle crashes and restarts
- **Isolation:** if you're running multiple agents, keep them from interfering with each other
- **Channel integrations:** configure WhatsApp, Telegram, and Slack webhooks and tokens
- **Scaling:** if you need more than one agent, repeat all of the above
That's a full infrastructure project before you've even started configuring the agent itself.
## What If That Whole Layer Was an API Call?
That's what we built with GoPilot.
```bash
curl -X POST https://api.gopilot.dev/v1/agents \
  -H "X-API-Key: gopt_live_YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Research Assistant",
    "model": "anthropic/claude-sonnet-4-6",
    "llm_keys": [{"provider": "anthropic", "api_key": "sk-ant-YOUR_KEY"}],
    "tool_integrations": {
      "brave_search": {"enabled": true, "credentials": {"apiKey": "YOUR_BRAVE_KEY"}}
    }
  }'
```
That single request creates a running agent inside its own isolated microVM. Under a second. The agent has web search capabilities, its own filesystem, and a chat endpoint you can call immediately.
We're launching with OpenClaw as the first supported runtime — which means you also get access to 12+ messaging channels (WhatsApp, Telegram, Slack, Discord, Signal, etc.) out of the box. More runtimes are on the roadmap.
## Why MicroVMs Instead of Containers?
This is specifically for AI agents — software that generates and executes code, calls external APIs, and often runs in multi-tenant environments.
Docker containers share a kernel with the host. An agent that executes LLM-generated code could potentially exploit a container escape to access other tenants or the host system. Container escapes are found regularly.
MicroVMs run their own kernel. The isolation boundary is at the hardware virtualization level. The attack surface is dramatically smaller. For a platform that runs other people's agent code, that's not optional.
## What the API Gives You
**Agent lifecycle:**

```bash
POST   /v1/agents              # Create (starts immediately)
GET    /v1/agents/:id          # Status
POST   /v1/agents/:id/start    # Start a stopped agent
POST   /v1/agents/:id/stop     # Stop
POST   /v1/agents/:id/restart  # Restart
DELETE /v1/agents/:id          # Delete agent + VM + data
```
**Configuration (workspace files):**

```bash
# Push markdown files that define agent behavior
PATCH /v1/agents/:id/workspace-files
# Body: {"files": {"SOUL.md": "You are a helpful...", "IDENTITY.md": "Name: ..."}}
```
Workspace files are markdown files that configure different aspects of the agent: personality (SOUL.md), identity (IDENTITY.md), available tools (TOOLS.md), memory (MEMORY.md), and more. Think of them as structured system prompts.
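Filled in, a workspace-files push is one request. The agent ID and file contents below are placeholders for illustration:

```bash
# Push a SOUL.md and IDENTITY.md to an existing agent.
# AGENT_ID is a placeholder; use the id returned by POST /v1/agents.
curl -X PATCH "https://api.gopilot.dev/v1/agents/AGENT_ID/workspace-files" \
  -H "X-API-Key: gopt_live_YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "files": {
      "SOUL.md": "You are a meticulous research assistant. Always cite sources.",
      "IDENTITY.md": "Name: Scout"
    }
  }'
```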
**Chat:**

```bash
POST /v1/agents/:id/chat
# Body: {"message": "What are the latest trends in..."}
```
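As a complete request (agent ID and API key are placeholders):

```bash
# Send a message to a running agent and print the reply.
curl -s -X POST "https://api.gopilot.dev/v1/agents/AGENT_ID/chat" \
  -H "X-API-Key: gopt_live_YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{"message": "What are the latest trends in AI agent deployment?"}'
```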
**Files:**

```bash
POST /v1/agents/:id/files/upload  # Upload files to the VM
GET  /v1/agents/:id/files         # List files
GET  /v1/agents/:id/files/read    # Read a file
```
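A sketch of the upload-then-read flow. It assumes a multipart `file` form field on upload and a `path` query parameter on read; both are assumptions, so check the API reference for the exact field names:

```bash
# Upload a local CSV into the agent's VM
# (assumes a multipart form field named "file").
curl -X POST "https://api.gopilot.dev/v1/agents/AGENT_ID/files/upload" \
  -H "X-API-Key: gopt_live_YOUR_KEY" \
  -F "file=@./dataset.csv"

# Read it back (assumes a "path" query parameter).
curl "https://api.gopilot.dev/v1/agents/AGENT_ID/files/read?path=dataset.csv" \
  -H "X-API-Key: gopt_live_YOUR_KEY"
```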
**Tool integrations and channels:**

```bash
PATCH /v1/agents/:id/tool-integrations
# Body: {"brave_search": {"enabled": true, "credentials": {"apiKey": "..."}}}

PATCH /v1/agents/:id/channels
# Body: {"slack": {"enabled": true, "bot_token": "xoxb-..."}}
```
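For example, enabling Slack on an existing agent (agent ID and bot token are placeholders):

```bash
# Turn on the Slack channel for a running agent.
curl -X PATCH "https://api.gopilot.dev/v1/agents/AGENT_ID/channels" \
  -H "X-API-Key: gopt_live_YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{"slack": {"enabled": true, "bot_token": "xoxb-YOUR_TOKEN"}}'
```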
## The Speed Question
VM-based solutions typically take 20-30 seconds to become ready. We've engineered the provisioning layer to get that under one second. The details of how are proprietary, but the result is that creating an agent via API feels like calling a function, not provisioning infrastructure.
You can verify this yourself: the response time on `POST /v1/agents` includes a running VM.
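One way to check: wrap the create call in `time` and look at the wall-clock (`real`) figure, which covers the full request including VM provisioning:

```bash
# Time agent creation end to end (key is a placeholder).
time curl -s -X POST https://api.gopilot.dev/v1/agents \
  -H "X-API-Key: gopt_live_YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Timing Test",
    "model": "anthropic/claude-sonnet-4-6",
    "llm_keys": [{"provider": "anthropic", "api_key": "sk-ant-YOUR_KEY"}]
  }' > /dev/null
```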
## Who This Is For
- Developers building AI-powered products who want agents accessible via API without managing infrastructure
- OpenClaw users who love the agent but hate the deployment process
- Startups offering multi-tenant agent platforms where customer isolation is critical
- Anyone who wants to go from agent config to production API in under a minute
## Try It
Free tier available.
https://gopilot.dev