I run a Telegram AI bot where every user gets their own Docker container. Here is how that works on a single cheap GCP instance without Kubernetes.
Why Per-User Containers?
My AI companion bot gives each user a persistent agent with its own memory, personality file, and schedule. Users should never see each other's data, and each agent needs an isolated filesystem.
Options considered:
- Shared process, per-user directories: Cheapest but one crash kills everyone
- Kubernetes: Overkill for 10-50 users on one machine
- Docker containers, managed by gateway: Just right
The Architecture
Gateway container (always running)
|
+-- User A container (started on demand)
+-- User B container (started on demand)
+-- User C container (idle, stopped after 60min)
The gateway manages container lifecycle:
- Start on demand: When a message arrives for user X, check whether their container exists. If not, docker create + docker start it.
- Idle cleanup: Every 5 minutes, check which containers have not received a message recently and stop the idle ones.
- Stopped containers use zero CPU/RAM: Docker keeps the filesystem, so state persists.
Key Code
async function ensureContainer(userId: string): Promise<string> {
  const name = `adola-user-${userId.slice(0, 8)}`;
  try {
    // Container already exists: start it if it is not running.
    const info = await docker.inspect(name);
    if (!info.State.Running) {
      await docker.start(name);
    }
    return name;
  } catch {
    // inspect() threw, so the container does not exist yet: create it
    // with the user's workspace bind-mounted, on the shared network.
    await docker.create({
      name,
      Image: "adola-agent:latest",
      HostConfig: {
        Binds: [`/data/users/${userId}/workspace:/workspace`],
        NetworkMode: "adola-net"
      }
    });
    await docker.start(name);
    return name;
  }
}
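The idle-cleanup half of the lifecycle is not shown above. Here is a sketch of what the 5-minute sweep could look like, assuming the gateway keeps a lastMessageAt timestamp per user and the same docker wrapper with a stop() method (both are assumptions; isIdle and sweepIdle are hypothetical helper names):

```typescript
// Hypothetical interface standing in for the docker wrapper used above.
interface DockerClient {
  stop(name: string): Promise<void>;
}

const IDLE_MS = 60 * 60 * 1000; // stop after 60 minutes without a message

export function isIdle(lastMessageAt: number, now: number, idleMs = IDLE_MS): boolean {
  return now - lastMessageAt > idleMs;
}

// Run every 5 minutes: stop containers whose user has gone quiet.
// Stopped containers keep their filesystem, so nothing is lost.
export async function sweepIdle(
  docker: DockerClient,
  lastMessageAt: Map<string, number>, // userId -> epoch ms of last message
  now = Date.now()
): Promise<string[]> {
  const stopped: string[] = [];
  for (const [userId, ts] of lastMessageAt) {
    if (isIdle(ts, now)) {
      await docker.stop(`adola-user-${userId.slice(0, 8)}`);
      stopped.push(userId);
    }
  }
  return stopped;
}
```

Stopping rather than removing is the key design choice: the next ensureContainer call only needs a start, not a full create.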
Memory Usage
On my e2-medium (4GB RAM):
- Gateway + Postgres + Caddy: ~400MB baseline
- Each active user container: ~150-200MB
- Stopped containers: 0MB
With 8 users, peak usage during heartbeat cycle (all containers briefly active): ~2GB. Comfortable headroom.
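Given those per-container numbers, the create() call could also cap memory so one runaway agent cannot take the whole instance; the Docker Engine API accepts Memory and MemorySwap (in bytes) under HostConfig. A sketch, where the helper name and the 256 MB default are my own choices, not from the running system:

```typescript
const MB = 1024 * 1024;

// Hypothetical helper producing the HostConfig for docker.create().
export function userHostConfig(userId: string, memLimitMb = 256) {
  return {
    Binds: [`/data/users/${userId}/workspace:/workspace`],
    NetworkMode: "adola-net",
    Memory: memLimitMb * MB,     // hard memory cap, in bytes
    MemorySwap: memLimitMb * MB, // equal to Memory: no extra swap allowed
  };
}
```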
The Gateway Manages Docker via Socket
The gateway container mounts /var/run/docker.sock so it can create, start, and stop sibling containers on the host daemon. This is the "Docker outside of Docker" pattern. Note that the socket grants root-equivalent control of the host, so only the trusted gateway gets it; user containers never see it.
gateway:
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
    - /data:/data
Networking
All containers join the same Docker bridge network (adola-net). The gateway calls user containers by name:
http://adola-user-abc12345:18789/v1/chat/completions
Docker DNS handles resolution. No port mapping needed.
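Since the container name from ensureContainer doubles as a DNS name on adola-net, routing a request reduces to string construction. A minimal sketch (the helper name is hypothetical; the port and path are the ones shown above):

```typescript
const AGENT_PORT = 18789;

// Build the in-network URL for a user's agent container.
// Docker's embedded DNS resolves the container name on adola-net.
export function agentUrl(userId: string): string {
  return `http://adola-user-${userId.slice(0, 8)}:${AGENT_PORT}/v1/chat/completions`;
}
```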
What I Would Change
For 100+ users, I would add:
- Container pooling (pre-warm a few idle containers)
- Horizontal scaling (multiple gateway instances with consistent hashing)
- Prometheus metrics on container lifecycle
But for the 8-50 user range, this simple approach works perfectly.
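For illustration, the consistent-hashing idea could be as simple as rendezvous (highest-random-weight) hashing: each user deterministically maps to one gateway, and adding or removing a gateway only moves the users it owned. This sketch is my own, not part of the running system:

```typescript
// FNV-1a: an arbitrary but deterministic 32-bit string hash.
function fnv1a(s: string): number {
  let h = 0x811c9dc5;
  for (let i = 0; i < s.length; i++) {
    h ^= s.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0;
  }
  return h >>> 0;
}

// Rendezvous hashing: the owner is the gateway with the highest
// hash(user, gateway) score. Assumes a non-empty gateway list.
export function ownerGateway(userId: string, gateways: string[]): string {
  let best = gateways[0];
  let bestScore = -1;
  for (const gw of gateways) {
    const score = fnv1a(`${userId}:${gw}`);
    if (score > bestScore) {
      bestScore = score;
      best = gw;
    }
  }
  return best;
}
```

Any replica receiving a Telegram update can then forward it to ownerGateway(userId, gateways) without shared routing state.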
Try the Bot
The system is live: t.me/adola2048_bot - each user gets their own isolated AI agent container on Telegram.