Managing more than two or three automated scripts in production is where things get messy. You end up with a collection of Python files scattered across servers, credentials hardcoded or stored in plaintext somewhere, no visibility into what's running or failing, and deployments that require SSH access and manual intervention.
I built BotFarm to solve this. It's a self-hosted platform that lets you create, deploy, monitor and maintain containerized Python bots from a web dashboard — no SSH required.
## The problem with ad-hoc bot management
The typical evolution of a bot infrastructure goes like this:
You write a script. It works. You add another. Then five more. At some point you have fifteen scripts running on a server, some as cron jobs, some as systemd services, some launched manually. Credentials are in config files with 644 permissions. Logs go to files that nobody reads until something breaks. Deploying a new version means SSHing in, pulling the repo, restarting the service, and hoping nothing else broke.
This is fine for personal projects. It's not fine when other developers need to deploy and manage their own bots, when you need an audit trail, or when credentials need to be kept secure.
BotFarm replaces all of that with a centralized dashboard.
## Architecture
Every bot runs in its own Docker container with resource limits (512MB RAM, 0.5 CPUs by default). The dashboard never touches the Docker socket directly — all container operations go through tecnativa/docker-socket-proxy, which exposes only the operations the dashboard actually needs.
```
                     Internet
                        │
                        ▼ :80 / :443
                 ┌──────────────┐
                 │    Nginx     │  TLS · reverse proxy · static assets
                 └──────┬───────┘
                        │ :8080 (internal)
                        ▼
          ┌──────────────────┐    :2375     ┌───────────────────────┐
          │    Dashboard     │ ───────────▶ │  Docker Socket Proxy  │
          │  FastAPI + React │  (internal)  │    (allowlist only)   │
          └──────┬───────────┘              └──────────┬────────────┘
                 │                                     │ /var/run/docker.sock
                 │ :3306 (internal)                    ▼
                 ▼                               Docker Engine
          ┌─────────────┐                              │
          │   MariaDB   │             ┌────────────────┼───────────────┐
          └─────────────┘           bot-a            bot-b           bot-n
                                   [512MB]          [512MB]         [512MB]
                                   [0.5CPU]         [0.5CPU]        [0.5CPU]
```
The Docker Socket Proxy is not optional — it's the architectural decision that makes this safe to run in a multi-developer environment. Without it, a compromised dashboard container has full control over the Docker daemon. With it, the blast radius is contained to the allowlisted operations: CONTAINERS, IMAGES, BUILD, NETWORKS.
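A minimal docker-compose sketch of that wiring could look like the following. The service and image names for the dashboard are illustrative; the environment-variable allowlist is the standard tecnativa/docker-socket-proxy mechanism:

```yaml
services:
  docker-socket-proxy:
    image: tecnativa/docker-socket-proxy:latest
    environment:
      # Only these operation groups are exposed to the dashboard
      CONTAINERS: 1
      IMAGES: 1
      BUILD: 1
      NETWORKS: 1
      POST: 1   # required for start/stop/build, not just read-only queries
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks:
      - internal

  dashboard:
    image: botfarm/dashboard   # hypothetical image name
    environment:
      DOCKER_HOST: tcp://docker-socket-proxy:2375
    networks:
      - internal

networks:
  internal:
    internal: true   # no route to the outside world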
## Credential management
Storing bot credentials is the part most people get wrong. Plaintext in environment files, base64-encoded strings passed as environment variables, secrets committed to git — all of these are real patterns I've seen in production.
BotFarm encrypts all credentials with AES-256-GCM before storing them in the database. Each encryption operation uses a random IV, so the same plaintext produces a different ciphertext every time. The master key lives in /etc/botfarm.env with 600 permissions, generated at install time and never touched again.
```python
import os
import base64

from cryptography.hazmat.primitives.ciphers.aead import AESGCM


def encrypt(plaintext: str, master_key: bytes) -> str:
    iv = os.urandom(12)  # 96-bit random IV per encryption
    aesgcm = AESGCM(master_key)
    ciphertext = aesgcm.encrypt(iv, plaintext.encode(), None)
    # Store IV + ciphertext together
    return base64.b64encode(iv + ciphertext).decode()


def decrypt(encrypted: str, master_key: bytes) -> str:
    data = base64.b64decode(encrypted)
    iv, ciphertext = data[:12], data[12:]
    aesgcm = AESGCM(master_key)
    return aesgcm.decrypt(iv, ciphertext, None).decode()
```
At runtime, decrypted credentials are injected into each bot container as environment variables — they exist in memory during execution and are never written to disk.
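A sketch of what that injection step could look like with the Docker SDK for Python, including the default resource limits from the architecture above. `build_bot_environment` and `launch_bot` are illustrative names, not BotFarm's actual API:

```python
import json


def build_bot_environment(decrypted_creds: dict) -> dict:
    # Serialize decrypted credentials into a single env var for the container
    return {"BOT_CREDENTIALS": json.dumps(decrypted_creds)}


def launch_bot(bot_id: int, image: str, decrypted_creds: dict):
    # Hypothetical launcher; resource limits match the defaults above
    import docker  # Docker SDK for Python

    client = docker.from_env()
    return client.containers.run(
        image,
        name=f"bot_{bot_id}",
        detach=True,
        mem_limit="512m",        # 512MB RAM cap
        nano_cpus=500_000_000,   # 0.5 CPUs
        environment=build_bot_environment(decrypted_creds),
    )
```

Because the credentials only ever exist as an in-memory environment mapping, nothing decrypted touches the filesystem.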
## Real-time log streaming via WebSocket
One of the more interesting technical problems was streaming container logs to the browser in real time. The naive approach — polling a REST endpoint every few seconds — introduces latency and hammers the database unnecessarily.
The solution is a WebSocket endpoint that reads directly from the Docker container's log stream:
```python
import asyncio


@router.websocket("/ws/logs/{bot_id}")
async def stream_logs(websocket: WebSocket, bot_id: int):
    await websocket.accept()
    try:
        container = docker_client.containers.get(f"bot_{bot_id}")
        log_stream = container.logs(stream=True, follow=True, tail=50)
        loop = asyncio.get_running_loop()
        while True:
            # The SDK's log generator blocks, so pull each line in a worker
            # thread instead of stalling the event loop
            line = await loop.run_in_executor(None, next, log_stream, None)
            if line is None:
                break
            await websocket.send_text(line.decode().strip())
    except Exception as exc:
        await websocket.send_text(f"Stream error: {exc}")
    finally:
        await websocket.close()
```
The same pattern applies to build logs — when you deploy a new version of a bot, the Docker image build output streams directly to the browser line by line via a separate WebSocket endpoint.
## Bot versioning with visual diff
Every time you save a new version of a bot's code, BotFarm stores the previous version and generates a diff. From the dashboard you can see the full history, compare any two versions side by side, and roll back to any previous version in one click.
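Generating the diff itself needs nothing beyond the standard library. A sketch of the idea (the helper name and version labels are illustrative, not BotFarm's actual schema):

```python
import difflib


def make_diff(old: str, new: str, bot_name: str) -> str:
    # Unified diff between two stored versions of a bot's source
    return "".join(difflib.unified_diff(
        old.splitlines(keepends=True),
        new.splitlines(keepends=True),
        fromfile=f"{bot_name}@v1",
        tofile=f"{bot_name}@v2",
    ))


print(make_diff("print('hi')\n", "print('hello')\n", "scraper"))
```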
The code editor is Monaco — the same editor that powers VS Code — embedded directly in the dashboard. You get syntax highlighting, autocomplete and error detection without leaving the browser.
## Writing a bot
Bots are standard Python scripts with access to bot_logger, a shared library that handles logging and metrics:
```python
import os
import json

from bot_logger import BotLogger

logger = BotLogger()
exit_code = 0
try:
    creds = json.loads(os.environ.get("BOT_CREDENTIALS", "{}"))
    logger.log("INFO", "Bot started")
    records = process_data(creds)
    logger.log("INFO", f"Cycle complete: {records} records processed")
    logger.metric("records_processed", records)
except Exception:
    exit_code = 1  # report the failure to the dashboard, then re-raise
    raise
finally:
    logger.close(exit_code=exit_code)
```
The finally block is important — logger.close() writes the final execution status to the database and releases resources. If it doesn't run, the dashboard shows the bot as still running.
## Security model
A few decisions worth explaining:
**JWT with automatic refresh.** Tokens expire after one hour. The frontend automatically refreshes at the 50-minute mark without interrupting the user's session. On logout, the token's JTI is revoked in the database — subsequent requests with that token are rejected even if it hasn't expired.
**Rate limiting on login.** Ten attempts per minute per IP using slowapi. After that, requests are rejected with 429 until the window resets. Simple, effective, no CAPTCHA complexity.
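slowapi applies this as a decorator, but the policy itself is just a sliding window. A stdlib-only sketch of the equivalent logic (the `now` parameter exists only to make the window testable):

```python
import time
from collections import defaultdict, deque

WINDOW, LIMIT = 60.0, 10  # ten attempts per rolling minute

_attempts = defaultdict(deque)


def allow_login(ip, now=None):
    # Drop attempts older than the window, then count what's left
    now = time.monotonic() if now is None else now
    q = _attempts[ip]
    while q and now - q[0] > WINDOW:
        q.popleft()
    if len(q) >= LIMIT:
        return False  # caller responds with HTTP 429
    q.append(now)
    return True
```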
**Audit log as append-only.** The database user that the application uses (botfarm_app) has INSERT and SELECT on the audit log table — never UPDATE or DELETE. This is enforced at the database privilege level, not just in application code. An audit log you can delete is not an audit log.
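In MariaDB terms the grant could look like this — the schema and table names here are illustrative, following the naming in the article:

```sql
-- Append-only: the app user can write and read the audit log, never alter it.
-- UPDATE and DELETE are simply never granted.
GRANT INSERT, SELECT ON botfarm.audit_log TO 'botfarm_app'@'%';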
**bcrypt with cost 12.** Slow enough to make brute force impractical, fast enough that legitimate logins don't feel slow on modern hardware.
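The JTI-revocation mechanism described above can be sketched with the standard library alone. A real deployment would use a JWT library such as PyJWT, and the revocation set would live in the database rather than in memory; all names here are illustrative:

```python
import base64
import hashlib
import hmac
import json
import time
import uuid

SECRET = b"demo-secret"   # stand-in for the server's signing key
revoked_jtis = set()      # BotFarm keeps these in the database


def _b64(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def _b64_decode(s: str) -> bytes:
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))


def issue_token(username: str) -> str:
    # Minimal HS256 JWT: header.payload.signature, each base64url-encoded
    header = _b64(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    now = int(time.time())
    payload = _b64(json.dumps(
        {"sub": username, "jti": str(uuid.uuid4()), "iat": now, "exp": now + 3600}
    ).encode())
    sig = hmac.new(SECRET, f"{header}.{payload}".encode(), hashlib.sha256).digest()
    return f"{header}.{payload}.{_b64(sig)}"


def revoke(token: str) -> None:
    # Logout: remember the token's JTI so it is rejected before expiry
    claims = json.loads(_b64_decode(token.split(".")[1]))
    revoked_jtis.add(claims["jti"])


def is_valid(token: str) -> bool:
    header, payload, sig = token.split(".")
    expected = hmac.new(SECRET, f"{header}.{payload}".encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(_b64(expected), sig):
        return False
    claims = json.loads(_b64_decode(payload))
    return claims["exp"] > time.time() and claims["jti"] not in revoked_jtis
```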
## The stack
| Layer | Technology |
|---|---|
| Backend | FastAPI 0.115 · Python 3.12 · Uvicorn |
| Frontend | React 18 · Vite · Chart.js · Monaco Editor |
| Database | MariaDB 11 |
| Containers | Docker Engine · Docker Compose v2 |
| Proxy | Nginx (TLS 1.2/1.3) |
| Docker security | tecnativa/docker-socket-proxy |
| Encryption | AES-256-GCM · bcrypt · JWT HS256 |
| Platform | AlmaLinux 10 LTS |
## What it doesn't do
BotFarm is infrastructure for managing bots, not a scraping framework. It doesn't handle proxy rotation, browser automation, or anti-detection. Those concerns live in the bot code itself — BotFarm just provides the container to run it in, the credentials to authenticate with, and the visibility to know when something breaks.