Kamal (from Basecamp/37signals) deploys Docker containers to any server. No Kubernetes. No cloud vendor lock-in. Just SSH and Docker.
`config/deploy.yml`:

```yaml
service: scraping-app
image: username/scraping-app

servers:
  web:
    - 192.168.1.100
    - 192.168.1.101
  worker:
    hosts:
      - 192.168.1.102
    cmd: node worker.js

registry:
  server: ghcr.io
  username: username
  password:
    - KAMAL_REGISTRY_PASSWORD

env:
  clear:
    NODE_ENV: production
    PORT: 3000
  secret:
    - DATABASE_URL
    - API_KEY

traefik:
  options:
    publish:
      - "443:443"
    volume:
      - "/letsencrypt:/letsencrypt"
  args:
    entryPoints.websecure.address: ":443"
    certificatesResolvers.letsencrypt.acme.email: "admin@example.com"
    certificatesResolvers.letsencrypt.acme.storage: "/letsencrypt/acme.json"

accessories:
  db:
    image: postgres:16
    host: 192.168.1.100
    port: 5432
    env:
      clear:
        POSTGRES_DB: scraping
      secret:
        - POSTGRES_PASSWORD
    volumes:
      - /var/lib/postgresql/data:/var/lib/postgresql/data
  redis:
    image: redis:7-alpine
    host: 192.168.1.100
    port: 6379
```
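Kamal builds your app image from a Dockerfile in the project root before pushing it to the registry. A minimal sketch for a Node app like the one configured above (the Node version and `server.js` entrypoint are illustrative assumptions, not anything Kamal dictates):

```dockerfile
# Minimal sketch of a Dockerfile for the scraping app (assumed layout)
FROM node:20-alpine

WORKDIR /app

# Copy manifests first so dependency layers cache across code changes
COPY package*.json ./
RUN npm ci --omit=dev

COPY . .

EXPOSE 3000
# Entrypoint is an assumption; match it to your app
CMD ["node", "server.js"]
```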
## Deploy Commands

```shell
# First deploy: sets up everything
kamal setup

# Deploy a new version
kamal deploy

# Rolling restart (zero downtime)
kamal app boot

# View logs
kamal app logs
kamal app logs --follow

# Run a command on the server
kamal app exec 'node scripts/migrate.js'

# Interactive console access
kamal app exec -i 'node'
```
## Zero-Downtime Deploys

Kamal uses Traefik as a reverse proxy. New container starts → health check passes → traffic switches → old container stops. Zero downtime.
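That health check is configurable in `deploy.yml`. A sketch assuming the app answers `200` on a `/up` endpoint (option names follow Kamal 1.x; verify against your installed version):

```yaml
healthcheck:
  path: /up        # endpoint that must return 200 before traffic switches
  port: 3000
  max_attempts: 7  # retries before the deploy is aborted
  interval: 20s
```

If the check never passes, the deploy stops and the old container keeps serving traffic.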
## Secrets Management

```shell
# Push secrets from .env to the servers
kamal env push
```

```shell
# .env file
KAMAL_REGISTRY_PASSWORD=ghp_xxx
DATABASE_URL=postgres://user:pass@db:5432/scraping
API_KEY=sk-xxx
```
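Since `.env` holds live credentials, make sure it never reaches version control:

```shell
# Keep the secrets file out of git
echo ".env" >> .gitignore
```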
## Multi-App Deploy

```yaml
# Deploy web + worker + cron
servers:
  web:
    - 192.168.1.100
  worker:
    hosts: [192.168.1.101]
    cmd: node worker.js
  cron:
    hosts: [192.168.1.101]
    cmd: node cron.js
```
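Roles can also carry their own env on top of the app-wide one, which is handy when worker and web need different tuning. A sketch (`WORKER_CONCURRENCY` is an illustrative app-level variable, not something Kamal defines):

```yaml
servers:
  worker:
    hosts: [192.168.1.101]
    cmd: node worker.js
    env:
      clear:
        WORKER_CONCURRENCY: "4"  # illustrative; read by your own worker code
```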
Deploying scraping infrastructure? My Apify tools run anywhere Docker runs.

Need a custom deployment? Email spinov001@gmail.com