Table of Contents
- The Core Difference
- Self-Hosted n8n: Full Control, Full Responsibility
- n8n Cloud: Set It and Forget It
- Cost Breakdown: Where Your Money Goes
- Performance, Security, and Compliance
- Migration Paths and Lock-In
- Getting Started with Your Choice
The Core Difference
I've been running automation workflows for years now, and the self-hosted versus cloud decision feels bigger in 2026 than ever. Here's the honest truth: there's no universal winner. Your choice depends on whether you value control or convenience more—and how much operational overhead you're willing to carry.
Self-hosted n8n runs entirely on your own infrastructure. You manage the server, backups, updates, SSL certificates, and scaling.
n8n Cloud runs on n8n's infrastructure. You log in, build workflows, and n8n handles the rest.
The gap between these options has actually narrowed since 2024. Cloud has become more flexible, and self-hosted has become simpler. But the trade-offs are still real, and I want to walk you through them honestly.
Self-Hosted n8n: Full Control, Full Responsibility
I started self-hosting because I needed workflows that could run offline, required custom node integrations, and had compliance constraints. If any of those resonate with you, self-hosted might be your answer.
Setting Up Self-Hosted n8n
Let's deploy this on a Hetzner VPS or Contabo VPS. I'll use Docker Compose because it keeps dependencies isolated and makes updates painless.
First, spin up a VPS with at least 2GB RAM and Ubuntu 22.04. SSH in and run:
apt update
apt upgrade -y
apt install -y docker.io docker-compose curl wget
systemctl start docker
systemctl enable docker
usermod -aG docker $USER
newgrp docker
Now create your n8n directory and Docker Compose file:
mkdir -p ~/n8n-deployment
cd ~/n8n-deployment
nano docker-compose.yml
Paste this configuration:
version: '3.8'

services:
  postgres:
    image: postgres:15-alpine
    environment:
      POSTGRES_DB: n8n
      POSTGRES_USER: n8n
      POSTGRES_PASSWORD: your_secure_password_here
    volumes:
      - postgres_data:/var/lib/postgresql/data
    # No ports mapping here on purpose: n8n reaches Postgres over the
    # internal Docker network, so 5432 never needs to face the internet.
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U n8n"]
      interval: 10s
      timeout: 5s
      retries: 5
    restart: unless-stopped

  n8n:
    image: n8nio/n8n:latest   # consider pinning a version for predictable upgrades
    container_name: n8n
    environment:
      DB_TYPE: postgresdb
      DB_POSTGRESDB_HOST: postgres
      DB_POSTGRESDB_USER: n8n
      DB_POSTGRESDB_PASSWORD: your_secure_password_here
      DB_POSTGRESDB_DATABASE: n8n
      N8N_HOST: your_domain.com
      N8N_PORT: 5678
      N8N_PROTOCOL: https
      NODE_ENV: production
      WEBHOOK_URL: https://your_domain.com/
      GENERIC_TIMEZONE: UTC
    ports:
      - "127.0.0.1:5678:5678"   # loopback only; the public entry point is Nginx
    volumes:
      - n8n_data:/home/node/.n8n
    depends_on:
      postgres:
        condition: service_healthy
    restart: unless-stopped

  nginx:
    image: nginx:alpine
    container_name: n8n_nginx
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
      - /etc/letsencrypt:/etc/letsencrypt:ro   # certs issued by Certbot on the host
    depends_on:
      - n8n
    restart: unless-stopped

volumes:
  postgres_data:
  n8n_data:
Create your Nginx reverse proxy config:
nano nginx.conf
Add this:
events {
    worker_connections 1024;
}

http {
    upstream n8n {
        server n8n:5678;
    }

    server {
        listen 80;
        server_name your_domain.com;
        return 301 https://$server_name$request_uri;
    }

    server {
        listen 443 ssl http2;
        server_name your_domain.com;

        ssl_certificate /etc/letsencrypt/live/your_domain.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/your_domain.com/privkey.pem;
        ssl_protocols TLSv1.2 TLSv1.3;
        ssl_ciphers HIGH:!aNULL:!MD5;
        ssl_prefer_server_ciphers on;

        client_max_body_size 50M;

        location / {
            proxy_pass http://n8n;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            # Long timeouts keep the editor's websocket connections alive
            proxy_read_timeout 3600s;
            proxy_send_timeout 3600s;
        }
    }
}
For SSL certificates, I'll use Certbot with Let's Encrypt. First, point your domain's DNS A record at the VPS IP — Let's Encrypt validates over HTTP, so the name has to resolve before you can request a certificate. Then install Certbot and issue the cert in standalone mode. Do this before starting the containers, while port 80 is still free:
apt install -y certbot
certbot certonly --standalone -d your_domain.com --email your_email@example.com --agree-tos --non-interactive
Now start the containers:
docker-compose up -d
docker-compose logs -f n8n
Once the logs show the editor is ready, you're up. Visit https://your_domain.com, set your admin credentials, and you're in.
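Let's Encrypt certificates expire every 90 days, so renewal needs automating. A cron entry along these lines handles it — the paths assume this guide's layout as root, so adjust them for your setup. Standalone renewal needs port 80, which is why the hooks stop Nginx briefly:

```
# /etc/cron.d/certbot-renew (hypothetical path) — weekly renewal attempt; certbot only renews certs near expiry
0 3 * * 1 root certbot renew --standalone --pre-hook "docker-compose -f /root/n8n-deployment/docker-compose.yml stop nginx" --post-hook "docker-compose -f /root/n8n-deployment/docker-compose.yml start nginx"
```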
💡 Fast-Track Your Project: Don't want to configure this yourself? I build custom n8n pipelines and bots. Message me with code SYS3-HUGO.
Self-Hosted Wins
Offline workflows: Your automations don't depend on external uptime.
Custom nodes: Build integrations for internal tools or legacy systems. Modify the n8n codebase if needed.
Data residency: All data stays on your server. Huge for GDPR, HIPAA, or SOC 2 compliance.
No execution limits: Run as many parallel workflows as your hardware can handle. No throttling, no surprise bills when usage spikes.
Unlimited storage: Store execution history forever. I have years of logs that help me debug production issues.
Cost at scale: If you're running 100+ workflows daily, self-hosted becomes cheaper than cloud per-execution pricing.
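To make the "cheaper at scale" claim concrete, here's a rough break-even sketch. The per-execution cloud rate below is a made-up figure for illustration, not n8n's actual pricing:

```shell
# Hypothetical break-even: flat VPS cost vs. an assumed per-execution cloud rate
VPS_MONTHLY_USD=20        # self-hosted cost is flat regardless of volume
CLOUD_CENTS_PER_EXEC=1    # assumption: $0.01/execution (illustrative only)
BREAK_EVEN_EXECS=$((VPS_MONTHLY_USD * 100 / CLOUD_CENTS_PER_EXEC))
echo "break-even: ${BREAK_EVEN_EXECS} executions/month"
```

At roughly 2,000 executions a month under that assumption, the flat VPS wins — and 100+ daily workflows blows past that easily.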
Self-Hosted Costs
- VPS: $10–20/month for 2GB RAM (Hetzner, Contabo, DigitalOcean)
- Domain: ~$10/year via Namecheap
- Time: Setup is 30 minutes. Maintenance (updates, backups, monitoring) is 2–3 hours monthly
- Total: ~$130–250/year in infrastructure + your labor
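The yearly range is just the monthly VPS spend times twelve plus the domain:

```shell
# Annual infrastructure cost: VPS at $10-20/month plus a ~$10/year domain
VPS_LOW=10; VPS_HIGH=20; DOMAIN_YEAR=10
LOW=$((VPS_LOW * 12 + DOMAIN_YEAR))
HIGH=$((VPS_HIGH * 12 + DOMAIN_YEAR))
echo "infra total: \$${LOW}-\$${HIGH}/year"
```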
Self-Hosted Responsibilities
You own:
- Server monitoring and uptime (I use Uptime Robot, but that's extra)
- SSL certificate renewal (I automate this with certbot)
- Database backups (you must do this; n8n doesn't auto-backup your data)
- Security patching (Docker images need updates)
- Scaling if load increases
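The backup item deserves a concrete starting point. Here's a minimal nightly dump script, assuming the compose file lives at `~/n8n-deployment` and the database service is named `postgres` as in this guide — adjust both for your setup, and run it from cron:

```shell
#!/bin/sh
# Nightly Postgres dump for the self-hosted n8n stack sketched above.
# Paths and service name are assumptions from this guide's layout.
COMPOSE_FILE="$HOME/n8n-deployment/docker-compose.yml"
BACKUP_DIR="$HOME/n8n-backups"
STAMP=$(date +%Y%m%d-%H%M%S)
mkdir -p "$BACKUP_DIR"
# Dump the n8n database and compress it with a timestamped name
docker-compose -f "$COMPOSE_FILE" exec -T postgres \
  pg_dump -U n8n n8n | gzip > "$BACKUP_DIR/n8n-$STAMP.sql.gz"
# Keep only the 14 most recent dumps
ls -1t "$BACKUP_DIR"/n8n-*.sql.gz 2>/dev/null | tail -n +15 | xargs -r rm
```

Copy the resulting files off the VPS too — a backup that lives on the same disk as the database isn't really a backup.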
n8n Cloud: Set It and Forget It
I use n8n Cloud for clients, prototypes, and workflows that don't need self-hosted's features. The appeal is simple: I don't think about infrastructure.
Getting Started with n8n Cloud
Sign up at n8n Cloud. The free tier includes:
- Up to 10 active workflows
- 400 executions/month
- 30-day execution history
- One user account
- All node types and basic auth methods
Here's what you get out of the box:
{
  "tier": "Free",
  "workflows": 10,
  "monthly_executions": 400,
  "execution_history_days": 30,
  "users": 1,
  "support": "Community",
  "uptime_sla": "No SLA",
  "backups": "Automatic (14 days)"
}
Paid plans start at $20/month (Professional) and scale to $490/month (Enterprise). Higher tiers add:
- More workflows and executions
- Longer history retention
- Multiple users and teams
- Priority support
- Custom domain for webhooks
n8n Cloud Strengths
Zero DevOps: I literally log in and build. Updates happen automatically. Zero downtime.
Multi-user teams: Invite colleagues, assign roles, collaborate on workflows.
Built-in monitoring: Execution logs, error alerts, performance dashboards.
Managed backups: n8n keeps 14 days of automatic backups. No manual work.
Webhook URLs out of the box: When you create a webhook trigger, n8n gives you a URL like https://n8n-instance.n8n.cloud/webhook/abc123. No custom domain setup needed.
Global CDN: Webhooks and API calls route through optimized infrastructure.
Compliance features: SSO, audit logs, IP whitelisting on higher tiers.
I also appreciate that n8n Cloud is great for building workflows that replace expensive SaaS—you can read my guide on 5 n8n workflows that replace $200/month in SaaS tools for practical examples.
n8n Cloud Trade-Offs
Execution limits: Free tier caps at 400/month. That's ~13 per day. If your workflow triggers once per minute, you'll hit this in hours.
Vendor lock-in: Exporting workflows is straightforward (JSON), but if n8n changes pricing or shuts down (unlikely, but theoretically possible), you're dependent on them.
Data residency: Your data lives on n8n's servers (AWS). Not ideal if you have strict GDPR data-residency requirements.
No offline mode: If n8n Cloud is down, your webhooks won't process.
Shared infrastructure: In theory, noisy neighbors could affect your performance (though n8n isolates resources well).
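The free-tier arithmetic is worth spelling out, because 400 executions disappears fast on any timed trigger:

```shell
# How far 400 executions/month stretches at common trigger rates
CAP=400
PER_DAY=$((CAP / 30))            # ~13 executions/day on average
ONCE_A_MINUTE_HOURS=$((CAP / 60)) # a once-per-minute trigger: 400 runs = 400 minutes
echo "daily budget: ${PER_DAY}; once-a-minute trigger exhausts the cap in ~${ONCE_A_MINUTE_HOURS}h"
```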