I’ve always loved automating things in my workflows.
When I stumbled on n8n, I was honestly in awe. Suddenly all the ideas I had before—like updating Google Sheets, posting to social media, hosting my own APIs, syncing Google Drive to an S3 bucket, and a lot more—became realistic without needing to learn a new framework or library for every single task.
n8n makes it much easier to build small automations that actually ship and help you in day‑to‑day work, and that’s what made me fall in love with it.
So we’ll walk through how to deploy n8n on an AWS EC2 instance so you can start running your own automations on your own infrastructure.
Launching the Instance 🚀
Step 1: Launch an Instance
- Log in to your AWS Management Console.
- Navigate to the EC2 Dashboard.
- Click the Launch instance button.
Step 2: Name and OS Selection
- Name: Give your instance a recognizable name, such as n8n.
- Application and OS Images (AMI): Select Amazon Linux. The default "Amazon Linux 2023 AMI" is a great choice and is eligible for the Free Tier.
- Architecture: Select 64-bit (ARM) for better cost efficiency on AWS (Graviton processors offer better performance-per-dollar).
Step 3: Choose an Instance Type
Select an instance type that suits your needs. For this tutorial, we are using t4g.medium (ARM-based Graviton). Here's why t4g.medium is a cost-efficient choice for n8n:
- Better Performance-per-Dollar: AWS Graviton2 processors offer up to 40% better price-to-performance than comparable x86 instances
- Sufficient Resources: 2 vCPUs and 4GB RAM handle n8n with multiple workers smoothly without over-provisioning
- Burstable Performance: T4g instances include CPU credits, allowing you to absorb traffic spikes without paying for constant high usage
- Lower Costs: t4g.medium costs roughly 20% less than an equivalent t3.medium, with comparable or better performance
- Production-Ready: Unlike t4g.micro (limited CPU and memory), t4g.medium can sustain moderate workflows continuously
For comparison:
- t4g.micro: Limited for production (2 vCPUs, 1GB RAM; fine for light testing)
- t4g.medium: Recommended for small-to-medium deployments (2 vCPUs, 4GB RAM)
- t4g.large: For high-volume workflows (2 vCPUs, 8GB RAM)
Step 4: Key Pair
Under Key pair (login), select an existing key pair from the dropdown menu to ensure you can SSH into your server later. If you don't have one, click "Create new key pair".
Step 5: Network Settings (Security Groups)
This is a crucial step to ensure your instance is accessible.
- Under Network settings, choose Create security group.
- Ensure the following rules are checked/added:
| Rule Type | Port | Protocol | Source | Purpose |
|---|---|---|---|---|
| SSH | 22 | TCP | Your IP (0.0.0.0/0 for testing only) | Remote terminal access |
| HTTP | 80 | TCP | 0.0.0.0/0 | Web traffic (redirect to HTTPS) |
| HTTPS | 443 | TCP | 0.0.0.0/0 | Secure web traffic for n8n UI |
Step 6: Configure Storage
The default 8 GiB of gp3 storage is usually sufficient for a basic installation. You can leave this as is.
Step 7: Advanced Details - User Data (The Installation Script)
This is the most important part of the automation. Instead of manually installing software after the server boots, we will provide a script to do it automatically.
- Scroll down to the Advanced details section.
- Scroll to the very bottom to find the User data text field.
- Paste the following script. This script updates the system, installs Docker, sets up Docker Compose, and configures permissions.
#!/bin/bash
yum update -y
yum install -y git docker
# Install Docker Compose plugin (system-wide)
mkdir -p /usr/local/lib/docker/cli-plugins
curl -SL "https://github.com/docker/compose/releases/latest/download/docker-compose-linux-$(uname -m)" \
-o /usr/local/lib/docker/cli-plugins/docker-compose
chmod +x /usr/local/lib/docker/cli-plugins/docker-compose
# Enable and start Docker
systemctl enable docker
systemctl start docker
# Allow ec2-user to run docker without sudo
usermod -aG docker ec2-user
Highlight: The script above automates the installation of Git, Docker, and Docker Compose, saving you several manual command-line steps later.
Step 8: Launch
- Review your settings in the Summary panel on the right.
- Click Launch instance.
- Wait for the success message, then click the Instance ID to view your running server.
Connecting and Configuring 🔌
Now that your instance is running, we need to connect to it and download the necessary n8n configuration files.
Step 1: Connect to your Instance
- Select your running instance from the list.
- Click the Connect button at the top right of the console.
- Select the EC2 Instance Connect tab.
- Leave the default username as ec2-user.
- Click the orange Connect button. A new browser window will open with a terminal interface.
Step 2: Verify Installation
Once the terminal loads, verify that the installation script from Part 1 ran successfully by checking the Docker Compose version:
docker compose version
If this command returns a version number, your environment is ready.
Step 3: Download the n8n Setup
We will use a pre-configured setup from GitHub to get n8n running quickly.
- Clone the repository:

git clone https://github.com/coozgan/hosting-n8n-aws.git

- Navigate into the project directory:

cd hosting-n8n-aws
Deploying n8n with Workers 🐳
Now we'll set up a production-ready n8n deployment with distributed workers, Redis queuing, PostgreSQL persistence, and Caddy reverse proxy for SSL/TLS.
Step 1: Review Configuration
Review the architecture. This setup includes:
- n8n Main: Serves the editor UI and handles workflow triggers
- n8n Workers: Execute workflows asynchronously from a job queue
- Redis: Manages job distribution and retries
- PostgreSQL: Stores workflows, executions, and user data
- Caddy: Reverse proxy with automatic SSL/TLS certificates
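For orientation, the service topology can be sketched as a minimal compose file (illustrative only; the repository's actual docker-compose.yml is more complete, and its service names and versions may differ):

```yaml
services:
  caddy:            # reverse proxy, terminates TLS
    image: caddy:2
    ports: ["80:80", "443:443"]
  postgres:         # workflow, execution, and user storage
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
  redis:            # job queue backend
    image: redis:7
  n8n:              # main instance: UI, API, triggers
    image: n8nio/n8n
    environment:
      EXECUTIONS_MODE: queue
      QUEUE_BULL_REDIS_HOST: redis
  n8n-worker:       # pulls jobs off the queue and executes them
    image: n8nio/n8n
    command: worker
```

The main instance enqueues executions into Redis, and workers pick them up, which is what lets you scale by adding worker containers later.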
Step 2: Configure Environment Variables
- Create and edit the .env file:

cp .env-example .env
nano .env

a. Create a DuckDNS subdomain: go to https://www.duckdns.org and sign in with GitHub, Google, etc.

b. On the main page, choose a subdomain name (for example, myn8n) and click add domain.

c. Find your public IP address: head back to Instance Connect, run the command below, and copy the address it prints:

curl ifconfig.me

d. Back on the duckdns.org page, paste that IP address next to your subdomain and update it.

- Add the following essential variables:

# Domain Configuration
DOMAIN=your-domain.com # use localhost if you don't have a domain

# n8n Encryption (generate a strong random key)
N8N_ENCRYPTION_KEY=CHOOSE_YOUR_OWN_KEY

# PostgreSQL Database
POSTGRES_PASSWORD=CHOOSE_YOUR_OWN_PASSWORD

# Timezone
GENERIC_TIMEZONE=America/New_York

- Press Ctrl+X to save, then Y and Enter to exit nano.

Important: Keep your N8N_ENCRYPTION_KEY and POSTGRES_PASSWORD safe. If you lose them, you won't be able to recover your workflows or data.
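If you need a quick way to generate a strong key, openssl (preinstalled on Amazon Linux) works well:

```shell
# Print a 32-byte random key as 64 hex characters,
# suitable for N8N_ENCRYPTION_KEY
openssl rand -hex 32
```

Copy the output into your .env before the first start; changing the key later makes previously saved credentials unreadable.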
Step 3: Start the Services

- Bring all services up in the background:

docker compose up -d
- Verify all containers are running:

docker compose ps

- Check the logs to ensure everything started correctly:

docker compose logs -f
Step 4: Access Your n8n Instance

Once services are healthy, navigate to your n8n instance in a web browser:

- If you configured a domain: https://your-domain.com
- If testing without a domain: http://<instance_public_ipaddress>

You should see the n8n login screen. Create your first user account.
Congratulations! You now have your own n8n instance up and running.
Architecture Overview
Scaling and Best Practices ⚡
Scaling Horizontally (within the instance)
To add more worker instances, update your docker-compose.yml:
n8n-worker-2:
extends: n8n-worker
container_name: n8n-worker-2
... #copy the rest of the code
n8n-worker-3:
extends: n8n-worker
container_name: n8n-worker-3
... #copy the rest of the code
Then restart:
docker compose up -d
Performance Optimization Tips
- Adjust Worker Count: Start with 2-3 workers and monitor CPU/memory usage:

docker stats

- Database Tuning: Prune old execution logs to keep PostgreSQL performant:
  - The docker-compose includes EXECUTIONS_DATA_PRUNE=true
  - It keeps 7 days of history by default (EXECUTIONS_DATA_MAX_AGE=168)
- Redis Configuration: The Redis instance has a 512MB memory limit with an LRU eviction policy. Monitor it with:

docker exec redis redis-cli info stats
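If you want more aggressive pruning, these are the relevant n8n environment variables to set in your .env (the values below are examples, not the repo's defaults):

```
EXECUTIONS_DATA_PRUNE=true
# Keep 3 days of history instead of the default 7 (value is in hours)
EXECUTIONS_DATA_MAX_AGE=72
# Hard cap on the number of stored executions
EXECUTIONS_DATA_PRUNE_MAX_COUNT=10000
```

Restart the stack after changing these so the main instance and workers pick them up.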
Security Best Practices
- Enable User Management:
  - The setup has N8N_USER_MANAGEMENT_DISABLED=false by default
  - Create separate user accounts for team members
- Restrict SSH Access:
  - Update your security group to only allow SSH from your IP
  - In AWS Console: Security Groups > Inbound Rules > Edit SSH rule
- Backup Strategy:
  - Regularly back up your PostgreSQL database:

docker exec postgres pg_dump -U postgres postgres > backup.sql

  - Back up your n8n data volume:

docker run --rm -v n8n-with-workers_n8n-data:/data \
  -v $(pwd):/backup alpine tar czf /backup/n8n-backup.tar.gz \
  -C /data .

- SSL/TLS Certificates:
  - Caddy automatically obtains and renews Let's Encrypt certificates
  - Certificates are stored in the caddy-data volume
Troubleshooting
Issue: Cannot Connect to n8n

Solution:
1️⃣ Check if all services are running:
docker compose ps
2️⃣ Check Caddy logs for SSL certificate issues:
docker compose logs caddy
3️⃣ Ensure your security group allows HTTP (port 80) and HTTPS (port 443)
Issue: Workers Not Processing Jobs

Solution:
1️⃣ Verify Redis is healthy:
docker exec redis redis-cli ping
2️⃣ Check worker logs:
docker compose logs n8n-worker
3️⃣ Ensure EXECUTIONS_MODE=queue is set in your environment
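For reference, queue mode is wired up with environment variables like these (names from n8n's documentation; the repo's compose file may already set them for you):

```
EXECUTIONS_MODE=queue
QUEUE_BULL_REDIS_HOST=redis
QUEUE_BULL_REDIS_PORT=6379
```

Both the main instance and every worker need the same values, or jobs will be enqueued but never picked up.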
Issue: Database Connection Errors

Solution:
1️⃣ Verify PostgreSQL is running and healthy:
docker compose logs postgres
2️⃣ Check database credentials match in .env:
docker exec postgres psql -U postgres -d postgres -c "\dt"
3️⃣ Ensure POSTGRES_PASSWORD in your .env is correct
Issue: Out of Disk Space

Solution:
1️⃣ Check disk usage:
docker system df
2️⃣ Prune unused Docker data (note: docker volume prune deletes any named volume not attached to a container, so make sure your stack is running first):

docker system prune -a
docker volume prune
3️⃣ Clean up old execution logs in n8n UI: Settings → Executions → Delete old executions
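To keep disk usage from creeping back up, you could schedule a weekly prune with cron (a sketch; the file path is hypothetical, and --volumes is deliberately omitted so named data volumes are never deleted):

```
# /etc/cron.d/docker-prune -- every Sunday at 03:00
0 3 * * 0 root docker system prune -af > /var/log/docker-prune.log 2>&1
```

The -f flag skips the confirmation prompt, which is required for unattended runs.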
Monitoring and Maintenance
Monitor Container Health
# Real-time resource usage
docker stats
# View service logs
docker compose logs -f [service-name]
# Check specific service health
docker compose ps
Regular Maintenance Tasks
Weekly:
- Monitor disk usage: df -h
- Check error logs: docker compose logs --since 1w | grep -i error
Monthly:
- Backup databases and volumes
- Review user access and remove inactive accounts
- Update Docker images:
docker compose pull && docker compose up -d
Quarterly:
- Security audit of your workflows
- Review and optimize your AWS security groups
- Test disaster recovery (restore from backup)
Cost Optimization on AWS
- Instance Type: t4g.medium provides a good balance of cost and performance for n8n deployments
  - ARM-based Graviton2 processors (up to 40% better price-to-performance than comparable x86 instances)
  - Sufficient resources for multiple workers without over-provisioning
  - Monthly on-demand cost: roughly $25 in us-east-1 (vs roughly $30 for an equivalent t3.medium)
- Data Transfer: Minimize data egress by keeping compute and data in the same region
- RDS Alternative: For high-volume deployments, consider AWS RDS for PostgreSQL instead of self-hosted
- CloudWatch: Set up alarms for unusual activity
- Auto-Scaling: Use EC2 Auto Scaling Groups for multiple instances in production
Next Steps: Make It Production-Ready
This guide focuses on getting n8n running quickly on EC2 so you can start experimenting. For a more production-ready setup, you’ll likely want to:
- Use a managed, serverless deployment (ECS Fargate, ElastiCache, and RDS)
- Use a custom domain with Route 53 or another DNS provider
- Use an Elastic Load Balancer and Auto Scaling groups with ECS clusters
- Configure environment variables for n8n (like WEBHOOK_URL, credentials, etc.)
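As an example of that last point, the webhook-related variables look like this (variable names from n8n's documentation; the domain is a placeholder):

```
N8N_HOST=your-domain.com
N8N_PROTOCOL=https
WEBHOOK_URL=https://your-domain.com/
```

Without a correct WEBHOOK_URL, webhook nodes register against the instance's internal address and external services can't reach them.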
Those topics can be a follow-up post in this series, where the EC2 instance you just created becomes the base for a more secure, robust n8n deployment.
Conclusion 🎉
You now have a capable n8n automation platform running on AWS with:
- ✅ Cost-effective hosting for small-to-medium automations
- ✅ Scalable worker architecture on ARM-based Graviton processors
- ✅ Persistent data storage with PostgreSQL
- ✅ Automatic SSL/TLS encryption via Caddy
- ✅ Job queue system with Redis for reliable executions
- ✅ Easy horizontal scaling within your instance


