Flavius Dinu

Originally published at Medium

Self-Host n8n with Docker: Should You Do It?

Let’s be honest: everyone is using n8n to automate their workflows. And I’m not going to tease you with the answer: yes, absolutely, you should self-host n8n with Docker.

However, if you’re unfamiliar with n8n, I've got you covered.

Think of n8n as the Swiss Army knife of workflow automation. Instead of wrestling with complex CI/CD tools for every little automation task, you get a visual, node-based interface that actually makes sense. Your team can build workflows without needing a PhD in DevOps. No more fighting with overcomplicated tools or reinventing the wheel every time you need to automate something.

In this guide, I’ll walk you through everything you need to know about getting n8n up and running with Docker. We’ll start simple with a local Docker installation, then work our way up to deploying n8n on an AWS EC2 instance using Terraform (you can use OpenTofu as well).

TL;DR? Check the video instead:

Prerequisites

For this tutorial you will need:

  • Docker & Docker Compose
  • Basic knowledge of the CLI and Git
  • An AWS account with permissions to create EC2 instances, security groups, and Route53 records
  • Terraform or OpenTofu

Local n8n installation with Docker Compose

Let’s start simple with a local installation using Docker Compose. For that, you can use the Self-hosted AI Starter Kit, built by the n8n team.

You will first need to clone the repository by running:

git clone git@github.com:n8n-io/self-hosted-ai-starter-kit.git

Next, go to the directory containing the self-hosted-ai-starter-kit repository and copy .env.example to a new file called .env. You will then need to edit this .env file with values that make sense for your environment, as in the sketch below.
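At the time of writing, the starter kit mostly asks for Postgres credentials and a couple of n8n secrets, but the exact variable names may differ between versions, so treat this as a rough sketch:

cd self-hosted-ai-starter-kit
cp .env.example .env
# Edit .env and set values for your environment, for example:
# POSTGRES_USER=root
# POSTGRES_PASSWORD=<a strong password>
# POSTGRES_DB=n8n
# N8N_ENCRYPTION_KEY=<a long random string>
# N8N_USER_MANAGEMENT_JWT_SECRET=<another long random string>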

Depending on your operating system and your GPU, you can deploy n8n in multiple ways:

Nvidia GPU

docker compose --profile gpu-nvidia up

AMD GPU

docker compose --profile gpu-amd up

Everyone else:

docker compose --profile cpu up

As I’m on a Mac with Apple Silicon, I’ll use the last command, and this is what the output looks like in the end:

Attaching to n8n, n8n-import, ollama, ollama-pull-llama, qdrant, postgres-1
postgres-1 | 
postgres-1 | PostgreSQL Database directory appears to contain a database; Skipping initialization
postgres-1 | 
qdrant | [qdrant ASCII art banner]
qdrant | 
qdrant | Version: 1.14.1, build: 530430fa
qdrant | Access web UI at http://localhost:6333/dashboard
qdrant | 
qdrant | 2025-07-07T18:56:16.373057Z INFO storage::content_manager::consensus::persistent: Loading raft state from ./storage/raft_state.json    
ollama | time=2025-07-07T18:56:16.383Z level=INFO source=routes.go:1235 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
ollama | time=2025-07-07T18:56:16.386Z level=INFO source=images.go:476 msg="total blobs: 6"
ollama | time=2025-07-07T18:56:16.386Z level=INFO source=images.go:483 msg="total unused blobs removed: 0"
ollama | time=2025-07-07T18:56:16.388Z level=INFO source=routes.go:1288 msg="Listening on [::]:11434 (version 0.9.5)"
ollama | time=2025-07-07T18:56:16.388Z level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
ollama | time=2025-07-07T18:56:16.394Z level=INFO source=gpu.go:377 msg="no compatible GPUs were discovered"
ollama | time=2025-07-07T18:56:16.395Z level=INFO source=types.go:130 msg="inference compute" id=0 library=cpu variant="" compute="" driver=0.0 name="" total="7.7 GiB" available="4.5 GiB"
postgres-1 | 2025-07-07 18:56:16.415 UTC [1] LOG: starting PostgreSQL 16.9 on aarch64-unknown-linux-musl, compiled by gcc (Alpine 14.2.0) 14.2.0, 64-bit
postgres-1 | 2025-07-07 18:56:16.415 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
postgres-1 | 2025-07-07 18:56:16.415 UTC [1] LOG: listening on IPv6 address "::", port 5432
postgres-1 | 2025-07-07 18:56:16.416 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
postgres-1 | 2025-07-07 18:56:16.421 UTC [29] LOG: database system was shut down at 2025-07-06 11:59:00 UTC
postgres-1 | 2025-07-07 18:56:16.425 UTC [1] LOG: database system is ready to accept connections
qdrant | 2025-07-07T18:56:16.431182Z INFO qdrant: Distributed mode disabled    
qdrant | 2025-07-07T18:56:16.431604Z INFO qdrant: Telemetry reporting enabled, id: d33bc0ef-1661-4d58-8b03-f1a751fc0c2f    
qdrant | 2025-07-07T18:56:16.432002Z INFO qdrant: Inference service is not configured.    
qdrant | 2025-07-07T18:56:16.577110Z INFO qdrant::actix: TLS disabled for REST API    
qdrant | 2025-07-07T18:56:16.577167Z INFO qdrant::actix: Qdrant HTTP listening on 6333    
qdrant | 2025-07-07T18:56:16.577376Z INFO actix_server::builder: starting 5 workers
qdrant | 2025-07-07T18:56:16.577395Z INFO actix_server::server: Actix runtime found; starting in Actix runtime
qdrant | 2025-07-07T18:56:16.577407Z INFO actix_server::server: starting service: "actix-web-service-0.0.0.0:6333", workers: 5, listening on: 0.0.0.0:6333
qdrant | 2025-07-07T18:56:16.607417Z INFO qdrant::tonic: Qdrant gRPC listening on 6334    
qdrant | 2025-07-07T18:56:16.607430Z INFO qdrant::tonic: TLS disabled for gRPC API    
ollama | [GIN] 2025/07/07 - 18:56:19 | 200 | 802.291µs | 172.24.0.5 | HEAD "/"
ollama | [GIN] 2025/07/07 - 18:56:20 | 200 | 690.502584ms | 172.24.0.5 | POST "/api/pull"
pulling manifest 
ollama-pull-llama | pulling dde5aa3fc5ff: 100% ▕██████████████████▏ 2.0 GB                         
ollama-pull-llama | pulling 966de95ca8a6: 100% ▕██████████████████▏ 1.4 KB                         
ollama-pull-llama | pulling fcc5a6bec9da: 100% ▕██████████████████▏ 7.7 KB                         
ollama-pull-llama | pulling a70ff7e570d9: 100% ▕██████████████████▏ 6.0 KB                         
ollama-pull-llama | pulling 56bb8bd477a5: 100% ▕██████████████████▏ 96 B                         
ollama-pull-llama | pulling 34bb5ab01051: 100% ▕██████████████████▏ 561 B                         
ollama-pull-llama | verifying sha256 digest 
ollama-pull-llama | writing manifest 
ollama-pull-llama | success 

Now, you can access n8n directly on http://localhost:5678.
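If you’d rather not keep a terminal attached to the stack, you can start it in the background with Docker Compose’s -d flag and stop it later with down (using the same profile you started with):

docker compose --profile cpu up -d
docker compose --profile cpu down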

Next, you can run the demo workflow that just chats with Ollama:

n8n makes it easy to import workflows directly from JSON files, so you can head over to this link and choose one of the workflows.

I’ve chosen the first one in the list and then clicked on Use for Free:

Then you get a couple of different options:

Select Copy template to clipboard [JSON] and save it to a file on your computer.

Next, go to your n8n instance and click on create workflow:

Now, click the three dots in the top right corner and select Import from File:

Select the file you saved earlier, and the workflow gets imported automatically:
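If you prefer the command line over the UI, n8n also ships an import:workflow CLI command. A minimal sketch, assuming your exported file is called my-workflow.json (the service name n8n comes from the compose output above; the file name is a placeholder):

# Copy the JSON file into the running n8n container
docker compose cp my-workflow.json n8n:/tmp/my-workflow.json
# Import it with the n8n CLI inside the container
docker compose exec n8n n8n import:workflow --input=/tmp/my-workflow.json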

Using Terraform to deploy n8n on AWS EC2 with Docker

I’ve put together a simple Terraform configuration that lets you deploy n8n on AWS without much hassle, making it easy to keep the server running nonstop for all the workflows you need.

You can find the code here; make sure you clone or fork it.

The configuration is pretty straightforward to use; you will just need to provide a couple of variables via terraform.tfvars, environment variables, or a secrets management solution to ensure everything goes smoothly:

public_key_path                
create_dns_record              
domain_name                    
key_name                       
basic_auth_password            
n8n_encryption_key             
n8n_user_management_jwt_secret 


By setting create_dns_record to true, you will be able to access n8n directly from your domain (right now, only over HTTP).

Also, public_key_path should be the path to your SSH public key, so make sure you have access to the corresponding private key as well in order to connect to the instance via SSH (you don’t really need to connect to it, but I’ll show you how to track the deployment if you do).

By default, I’m using a t2.micro instance, so the AI capabilities won’t work due to RAM limitations, but you can still import other workflows into n8n as shown above.

NOTE: The solution can be extended on demand to get closer to production-ready. You can use an infrastructure orchestration platform or a generic CI/CD tool to help with the deployment. You should also implement remote state if you want to use it in production.
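If you don’t want to keep secrets in a terraform.tfvars file, you can export them as environment variables instead, since Terraform picks up any variable prefixed with TF_VAR_. A minimal sketch with placeholder values (check the repository’s variable definitions for the exact names and types):

export TF_VAR_public_key_path="$HOME/.ssh/id_rsa.pub"
export TF_VAR_key_name="n8n-key"
export TF_VAR_create_dns_record=true
export TF_VAR_domain_name="example.com"
export TF_VAR_basic_auth_password="<a strong password>"
export TF_VAR_n8n_encryption_key="<a long random string>"
export TF_VAR_n8n_user_management_jwt_secret="<another long random string>"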

As soon as you populate the variables, you can run:

terraform init

Initializing the backend...
Initializing provider plugins...
- Finding latest version of hashicorp/aws...
- Installing hashicorp/aws v6.2.0...
- Installed hashicorp/aws v6.2.0 (signed by HashiCorp)

...

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

After you initialize the configuration, you can run:

terraform plan

Plan: 4 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + n8n_dns_record = (known after apply)
  + n8n_instance_public_ip = (known after apply)

Four resources will be created: an EC2 instance, a security group, a key pair, and, optionally, a DNS record. The n8n installation itself is handled by the cloud-init script, which can be found here.
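If you’re curious what such a script roughly does, here is a heavily simplified sketch of the general shape: install Docker, then run the n8n container with the secrets passed in as environment variables. The actual script in the repository is the source of truth; the package manager, image name, environment variables, and volume below are assumptions for illustration only:

#!/bin/bash
# Illustrative sketch -- not the exact script from the repository
# Install and start Docker (Amazon Linux style)
yum install -y docker
systemctl enable --now docker
# Run n8n, persisting its data and passing in the secrets provided via Terraform
docker run -d --name n8n --restart unless-stopped -p 5678:5678 \
  -e N8N_ENCRYPTION_KEY="<n8n_encryption_key>" \
  -e N8N_USER_MANAGEMENT_JWT_SECRET="<n8n_user_management_jwt_secret>" \
  -v n8n_data:/home/node/.n8n \
  n8nio/n8n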

As soon as you’re happy with the plan you can run:

terraform apply

After a couple of minutes, your resources will be up and running, but you will still have to be patient for a couple more minutes while the cloud-init script finishes the deployment.

If you are impatient, as I certainly am, you can check in real time what the cloud-init script is doing by SSH-ing into the instance and running:

tail -f /var/log/cloud-init-output.log

As soon as the script finishes, you will be able to access n8n via:

  • ec2_public_ip:5678
  • your_record_name:5678 (if you set up a DNS record)

You can run terraform output to see the defined outputs.
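For example, to grab just the public IP (the output name is taken from the plan above):

terraform output -raw n8n_instance_public_ip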

Key points

If you’re serious about automating your workflows without any limitations, self-hosting n8n with Docker is the way to go.

The process is smooth, scalable, and entirely under your control.

With this setup, you’re not only gaining flexibility, you’re also building automation on your own terms, which, let’s be honest, is fun.

So yes, you should self-host n8n with Docker, and now you know exactly how.
