As a tech enthusiast, I've always been fascinated by the potential of AI automation. This week, I decided to roll up my sleeves and build something powerful: a fully self-hosted AI automation lab, right from my Windows HP PC.
Using the n8n Self-hosted AI Starter Kit, I configured a local stack that combines:
🧠 Ollama, which powers the local AI engine by running large language models like LLaMA 3 directly on my machine: fast, private, and cloud-free
🗄️ PostgreSQL, serving as n8n's database to reliably store workflows, credentials, and execution logs
⚙️ n8n as the automation engine and workflow editor
🧰 Docker, Git Bash, and VS Code to orchestrate and customize the entire environment
🔍 Qdrant, a vector database for semantic search and memory
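One detail worth knowing before the setup steps: inside the Docker Compose network, these services reach each other by service name rather than localhost. The names and variables below are illustrative only; the authoritative values live in the kit's docker-compose.yml:

# Illustrative in-network endpoints (check docker-compose.yml for the real service names)
OLLAMA_BASE_URL=http://ollama:11434    # Ollama's default API port
QDRANT_URL=http://qdrant:6333          # Qdrant's default HTTP port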
Here's why self-hosting n8n caught my attention:
- No Subscription Fees: Enjoy the features of n8n without recurring costs.
- Total Control: Manage your data, workflows, and security with ease.
- Scalability: Run unlimited workflows without restrictions.
- Customization: Tailor n8n to fit your specific needs with custom nodes and integrations.
In this article, I'll share my experience setting up my self-hosted n8n AI automation lab and walk you through the process.
✅ Prerequisites
Make sure you have:
- Docker and Docker Compose v2+
- Git
- Terminal access
- Web browser
- At least 4GB RAM (8GB+ recommended) and 10GB+ disk space
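Before going further, it doesn't hurt to confirm the toolchain from Git Bash. These are standard version checks, nothing specific to the starter kit:

docker --version
docker compose version   # should report Compose v2.x
git --version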
My Setup and Tools
- OS: Windows 10/11 PC
- Terminal: Git Bash
- Editor: Visual Studio Code
- Docker: via Docker Desktop
- Browser: Chrome (for n8n dashboard)
- Hardware: Intel HD Graphics 520 (CPU-only)
Configuration Guide
Step 1: Clone the repo
git clone https://github.com/n8n-io/self-hosted-ai-starter-kit.git
cd self-hosted-ai-starter-kit
On the first run, Docker pulls the required images, creates the volumes and network, and spins up the containers. The logs show the database migrations running and the n8n-import step applying tables and migrations.
Step 2: Configure .env
I opened .env in VS Code and added my secrets (dummy values are shown for anything sensitive):
POSTGRES_USER="root"
POSTGRES_PASSWORD="examplepass123"
POSTGRES_DB="n8n"
N8N_ENCRYPTION_KEY="examplekey123"
N8N_USER_MANAGEMENT_JWT_SECRET="examplejwtsecret123"
N8N_DEFAULT_BINARY_DATA_MODE="filesystem"
N8N_RUNNERS_ENABLED=true
N8N_BLOCK_ENV_ACCESS_IN_NODE=false
N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS=true
💡 Pro tip: Always quote your secrets and escape $ signs, or Docker Compose's variable interpolation will misread them.
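As an illustration of that tip (the password below is made up): Compose treats a single $ as the start of a variable reference, and the documented escape is to double it.

# $$ yields a literal $ once Docker Compose interpolates the value
POSTGRES_PASSWORD="pa$$w0rd-example"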
Step 3: Start the Setup
docker compose --profile cpu up
Re-running docker compose --profile cpu up later confirmed that Postgres, Qdrant, Ollama, and n8n all came back up cleanly.
This spun up:
- n8n
- PostgreSQL
- Qdrant
- Ollama
- n8n-import
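To confirm everything is actually healthy rather than just started, two standard Compose commands are enough:

docker compose ps            # all services should show a running/healthy state
docker compose logs -f n8n   # follow the n8n logs until the editor URL shows up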
⚠️ Common Errors and Fixes
- **Environment variable warnings**: fixed by quoting values and escaping $ signs
- **Network conflicts**: stopped all containers and removed the old network (commands below)
- **n8n stuck on startup**: checked the logs, restarted it, and it finally recovered
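For the network conflict specifically, one sequence that works is tearing the stack down and letting Compose recreate the network. Note that docker network prune removes every unused network on the machine, not just this project's, so review the prompt before confirming:

docker compose down              # stop and remove the stack's containers and network
docker network prune             # clear out stale/conflicting unused networks
docker compose --profile cpu up  # bring everything back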
✅ Accessing the Dashboard
Once everything was up, I visited:
http://localhost:5678
…and boom 🔥, the n8n editor was live!
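With the editor live, it's also worth confirming that the Ollama side answers. Ollama listens on port 11434 by default, and listing the installed models is a quick smoke test; swap in whichever model name the kit actually pulled for you:

curl http://localhost:11434/api/tags   # lists locally available models as JSON
# Optional: ask a model for a reply (replace llama3 with a name returned by /api/tags)
curl http://localhost:11434/api/generate -d '{"model":"llama3","prompt":"Say hello","stream":false}'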
🔄 Restarting after Reboots
Each time I turn on my PC, I simply run:
cd ~/self-hosted-ai-starter-kit
docker compose --profile cpu up
📜 Optional Startup Script
To make things easier, I created a file called start-n8n.sh:
#!/bin/bash
cd ~/self-hosted-ai-starter-kit
docker compose --profile cpu up
Then I ran it (you may need to make it executable first):
chmod +x start-n8n.sh
./start-n8n.sh
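If you'd rather not keep a terminal occupied, the same command runs detached; this is plain Docker Compose behavior, not anything specific to the kit:

docker compose --profile cpu up -d   # start in the background
docker compose --profile cpu down    # stop everything when you're done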
🌍 For Others Testing This
- Mac (Apple Silicon)
docker compose --profile mac up
- Linux (AMD GPU)
docker compose --profile gpu-amd up
- Nvidia GPU
docker compose --profile gpu-nvidia up
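For the Nvidia profile in particular, it helps to confirm Docker can actually see the GPU before starting the stack. This is the usual NVIDIA Container Toolkit sanity check, run on the host:

nvidia-smi                                     # driver and GPU visible on the host?
docker run --rm --gpus all ubuntu nvidia-smi   # the container runtime can see it too?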
What's Next
Setting up my self-hosted AI automation lab with n8n, Ollama, and PostgreSQL has been a rewarding experience. It gave me a reliable and flexible platform to build intelligent workflows without relying on cloud services or subscriptions. Everything runs locally, giving me full control over my data and processes.
Whether you're building smart assistants, automating tasks, or experimenting with local LLMs, this starter kit is a great place to begin. If you're curious about how it works or want to dive deeper, I recommend checking out n8n's official documentation. It's clear, practical, and built to support creators at every level.
Iโll be sharing more soon as I build my next n8n project. Feel free to follow, engage, and stay tuned for whatโs coming next.