DEV Community

<devtips/>

How I built a self-hosted AI automation stack without losing my mind

Forget overpriced no-code tools. Here’s how I wired up n8n, Docker, and open-source AI to automate boring tasks and accidentally became my own AI assistant.


So you want to automate things like a nerd

If you’ve ever opened 12 browser tabs, copy-pasted the same thing into 4 different apps, and thought, “There has to be a better way,” congrats: you’ve unlocked the automation itch.

The problem is, automation tools are either:

  • Expensive no-code platforms that make you pay per click like it’s 2010
  • Complicated APIs duct-taped together in your terminal
  • Or “AI” platforms that promise magic and deliver confusion

I wanted something in between. A developer-friendly setup where I could:

  • Run my own automations without monthly bills
  • Use AI to handle repetitive stuff
  • Build workflows without feeling like I’m programming a spaceship

That’s when I discovered the power combo: n8n + Docker + MCP (Mass Code Prompting) + a few open-source AI tools. Toss in some YAML, and suddenly I had a self-hosted stack that actually worked. No subscriptions, no black-box magic, no rate limits that make you cry.

This article is a step-by-step walkthrough of how I built it, starting from nothing and ending with a dev-powered automation system that handles content, code, and tasks while I mostly drink coffee and monitor logs like a boss.

Let’s break it down.

Table of contents:

  1. What’s n8n, and why devs love it
  2. The tech stack we’re building
  3. Setting up n8n with Docker
  4. Automating AI pipelines with MCP
  5. Wiring it all together with n8n
  6. One thing that tripped me up (and how I fixed it)
  7. Why this beats no-code platforms like Zapier
  8. Where to go from here
  9. Helpful links and nerdy resources
  10. Conclusion: automate or evaporate

What’s n8n and why devs love it

Let’s talk about the backbone of this whole automation setup: n8n (pronounced “n-eight-n”, but you’ll call it “the Node thing” for weeks).

At first glance, n8n looks like just another Zapier clone. Draggy blocks, flow lines, webhook triggers: you know the drill. But under the hood? It’s what happens when a developer says: “I want control. And self-hosting. And no API call limits that cost my rent.”

Why n8n isn’t just “open-source Zapier”

Here’s the quick breakdown:


You can literally run JavaScript inside a node, connect APIs however you want, and debug with actual logs. No weird workarounds, no “premium feature only” alerts. Just you, your logic, and infinite nerd power.

Example use cases that made me switch:

  • Scrape something → run through GPT → format → save to Notion
  • Monitor a folder → compress new files → email yourself a zip
  • Build entire publishing pipelines with zero manual input

And here’s the best part: n8n is built to run inside Docker, which makes it insanely portable and easy to deploy. You can spin it up locally, throw it on a VPS, or host it on a Raspberry Pi next to your 3D printer. It’s that flexible.

In this article, we’ll use Docker to run n8n alongside a few open-source AI tools and wire them together into one powerful automation brain.

Next up: what we’re actually building. Stack time.

The tech stack we’re building

Before we dive into terminal commands and JSON spaghetti, let’s zoom out and see what we’re actually building.

We’re going to connect a few powerful tools into one self-hosted automation brain that looks something like this:

Stack overview: Docker runs n8n (the workflow router) alongside MCP (the AI prompt runner), which talks to cloud APIs or local models.

What this actually does

Imagine a workflow where:

  1. You drop a markdown file into a folder
  2. It triggers a workflow via n8n
  3. That file gets processed by MCP using GPT-4 (or a local model)
  4. It returns cleaned-up output or code
  5. n8n then posts it to your blog, sends a Slack message, or pushes it to GitHub

All of this happens without you clicking a single button. That’s the goal: automate boring things using smart tools, in a way that’s fully yours.

No vendor lock-in. No black boxes. Just containers, code, and good ol’ API calls.


Setting up n8n with Docker

Alright, let’s get n8n up and running in a clean Docker container. No need to install random binaries on your OS or deal with weird path issues. Just containers, the dev way.

If you’ve never used Docker before, don’t worry. You’ll basically write one config file, run one command, and boom: self-hosted automation brain online.

Step 1: Create a project folder

mkdir n8n-stack && cd n8n-stack

Inside this folder, we’ll throw everything: your Docker config, data volumes, maybe even logs if you’re into that kind of thing.

Step 2: Write your docker-compose.yml

Create a file called docker-compose.yml and paste this:

version: "3.8"

services:
  n8n:
    image: n8nio/n8n
    restart: always
    ports:
      - "5678:5678"
    environment:
      - GENERIC_TIMEZONE=UTC
      - TZ=UTC
      - N8N_BASIC_AUTH_ACTIVE=true
      - N8N_BASIC_AUTH_USER=admin
      - N8N_BASIC_AUTH_PASSWORD=yourstrongpassword
    volumes:
      - ./n8n_data:/home/node/.n8n

Change that password unless you want some random stranger automating your life.
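One way to keep that password out of the file entirely: Docker Compose substitutes `${VAR}` references from a `.env` file sitting next to `docker-compose.yml`. A minimal sketch, where `N8N_PASSWORD` is a variable name I made up, not an n8n setting:

```yaml
# docker-compose.yml excerpt: pull the password from .env instead of hardcoding it
    environment:
      - N8N_BASIC_AUTH_PASSWORD=${N8N_PASSWORD}
```

Then put `N8N_PASSWORD=yourstrongpassword` in `.env` and keep that file out of git.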

Step 3: Launch it

Now run:

docker compose up -d

Give it a few seconds, then head to http://localhost:5678 in your browser.

If all went well, you should see the glorious n8n editor: a clean UI where you can start building flows that do your bidding.

Optional: Use Watchtower for auto-updates

If you’re planning to leave this running on a server, set up Watchtower to automatically pull the latest version of n8n:

  watchtower:
    image: containrrr/watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    restart: always

Because nothing says “I’m a dev with my life together” like automated container updates.

You now have n8n running locally or on your own server: no cloud dependencies, no vendor pricing plans, and no nonsense.

Automating AI pipelines with MCP

Now that n8n is humming along in its Docker container like a well-fed server daemon, let’s talk about the brains behind our automation: MCP, aka Mass Code Prompting.

Don’t let the name intimidate you. It sounds like something you’d install on a GPU cluster at NASA, but it’s basically just a smart, scriptable wrapper that lets you run LLM prompts at scale. Think: shell scripting, but for AI.

So what does MCP actually do?

MCP is a CLI-first tool that helps you:

  • Run prompts against local or cloud-based AI models (like OpenRouter, LM Studio, Ollama, etc.)
  • Pipe input/output between files and models
  • Handle JSON, Markdown, or text in bulk
  • Chain tasks like a developer, not like a confused end-user

It’s like giving GPT a Linux mustache and saying: “Here, automate my workflow like it’s 1999.”

A simple example workflow

Let’s say you’ve got a folder full of .md files that need to be summarized, reformatted, and prepped for your blog.

With MCP, you can write a simple config like this:

mcp run summarize.yaml

And the summarize.yaml file might look like:

input_dir: ./input
output_dir: ./output
model: openrouter/openai/gpt-4
prompt: >
  Summarize this Markdown file in a friendly, technical tone. Output in the same format.

MCP loops through your files, runs each through the model, and spits out clean summaries.
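Conceptually, that batch run is just a loop over files with a model call in the middle. Here’s a rough shell sketch of the same idea; the `summarize` function is a stand-in for the real model call, not MCP’s API:

```shell
# Sketch of the batch loop summarize.yaml describes.
# summarize() is a placeholder: swap in whatever model CLI you actually use.
summarize() { head -n 3; }

mkdir -p input output
printf '# Title\nFirst line.\nSecond line.\nThird line.\n' > input/demo.md

# Run every Markdown file in input/ through the "model" and mirror it to output/
for f in input/*.md; do
  summarize < "$f" > "output/$(basename "$f")"
done
```

Same shape as the YAML config: an input directory, an output directory, one transformation per file.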

Why it’s awesome for dev automation

  • You can plug it into cron jobs, n8n, or shell scripts
  • It works with both cloud APIs and local models like LM Studio
  • It speaks YAML and bash, not mystery GUI buttons
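For the cron route, a crontab entry like this would kick off the batch every hour (the path and log file are illustrative, not part of MCP):

```
0 * * * * cd /home/me/n8n-stack && mcp run summarize.yaml >> mcp.log 2>&1
```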

Pro tip: If you’re feeling fancy, you can also pass dynamic prompts from n8n using exec nodes or webhook triggers. More on that next.

In short: MCP makes AI useful for real tasks. It doesn’t just answer questions; it works through files, content, and data, and plays nicely with n8n.

Next, we’ll glue it all together and build a workflow where you drop a file into a folder, and it auto-summarizes, formats, and sends it to the next step. No hands needed.

How I wired it all together with n8n

This is where the magic happens. Now that you’ve got n8n running and MCP doing smart AI things, it’s time to link them together into an actual working pipeline.

Think of n8n as your automation router: it catches a trigger (like a new file), runs it through MCP (your local AI engine), and handles the output (maybe publishes it, maybe throws it into Slack, maybe sends it to your Notion dungeon).

Example: Automating content cleanup

Let’s walk through a real-world example: I drop a .md file into a folder, and it gets automatically cleaned, summarized, and emailed to me.

Here’s what happens:

  1. Trigger: n8n watches a folder (via cron or webhook)
  2. Script: It sends the file path to a shell node that runs mcp run summarize.yaml
  3. Process: MCP generates a summary using a local or cloud AI model
  4. Output: n8n picks up the cleaned output and emails it via SMTP or drops it into Slack

n8n nodes involved:

  • Trigger: Cron, Webhook, or File Watcher (custom shell command)
  • Execute Command Node: Calls MCP CLI script with dynamic input
  • Read File Node: Loads the output after MCP finishes
  • Slack/Email Node: Sends you the goods

Example node setup in n8n:

Command:
mcp run summarize.yaml --input ./inputs/{{ $json["filename"] }}

Then in the next node, you can read the file from the output directory and do something with it.
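To make that hand-off explicit, a tiny helper can map the incoming filename to the path the Read File node should pick up. This is purely a sketch: `output_path` and the `outputs/` naming scheme are my invention, not an MCP convention:

```shell
# Hypothetical helper: derive the output path from the input filename,
# so the Execute Command node can echo it for the next n8n node to read.
output_path() {
  printf 'outputs/%s-summary.md\n' "${1%.md}"
}

output_path "notes.md"  # prints outputs/notes-summary.md
```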

Bonus tip: Add delays or retries

Sometimes AI tools take a few seconds. Add a small wait node in n8n to give MCP time to generate output, or use an "if file exists" loop.
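A minimal version of that “if file exists” loop, runnable from an Execute Command node (`wait_for_file` is just my name for it; tune the retry count to your model’s speed):

```shell
# Poll for a file, giving up after N tries (default 10), one second apart.
wait_for_file() {
  file="$1"
  tries="${2:-10}"
  i=0
  while [ ! -f "$file" ]; do
    i=$((i + 1))
    [ "$i" -ge "$tries" ] && return 1
    sleep 1
  done
  return 0
}
```

Call it with `wait_for_file ./outputs/summary.md 30` and branch on the exit code.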


Once this setup is live, you can build basically anything:

  • Auto-format code commits
  • Summarize logs and email daily reports
  • Turn tweets into blog drafts
  • Clean up documents and post to CMS

You’re no longer just automating. You’re running a miniature dev-powered AI assistant.

Next up: the part where things broke. And how I fixed it without rage quitting.

One thing that tripped me up (and how I fixed it)

Everything was running smoothly until it wasn’t.

At one point, I had set up the whole pipeline: n8n triggers were firing, MCP was processing files, but… nothing was happening after the AI output step. No Slack message, no email, no glorious victory.

Instead, I got: “File not found” errors. Classic.

The real culprit: Docker volumes

Here’s what was happening:

  • MCP was writing output to a volume path like /app/output/
  • n8n, running in its own container, was trying to read from /home/node/output/
  • But they weren’t sharing the same volume in docker-compose.yml

So n8n was like: “What file? I’ve never seen that file in my life.”


The fix: shared volumes FTW

In your docker-compose.yml, you need to mount the same folder for both services, like this:

volumes:
  - ./shared_output:/data/output

Then in both n8n and MCP containers, reference /data/output as the directory. That way, they’re both reading/writing from the same spot like civilized processes.
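Concretely, the compose file ends up with both services mounting the same host folder (the `mcp` service name here is just whatever you called your MCP container):

```yaml
services:
  n8n:
    volumes:
      - ./shared_output:/data/output
  mcp:
    volumes:
      - ./shared_output:/data/output
```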

Lesson learned

Containers are like roommates with separate fridges.
If you don’t explicitly share your food (or folders), nobody gets a snack.

Always double-check your volumes when one container “can’t find” what another one just made. You’re not cursed; you’re just running isolated filesystems.

Once I fixed that, the automation ran like butter. n8n picked up the output, forwarded it, and I could finally sit back while my AI workflows did their job.

Next, I’ll explain why this whole setup beats the pants off no-code tools like Zapier or Make, especially if you speak bash and not buzzwords.

Why this beats no-code platforms like Zapier

Let’s be honest: no-code tools are great… until they’re not.

They lure you in with shiny buttons and “5-minute workflows,” and then hit you with:

  • Pricing tiers that feel like they’re billing you per neuron fired
  • Execution limits that make you afraid to test anything
  • Vendor lock-in so tight you need an API crowbar to escape
  • Debugging nightmares with no logs, just “something went wrong”

I’ve used Zapier, Make, and the rest. They’re fine for non-technical teams or quick experiments. But if you’re a developer who prefers vim over visual builders and likes knowing what the hell is going on, then n8n + Docker + open-source tools win. Every. Time.

Here’s why this setup rules:

Full control

Everything runs in your environment: your machine, your server, your rules. Want to add a Python script mid-flow? Go for it. Want to run 10,000 prompts in parallel? Nobody’s stopping you.

No subscription tax

This setup costs you exactly $0/month (unless you’re paying for external APIs like OpenRouter). You’re not locked into arbitrary usage caps or “premium nodes” nonsense.

Real debugging

n8n has logs. Docker has logs. Your CLI tools have logs. You’re not stuck staring at a spinning Zapier icon wondering if your data fell into a void.

Reusable workflows

Because it’s all code/config-based, you can back it up, version it, and reuse it. You don’t have to rebuild flows every time you want to change a single parameter.

Bonus: You stop fearing automation

With no-code tools, I always hesitated:

“Should I run this now or wait until the 1st of the month when my usage resets?”

With this stack, you can go nuts. Let the AI summarize, publish, clean, post, and notify as much as you want. No throttling. No “upgrade now” button.

Where to go from here

So, you’ve got n8n orchestrating flows, MCP running AI prompts, and Docker keeping it all nicely containerized. Feels good, right?

But this setup is just the beginning. Now that you’ve got your automation brain running, here’s how to push it further and make it smarter, faster, weirder.

More ideas for next-level workflows

AI-assisted blogging machine

  • Monitor a folder or Notion page
  • Auto-clean, summarize, or rewrite content
  • Format it into Markdown or HTML
  • Auto-publish to Ghost, Medium, Dev.to, or Hashnode

Email & Slack AI assistant

  • Forward emails to a webhook
  • MCP parses the body, summarizes or translates it
  • n8n routes the result to your team’s Slack

Boom: Instant dev team ghostwriter.

Voice-to-prompt automation

  • Use Whisper or Vosk to transcribe audio
  • Send the transcript to MCP for formatting
  • Auto-generate meeting notes, blog posts, or video summaries

Git hygiene bot

  • Detect sloppy commit messages
  • Use MCP to rewrite them with clarity
  • Push them back via Git CLI

Bonus: add passive-aggressive Slack messages like “Do better, Kyle.”
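The detection half of that bot can be a one-liner. A heuristic sketch: treat any commit subject with fewer than three words as sloppy (the threshold is arbitrary; in practice you’d pipe `git log --pretty=%s` into it):

```shell
# Flag short commit subjects; reads one subject line per line on stdin.
flag_sloppy() {
  awk 'NF < 3 { print "sloppy: " $0 }'
}

# Typical use: git log -n 20 --pretty=%s | flag_sloppy
printf 'fix\nadd login page to dashboard\n' | flag_sloppy  # prints sloppy: fix
```

The flagged subjects are then what you’d feed to MCP for rewriting.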

Stack upgrades to explore

  • n8n queue mode: for heavy task batching
  • Redis + BullMQ: to manage job load
  • Ollama / LM Studio: for running LLMs locally, no API required
  • n8n cloud triggers: so you don’t even need a cron

You’re now in the territory where your machine starts feeling like a junior developer that never sleeps.

And once you start chaining GPT, local models, scripting, and APIs…
you’ll probably automate something so well you forget how to do it manually.
That’s when you know you’ve won.

Helpful links and nerdy resources

If you’re ready to build your own automation stack or improve what you just set up, here’s the good stuff real docs, repos, and tools I personally used (or broke things with).

Core tools

AI tools you can plug in

Docker & DevOps tools

Worth reading

