Building an n8n Ollama Automation Workflow: A Practical Tutorial
The n8n Ollama automation workflow combination is quietly becoming the standard for people who want local AI automation without cloud costs or privacy concerns. I've been running this stack for a couple of months, and in this post I'll walk through exactly how to wire them together — including a sample prompt template you can drop in immediately.
Prerequisites
You need two things running before we start:
Ollama — install from ollama.com. Pull a model:
ollama pull llama3.1
n8n — the fastest local start:
npx n8n
Or via Docker if you prefer:
docker run -d --name n8n -p 5678:5678 \
-v ~/.n8n:/home/node/.n8n \
n8nio/n8n
Open n8n at http://localhost:5678. If both are running locally, Ollama is accessible at http://localhost:11434.
Note: If n8n is running in Docker and Ollama is on the host machine, use http://host.docker.internal:11434 on Mac/Windows, or your host's LAN IP on Linux.
Setting Up the Ollama Credential in n8n
- In n8n, go to Settings → Credentials → New
- Search for "Ollama"
- Set the base URL: http://localhost:11434 (or the appropriate address above)
- Save. No API key needed.
Test it by adding an Ollama node to a workflow and running a quick prompt. If you get a response, the credential is working.
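If you'd rather verify Ollama outside n8n first, you can hit its REST API directly. Here's a minimal sketch, assuming Ollama is listening on localhost:11434 with llama3.1 pulled:

```javascript
// Minimal sketch: call Ollama's /api/generate endpoint directly.
// Assumes Ollama is running on localhost:11434 with llama3.1 pulled.
const OLLAMA_URL = 'http://localhost:11434/api/generate';

function buildRequest(model, prompt) {
  return {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    // stream: false returns one JSON object instead of a token stream
    body: JSON.stringify({ model, prompt, stream: false }),
  };
}

async function ask(prompt) {
  const res = await fetch(OLLAMA_URL, buildRequest('llama3.1', prompt));
  const data = await res.json();
  return data.response; // the model's generated text
}

// ask('Reply with the single word: pong').then(console.log);
```

If that returns text, the n8n credential will work too, since it talks to the same endpoint.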
Anatomy of an n8n Ollama Workflow
A typical n8n Ollama automation workflow has four stages:
- Trigger — what starts the workflow (schedule, webhook, email, file change)
- Processing — data transformation before sending to Ollama
- Ollama call — the AI node, which outputs the model's response
- Action — what to do with the output (save to Notion, send email, log to spreadsheet)
Let's build a real one.
Workflow: Email Digest with AI Summarisation
This workflow fires every morning, fetches recent emails from Gmail, summarises them with Ollama, and sends you a Telegram digest.
Nodes in order:
- Schedule Trigger — cron: 0 8 * * * (8am daily)
- Gmail node — fetch emails from the last 24h, filtered to the inbox
- Code node — format email subjects + snippets into a single string
- Ollama node — send the formatted text with your prompt
- Telegram node — send the response to your Telegram bot
The Prompt Template
This is the prompt I use in the Ollama node. The {{ $json.emailContent }} is a dynamic expression that n8n fills in from the previous node:
You are a personal assistant summarising emails for a busy professional.
Here are today's emails:
---
{{ $json.emailContent }}
---
Instructions:
1. Group emails by urgency: [Urgent], [Normal], [Low/FYI]
2. For each email, write one sentence: what it's about and if any action is needed
3. At the end, write a "Today's priorities" section with max 3 action items
4. Be direct and skip pleasantries. No markdown headers — use plain labels.
Output format:
[URGENT]
- [sender]: [one sentence summary + action needed]
[NORMAL]
- [sender]: [one sentence summary]
[LOW/FYI]
- [sender]: [one sentence summary]
TODAY'S PRIORITIES:
1.
2.
3.
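Under the hood, n8n resolves the {{ $json.emailContent }} expression before the prompt ever reaches Ollama. Conceptually it's simple template substitution — here's a hypothetical stand-in (not n8n's real expression engine) to illustrate what happens:

```javascript
// Hypothetical illustration of how n8n fills {{ $json.field }} expressions.
// This is NOT n8n's actual expression engine -- just the substitution idea.
function fillTemplate(template, json) {
  return template.replace(
    /\{\{\s*\$json\.(\w+)\s*\}\}/g,
    (_, key) => json[key] ?? ''
  );
}

const prompt = fillTemplate(
  'Here are today\'s emails:\n---\n{{ $json.emailContent }}\n---',
  { emailContent: 'From: alice@example.com\nSubject: Invoice due' }
);
// prompt now contains the email text in place of the expression
```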
This prompt consistently gives clean, actionable output. The key things that make it work: explicit output format, a clear role definition, and the instruction to skip markdown (which confuses some messaging apps).
Code Node: Formatting Emails for the Prompt
Here's the JavaScript in the Code node that takes n8n's Gmail output and formats it into the emailContent variable:
// Collect all incoming items from the Gmail node
const emails = $input.all();
// Format each email as a readable block, separated by '---'
const formatted = emails.map(item => {
  const { from, subject, snippet } = item.json;
  return `From: ${from}\nSubject: ${subject}\nPreview: ${snippet}`;
}).join('\n\n---\n\n');
// Code nodes must return an array of items
return [{ json: { emailContent: formatted } }];
Simple, but it prevents the Ollama node from receiving a raw JSON object it can't process.
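To sanity-check this logic outside n8n, you can stub $input with mock Gmail items and run the same mapping in plain Node (the addresses here are made up):

```javascript
// Stand-in for n8n's $input helper so the Code node logic runs outside n8n.
const $input = {
  all: () => [
    { json: { from: 'alice@example.com', subject: 'Invoice due', snippet: 'Please pay by Friday' } },
    { json: { from: 'bob@example.com', subject: 'Lunch?', snippet: 'Free on Thursday?' } },
  ],
};

// Same formatting logic as the Code node above
const formatted = $input.all().map(item => {
  const { from, subject, snippet } = item.json;
  return `From: ${from}\nSubject: ${subject}\nPreview: ${snippet}`;
}).join('\n\n---\n\n');

// formatted is two email blocks separated by a '---' divider
```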
More Workflow Patterns
Once your n8n Ollama automation workflow is running, the same pattern applies to dozens of use cases:
Document classifier:
- Trigger: new file in Google Drive or local folder
- Ollama prompt: classify document type and extract key metadata
- Action: rename file or add tags in Notion
RSS content filter:
- Trigger: schedule every 4 hours
- Fetch RSS feeds, pass titles/descriptions to Ollama
- Prompt: "Rate relevance to [your topic] 1-10. Return only items rated 7+"
- Action: save relevant items to reading list
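A sturdier variant of the RSS filter asks the model for JSON and does the threshold check in a Code node, so a chatty model can't break the workflow. Here's a sketch — the output shape and the 7+ threshold are assumptions from the prompt above, not something Ollama guarantees:

```javascript
// Sketch: filter RSS items by a relevance score the model returned as JSON.
// Assumes the prompt asked Ollama for output like:
//   [{"title": "...", "score": 8}, ...]
function filterRelevant(modelOutput, threshold = 7) {
  let items;
  try {
    items = JSON.parse(modelOutput);
  } catch {
    return []; // model didn't return valid JSON; skip this batch
  }
  return items.filter(item => item.score >= threshold);
}

const sample = '[{"title":"Local LLM news","score":9},{"title":"Celebrity gossip","score":2}]';
// filterRelevant(sample) keeps only the high-scoring item
```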
Customer support draft:
- Trigger: new ticket in Helpscout/Zendesk webhook
- Ollama prompt: draft a professional response based on the issue described
- Action: create draft reply (human reviews before sending)
Handling Ollama Timeouts
Long prompts or larger models can take 30-60 seconds to respond. n8n's default timeout is 10 seconds, which will cause failures.
Fix this in the HTTP Request node settings (if using the generic HTTP approach) or in n8n's environment config:
N8N_DEFAULT_BINARY_DATA_MODE=filesystem
EXECUTIONS_TIMEOUT=300
EXECUTIONS_TIMEOUT_MAX=600
These are environment variables you can set in your Docker run command or .env file.
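With Docker, for example, the timeout variables go on the run command:

```shell
docker run -d --name n8n -p 5678:5678 \
  -v ~/.n8n:/home/node/.n8n \
  -e EXECUTIONS_TIMEOUT=300 \
  -e EXECUTIONS_TIMEOUT_MAX=600 \
  n8nio/n8n
```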
Choosing the Right Model for Automation
For n8n Ollama workflows, I don't always use the biggest model. Bigger models are slower and the latency adds up across chained workflows. My recommendations:
- Structured output tasks (JSON extraction, classification): phi3 or llama3.2:3b — fast and accurate enough
- Writing tasks (drafting, summarising): llama3.1:8b — better reasoning, worth the extra seconds
- Complex reasoning (multi-step analysis): llama3.1:32b-q4 — use sparingly, only when quality matters
For most automation workflows, speed beats quality. A good enough summary in 3 seconds beats a perfect one in 45 seconds.
Key Takeaways
- n8n Ollama automation workflow = Schedule/Trigger → Data prep → Ollama → Action
- Use a Code node to format inputs before sending to Ollama — don't pass raw JSON
- Explicit prompt templates with defined output formats produce consistent results
- Increase n8n timeout settings if you're running larger models that take >10s
- Match model size to task: small fast models for classification, larger for writing
- The same workflow pattern works for email, documents, RSS, support tickets, and more
If you want my full collection of workflow templates and the complete setup walkthrough, I documented everything in a guide here: The Home AI Agent Blueprint.