DEV Community

Paarthurnax

I Built a Personal AI Agent Setup in an Afternoon — Here's the 2025 Guide


A beginner-friendly walkthrough of setting up your own local AI agent without cloud subscriptions or technical headaches.


I kept putting it off. Every time I read about personal AI agent setups, the articles assumed you were a developer with a homelab and strong opinions about Docker networking. I'm not. I'm someone who uses a computer a lot and was tired of paying for three different AI subscriptions while still copy-pasting things manually.

Last month I finally sat down on a Saturday afternoon and figured it out. By dinner, I had a working personal AI agent setup for 2025 that runs locally, costs nothing per query, and actually does things — not just answers questions. Here's exactly what I did, without the jargon.


What "Personal AI Agent" Actually Means

Before I get into the setup, let me clear something up. A lot of people use "AI agent" to mean a chatbot. That's not what I mean.

In the 2025 sense, a personal AI agent setup means software that can take actions on your behalf, triggered automatically or on demand, using a local AI model as its brain. It reads emails, summarises documents, drafts replies, organises files — whatever you set it up to do. No human in the loop unless you want one.

The three pieces you need:

  1. A local AI model — the "brain" (I use Ollama)
  2. An automation layer — the "hands" (I use n8n)
  3. A trigger system — the "senses" (webhooks, email, schedules)
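Conceptually, the three pieces compose into one loop: a trigger hands an event to the model, and the model's reply drives an action. Here's a minimal sketch in Python — every name in it (`run_agent`, the stand-in functions) is my own illustration, not part of Ollama or n8n:

```python
# Minimal sketch of the agent loop: trigger -> local model -> action.
# model_fn stands in for a call to a local model; act_fn stands in
# for whatever the automation layer does with the reply.

def run_agent(event: str, model_fn, act_fn) -> str:
    """Pass a triggered event to the model, then act on its reply."""
    prompt = f"You are my personal agent. Handle this event:\n{event}"
    decision = model_fn(prompt)   # the "brain"
    act_fn(decision)              # the "hands"
    return decision

# Usage with stand-in functions, so the sketch runs without a model:
actions = []
reply = run_agent(
    "New email from the landlord about rent",   # the "senses" delivered this
    model_fn=lambda p: "Draft a polite reply asking for the invoice.",
    act_fn=actions.append,
)
```

Everything in the rest of this guide is just swapping real components into those three slots.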

Step 1: Install Ollama (20 Minutes)

Ollama is the easiest way to run AI models locally. It handles everything: downloading models, serving them via API, memory management.

Go to ollama.com and download for your OS. Mac and Windows both have installers. Run it, done.

Then open a terminal and pull a model:

ollama pull llama3.1

This downloads about 4.7GB. Once it's done, test it:

ollama run llama3.1 "What's the capital of France?"

If it answers, your local AI is working. That's your brain installed.

For a personal AI agent setup, I recommend Llama 3.1 8B if you have 16GB RAM, or Phi-3 Mini if you're on 8GB. Both are genuinely capable for everyday tasks.
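Under the hood, Ollama also serves an HTTP API on port 11434 — that's what automation tools will talk to later. Here's a sketch of the request body for Ollama's documented `/api/generate` endpoint; the actual POST is left commented out so the sketch runs even without Ollama installed:

```python
import json
from urllib import request  # used by the commented-out POST below

def generate_body(prompt: str, model: str = "llama3.1") -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint.

    stream=False asks for one complete JSON response instead of
    a stream of partial tokens.
    """
    return {"model": model, "prompt": prompt, "stream": False}

body = generate_body("What's the capital of France?")

# With Ollama running locally, this returns the completion under
# the "response" key:
# req = request.Request(
#     "http://localhost:11434/api/generate",
#     data=json.dumps(body).encode(),
#     headers={"Content-Type": "application/json"},
# )
# print(json.loads(request.urlopen(req).read())["response"])
```

You'll never write this by hand — n8n does it for you in Step 3 — but it helps to know the plumbing exists.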


Step 2: Install n8n (20 Minutes)

n8n is where the magic happens. It's a visual workflow builder — think Zapier, but self-hosted and free to run locally.

If you have Node.js installed:

npx n8n

Or with Docker:

docker run -it --rm --name n8n -p 5678:5678 -v n8n_data:/home/node/.n8n n8nio/n8n

(The -v flag stores your workflows in a named Docker volume, so the --rm cleanup doesn't wipe them when the container stops.)

Open http://localhost:5678 in your browser. You'll see a visual canvas where you drag and drop workflow blocks.


Step 3: Connect Them Together (15 Minutes)

In n8n, create a new workflow. Add an "Ollama" node (it's built in). Set the base URL to http://localhost:11434 and pick your model.

Now you have a workflow block that can send any text to your local AI and get a response back. Connect that to anything — Gmail, Notion, a file folder, a webhook — and you have an agent.

My first workflow was embarrassingly simple: every morning at 8am, fetch the top 10 headlines from my RSS feeds, pass them to Ollama to summarise in 3 bullet points each, and send the result to my phone via Telegram. Setup time: about 25 minutes including figuring out the Telegram bot setup.
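To make that concrete, here's roughly what that workflow does behind the visual blocks, sketched in Python with a tiny inline feed standing in for my real RSS sources (the real version fetches feeds over HTTP and sends the prompt to Ollama, then Telegram):

```python
import xml.etree.ElementTree as ET

# Tiny stand-in RSS feed; the real workflow fetches yours over HTTP.
FEED = """<rss><channel>
  <item><title>Local AI models keep shrinking</title></item>
  <item><title>n8n adds new workflow templates</title></item>
</channel></rss>"""

def headline_prompt(feed_xml: str, top_n: int = 10) -> str:
    """Extract item titles and wrap them in a summarisation prompt."""
    titles = [t.text for t in ET.fromstring(feed_xml).iter("title")][:top_n]
    bullet_list = "\n".join(f"- {t}" for t in titles)
    return "Summarise each headline in 3 bullet points:\n" + bullet_list

prompt = headline_prompt(FEED)
# This prompt is what gets POSTed to Ollama; the model's reply then
# goes out through the Telegram node.
```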


Step 4: Build Your First Real Agent Workflow

Once the basic connection works, the interesting workflows take shape fast. A personal AI agent setup becomes genuinely useful when you connect it to things you already do:

  • Email triage: Gmail trigger → Ollama classifies urgency → label in Gmail or forward to Notion
  • Document summariser: drop a PDF in a folder → n8n detects it → Ollama summarises → saves the summary alongside the original
  • Meeting prep: calendar event detected → Ollama pulls context from your notes → sends a briefing to your phone

None of these require coding. They're just visual workflows connecting services together.
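The one fragile spot in workflows like these is turning a free-text model reply into a decision. You don't have to code this — a "Switch" block in n8n can do it — but here's the idea as a Python sketch: constrain the prompt to fixed labels, then normalise whatever the model replies. The label names are my own convention, not an n8n or Gmail feature:

```python
URGENCY_LABELS = ("urgent", "normal", "low")

def triage_prompt(subject: str, snippet: str) -> str:
    """Ask the model for exactly one of the allowed labels."""
    return (
        f"Classify this email's urgency as one of: {', '.join(URGENCY_LABELS)}.\n"
        "Reply with the single word only.\n"
        f"Subject: {subject}\nBody: {snippet}"
    )

def parse_label(model_reply: str) -> str:
    """Normalise the model's reply; fall back to 'normal' if it rambles."""
    word = model_reply.strip().lower().rstrip(".")
    return word if word in URGENCY_LABELS else "normal"

label = parse_label("Urgent.")   # models often add punctuation or casing
```

Small local models follow the "reply with one word" instruction most of the time, so the fallback catches the occasional rambling answer instead of breaking the workflow.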


What Hardware Do You Need?

For a personal AI agent setup that runs 24/7, you ideally want a dedicated machine. Options:

  • Old laptop/desktop: Works fine if it has 16GB RAM
  • Mac Mini M2: arguably the best option — low power, fast, silent
  • Raspberry Pi 5 with 8GB: Limited to smaller models but totally functional
  • Your main computer: Fine to start, just leave it on

I run mine on a Mac Mini M2 and it handles everything I throw at it without slowing down my main workflow.


The Part Nobody Tells You

The hardest part isn't the technical setup — it's knowing what to automate. Spend 15 minutes writing down the repetitive things you do every week that involve reading, writing, or organising. That list is your roadmap.

My list had: morning news summary, long email triage, drafting responses to common questions, weekly expense categorisation, and saving interesting links with AI-generated summaries. I've automated four of those five. It saves me maybe 45 minutes a day.

The setup itself really does take an afternoon. The value compounds over months.


Key Takeaways

  • Personal AI agent setup in 2025 = local model (Ollama) + automation layer (n8n) + triggers
  • Ollama + n8n is the fastest beginner path — no coding required
  • Start with one workflow and expand from there; don't try to build everything at once
  • 16GB RAM minimum recommended for comfortable performance with 8B models
  • The bottleneck is your automation ideas, not the technology — write down what you'd automate first
  • Hardware cost is one-time; running cost after setup is essentially zero

I documented the full setup — including all my workflow templates and model recommendations — in a guide here: The Home AI Agent Blueprint.
