Runix
How to Self-Host an AI Customer Chat Agent with Ollama + ChatterMate (No API Keys Needed)

So you want an AI-powered customer chat widget on your site — but you don't want to send every conversation to OpenAI or pay per token?

ChatterMate is an open-source AI chat agent that works seamlessly with Ollama, meaning you can run the entire stack locally. No API keys. No cloud dependency. Full data ownership.

In this post, I'll walk through getting ChatterMate running with Ollama on your own server.

What you'll need:

  • A Linux server (or Mac/Windows with Docker)
  • Docker and Docker Compose
  • About 8GB RAM for running a decent local model

Step 1: Clone the repo

git clone https://github.com/chattermate/chattermate.chat.git
cd chattermate.chat

Step 2: Set up Ollama

Install Ollama and pull a model:

curl -fsSL https://ollama.com/install.sh | sh
ollama pull llama3.2
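Before wiring up ChatterMate, it's worth confirming the model actually responds. Ollama serves a REST API on localhost:11434 by default, so a quick sanity check looks like this (a sketch, assuming a default local install):

```shell
# List installed models -- llama3.2 should appear here
ollama list

# Hit Ollama's REST API directly to confirm it can generate
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Say hello in one sentence.",
  "stream": false
}'
```

If the curl call returns a JSON response with a `response` field, Ollama is up and ChatterMate will be able to reach it at that same address.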

Step 3: Configure and run

Copy the example environment file and configure it to point to your local Ollama instance. Then spin everything up with Docker Compose.
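In practice that's a handful of commands. The exact file and variable names below are illustrative assumptions, not confirmed ChatterMate config keys — check the repo's example environment file for the real ones:

```shell
# Copy the example env file (filename assumed -- check the repo)
cp .env.example .env

# Edit .env to point at your local Ollama instance.
# Key names here are hypothetical; use the ones from the example file, e.g.:
#   OLLAMA_BASE_URL=http://localhost:11434
#   MODEL_NAME=llama3.2

# Bring the stack up in the background
docker compose up -d

# Tail the logs to confirm the services started cleanly
docker compose logs -f
```

One gotcha: if ChatterMate runs inside Docker while Ollama runs on the host, `localhost` inside the container won't reach it — on Docker Desktop you'd use `host.docker.internal:11434` instead.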

What's in v1.0.9 (latest release):

  • Slack integration with OAuth
  • Secure token-based authentication for widgets
  • XSS attack prevention and security hardening
  • Database connection pooling improvements
  • Contributions from external developer @cdmx-in

Why self-host your AI chat?

Complete data privacy - conversations never leave your server.

No per-token costs - run as many conversations as your hardware allows.

Customize everything - the AI agent's personality, knowledge base, and behavior are all yours to tune.

AGPL licensed - truly open source.

Current stats: 58 stars, 26 forks on GitHub, 4 contributors, 586 commits, 10 releases.

Get started: https://github.com/chattermate/chattermate.chat

Star the repo if this is useful - we're a small team and every star helps us keep building in the open. We'd also love PRs, issues, and feedback from the community.
