DEV Community

David Shibley

Ollama + Openclaw = Free AI Agent

Using OpenClaw with Ollama

A Practical Setup and Usage Guide

Overview

OpenClaw is an open-source agent framework designed to automate tasks by
allowing large language models to interact with tools, APIs, and local
environments. When paired with Ollama, you can run these agents fully
locally using open-source models instead of relying on cloud APIs.

This combination enables:

  • Local AI agents with tool access
  • Privacy-preserving automation
  • Offline experimentation with LLM workflows
  • Lower operational costs compared to hosted models

Typical use cases include coding agents, data automation, system
assistants, and research tools.


Architecture

The basic architecture when using OpenClaw with Ollama looks like this:

User
  │
  ▼
OpenClaw Agent
  │
  ▼
Ollama API (localhost:11434)
  │
  ▼
Local LLM Model

OpenClaw sends prompts to Ollama's API endpoint, which runs a local
model and returns responses.
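You can exercise this round trip directly from the command line (assuming `ollama serve` is running and the `qwen3:8b` model has been pulled):

```shell
# POST a generate request to the local Ollama API and print the JSON reply
curl -s http://localhost:11434/api/generate \
  -d '{"model": "qwen3:8b", "prompt": "Say hello.", "stream": false}'
```

This is the same call OpenClaw makes under the hood on every agent step.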


Prerequisites

Before using OpenClaw with Ollama, ensure the following are installed.

Hardware

Recommended minimum:

  Component   Recommendation
  ----------- ----------------------
  RAM         16 GB (32 GB ideal)
  CPU         Modern multi-core
  GPU         Optional but helpful
  Storage     20--50 GB for models

Software

1. Python

Python 3.10+

Verify:

python --version

2. Ollama

Install Ollama from:

https://ollama.ai

Run the service:

ollama serve

Pull a model:

ollama pull qwen3:8b

Other recommended models:

  • mistral
  • codellama
  • phi
  • deepseek-coder

3. Git

Required to clone the OpenClaw repository.

git --version

4. Virtual Environment (Recommended)

Create a Python environment:

python -m venv venv
source venv/bin/activate

Windows:

venv\Scripts\activate

Installing OpenClaw

Clone the repository:

git clone https://github.com/<openclaw-repo>/openclaw.git
cd openclaw

Install dependencies:

pip install -r requirements.txt

(Use pip3 instead if pip on your system points at Python 2.)


Configuring OpenClaw to Use Ollama

Ollama runs at:

http://localhost:11434

Example configuration:

MODEL_PROVIDER = "ollama"
MODEL_NAME = "qwen3:8b"
OLLAMA_BASE_URL = "http://localhost:11434"

Example request payload:

{
  "model": "qwen3:8b",
  "prompt": "Explain recursion simply.",
  "stream": false
}

Verifying the Setup

Test Ollama first:

ollama run qwen3:8b

Example prompt:

Explain how neural networks work.

Then test from OpenClaw itself by launching it and running an agent task. The exact launch command depends on the version you cloned; use the entry point documented in the OpenClaw repository's README (note that `ollama` has no `launch` subcommand, so OpenClaw must be started separately).

Example Agent Workflow

A typical OpenClaw agent cycle:

  1. Receive task
  2. Send prompt to model
  3. Model chooses a tool or action
  4. Execute tool
  5. Feed results back to model
  6. Repeat until complete

Example:

User Task:
"Find the latest AI news and summarize it."
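The cycle above, driven by that task, can be sketched as a plain Python loop. This is a minimal illustration with a stubbed model and one hypothetical tool; the real OpenClaw framework wires the model, tools, and history handling up for you:

```python
import json

def run_agent(task, model, tools, max_steps=5):
    """Drive a simple tool-use loop: prompt the model, run any tool
    it requests, feed the result back, repeat until a final answer."""
    history = [f"Task: {task}"]
    for _ in range(max_steps):
        reply = model("\n".join(history))      # step 2: send prompt
        action = json.loads(reply)             # model answers as JSON
        if action["type"] == "final":          # step 6: task complete
            return action["answer"]
        result = tools[action["tool"]](action["input"])   # step 4: run tool
        history.append(f"Tool {action['tool']} returned: {result}")  # step 5
    raise RuntimeError("agent did not finish within max_steps")

# Stub model: first asks for a web search, then returns a final answer.
replies = iter([
    json.dumps({"type": "tool", "tool": "search", "input": "AI news"}),
    json.dumps({"type": "final", "answer": "Summary of AI news"}),
])
tools = {"search": lambda q: f"3 articles about {q}"}
print(run_agent("Find the latest AI news and summarize it.",
                lambda prompt: next(replies), tools))
# prints "Summary of AI news"
```

With a real model behind it, the only change is that `model` becomes a call to the Ollama API instead of a stub.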

Example Use Cases

1. Local Coding Assistant

Recommended models:

  • deepseek-coder
  • codellama

Example prompt:

Create a Python script that renames files based on date.

2. Personal Automation Agent

Examples:

  • Organize files
  • Manage downloads
  • Process documents
  • Summarize PDFs

Example workflow:

Input:
Summarize all PDFs in /research
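The fan-out step of that workflow can be sketched as follows: collect the PDFs and build one summarization prompt per file. Actual PDF text extraction needs a library such as pypdf, so it is passed in as a pluggable function here:

```python
from pathlib import Path

def build_summary_prompts(folder, extract_text):
    """Pair each PDF in `folder` with a summarization prompt.

    `extract_text` is whatever PDF-to-text function you plug in
    (e.g. one built on pypdf); it is deliberately left abstract.
    """
    prompts = {}
    for pdf in sorted(Path(folder).glob("*.pdf")):
        prompts[pdf.name] = (
            "Summarize the following document in 5 bullet points:\n\n"
            + extract_text(pdf)
        )
    return prompts
```

Each prompt would then be sent to the model through Ollama's API, one request per document.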

3. Research Assistant

The agent can:

  • scrape web pages
  • summarize research
  • compare sources
  • generate reports

Example prompt:

Compare open-source LLMs released in the last year.

4. Data Analysis

Example:

Analyze this CSV and explain key trends.

Agent actions:

  1. Load dataset
  2. Run Python analysis
  3. Generate summary
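Behind step 2, the agent might execute something like the sketch below, using only the standard library. The column names are made up for illustration:

```python
import csv
import io
import statistics

def summarize_numeric(csv_text, column):
    """Compute basic trend statistics for one numeric CSV column."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    values = [float(r[column]) for r in rows]
    return {
        "count": len(values),
        "mean": statistics.mean(values),
        "min": min(values),
        "max": max(values),
        "trend": "up" if values[-1] > values[0] else "down/flat",
    }

sample = "month,sales\nJan,100\nFeb,120\nMar,150\n"
print(summarize_numeric(sample, "sales"))
```

The model then turns the resulting statistics dict into a plain-language summary (step 3).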

5. System Administration Assistant

Example:

Analyze the last 1000 lines of system logs and find errors.
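The tool behind such a task could be as simple as this sketch, which tails the last N lines of a log file and keeps anything that looks like an error:

```python
from collections import deque

def find_errors(log_path, last_n=1000):
    """Return error-level lines from the last N lines of a log file."""
    with open(log_path, errors="replace") as f:
        tail = deque(f, maxlen=last_n)  # keeps only the final N lines
    return [line.rstrip() for line in tail if "error" in line.lower()]

# Usage (path is illustrative):
# print(find_errors("/var/log/syslog"))
```

The agent would feed the matched lines back to the model for explanation and triage.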

Example Python Integration

import requests

# Ollama's local generate endpoint
url = "http://localhost:11434/api/generate"

payload = {
    "model": "qwen3:8b",
    "prompt": "Explain how transformers work",
    "stream": False,  # return the full answer in a single response
}

response = requests.post(url, json=payload)
response.raise_for_status()  # fail loudly if Ollama is not running

print(response.json()["response"])

Performance Tips

Choose the Right Model

  Task                  Recommended Model
  --------------------- -------------------
  Coding                deepseek-coder
  General reasoning     qwen3
  Fast responses        mistral
  Lightweight systems   phi

Use Quantized Models

Ollama's default model tags are already quantized (typically 4-bit), which is why an 8B model such as:

qwen3:8b

runs on machines with modest RAM.

Benefits:

  • faster inference
  • lower RAM usage

Enable Streaming

Streaming returns tokens as they are generated, so long outputs start appearing immediately instead of only after the full response is complete. Total generation time is unchanged; perceived latency drops.
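With `"stream": true`, Ollama's `/api/generate` endpoint returns newline-delimited JSON chunks. A standard-library-only sketch of consuming that stream (assumes `ollama serve` is running at the default port):

```python
import json
import urllib.request

def parse_chunk(line: bytes) -> dict:
    """Decode one NDJSON line of Ollama's streaming response."""
    return json.loads(line)

def stream_generate(prompt, model="qwen3:8b",
                    url="http://localhost:11434/api/generate"):
    """Yield response text piece by piece as Ollama generates it."""
    payload = json.dumps(
        {"model": model, "prompt": prompt, "stream": True}).encode()
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        for line in resp:
            chunk = parse_chunk(line)
            yield chunk.get("response", "")
            if chunk.get("done"):  # final chunk carries done=true
                break

# Usage (requires a running Ollama server):
# for piece in stream_generate("Explain recursion simply."):
#     print(piece, end="", flush=True)
```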


Security Considerations

Recommendations:

  • restrict file system access
  • sandbox tool execution
  • review auto-execution features
  • avoid exposing the Ollama API externally
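For the last point, Ollama reads the `OLLAMA_HOST` environment variable to decide which address to bind; keeping it on loopback (the default) is the simplest safeguard:

```shell
# Keep the API on loopback (the default); avoid binding to 0.0.0.0
# on a shared network unless you front it with an authenticating proxy.
export OLLAMA_HOST=127.0.0.1:11434
ollama serve
```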

Troubleshooting

Ollama Not Running

Error:

connection refused localhost:11434

Fix:

ollama serve

Model Not Found

Error:

model not found

Fix (substitute the model you are actually using):

ollama pull qwen3:8b

Slow Performance

Possible causes:

  • insufficient RAM
  • model too large
  • CPU-only inference

Solutions:

  • use smaller models
  • enable GPU acceleration
  • use quantized models

Advanced Features

Tool Creation

OpenClaw allows custom tools such as:

  • web search
  • database queries
  • file system access
  • shell commands
  • APIs

Multi-Agent Systems

Example roles:

  • researcher
  • coder
  • reviewer
  • executor

Memory Systems

Agents can maintain persistent memory such as:

  • previous tasks
  • learned preferences
  • stored documents
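A minimal sketch of file-backed memory; the JSON-file storage format here is illustrative, not OpenClaw's own persistence layer:

```python
import json
from pathlib import Path

class AgentMemory:
    """Tiny persistent key-value memory backed by a JSON file."""

    def __init__(self, path="agent_memory.json"):
        self.path = Path(path)
        self.data = (json.loads(self.path.read_text())
                     if self.path.exists() else {})

    def remember(self, key, value):
        """Store a value and flush the whole store to disk."""
        self.data[key] = value
        self.path.write_text(json.dumps(self.data, indent=2))

    def recall(self, key, default=None):
        """Look a value up, falling back to a default."""
        return self.data.get(key, default)
```

Anything remembered in one session survives a restart, since a fresh `AgentMemory` reloads the file on construction.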

Conclusion

Combining OpenClaw with Ollama creates a powerful platform for running
autonomous AI agents locally. With the right models and tools, it
enables everything from coding assistants to research automation without
relying on external APIs.
Please feel free to leave questions in the comments.
