Ollama 0.17 just shipped native OpenClaw integration with web search out of the box. Two commands and you have a personal AI agent running on your machine with local models.
This is great for adoption. It's terrifying for security.
## What Ollama 0.17 Does
Ollama's latest release lets you set up OpenClaw with open models (Llama, Mistral, DeepSeek) and web search. No cloud API keys. Fully local inference.
```shell
ollama launch openclaw
```
One command. You now have an AI agent that can send emails, manage your calendar, read/write files, execute shell commands, search the web, and connect to WhatsApp/Telegram/iMessage.
All running with your user permissions on your actual machine.
## Why This Is a Security Problem
> ⚠️ The Ollama + OpenClaw combo inherits every OpenClaw vulnerability. Local models don't fix host-level security.
Running local models solves one problem (data stays local) but creates a false sense of security:
### 1. Your Entire Filesystem Is Exposed
The agent runs as your user. It can read ~/.ssh, ~/.aws, browser cookies, crypto wallets, tax documents — everything.
### 2. The WebSocket Hijack Still Works
Oasis Security proved any website can brute-force the gateway port and take full control. Local models don't change this.
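To see why brute-forcing is practical, here's a sketch of the attack surface. The port numbers are illustrative assumptions, not OpenClaw's actual scheme; the point is that a local gateway bound to a small port range gives a malicious page only a few hundred WebSocket URLs to try.

```javascript
// Build the candidate WebSocket URLs a malicious page would probe.
// Browsers happily open WebSockets to 127.0.0.1, so nothing stops
// an arbitrary website from enumerating these.
function candidateGatewayUrls(basePort, count) {
  const urls = [];
  for (let p = basePort; p < basePort + count; p++) {
    urls.push(`ws://127.0.0.1:${p}/`);
  }
  return urls;
}

// In a browser, the hijack loop is roughly (hypothetical handler names):
//   for (const url of candidateGatewayUrls(18000, 200)) {
//     const ws = new WebSocket(url);            // no origin check stops this
//     ws.onopen = () => sendAgentCommands(ws);  // attacker-controlled
//   }
```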
### 3. Prompt Injection via Web Search
Ollama 0.17 adds web search. The agent fetches content from the internet and processes it. A malicious webpage can embed prompt injection that hijacks the agent. Your "local" agent is now executing attacker instructions.
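As a concrete illustration, here is a hypothetical pre-filter (not a feature of Ollama, OpenClaw, or clawmoat) that scans fetched web content for common injection phrasing before it reaches the model. Pattern matching only catches crude attacks; treat it as a tripwire, not a defense.

```javascript
// Naive injection tripwire: flag fetched text containing phrases that
// commonly appear in prompt-injection payloads. Pattern list is illustrative.
const INJECTION_PATTERNS = [
  /ignore (all )?(previous|prior) instructions/i,
  /you are now (in )?developer mode/i,
  /system prompt/i,
  /disregard your (rules|guidelines)/i,
];

function looksLikeInjection(text) {
  return INJECTION_PATTERNS.some((re) => re.test(text));
}
```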
### 4. Skill Supply Chain
More than 341 malicious skills have been documented. A compromised skill runs with full access to your system.
### 5. No Permission Boundaries
OpenClaw has no concept of "read files but not execute commands" or "access calendar but not SSH keys." All-or-nothing.
## The One-Click Problem
When something is easy to install, people don't think about security. Ollama's user base is developers who want local AI — not enterprise security teams. They'll run `ollama launch openclaw`, connect it to WhatsApp, and forget about it.
> Microsoft: "OpenClaw should be treated as untrusted code execution with persistent credentials. It is not appropriate to run on a standard personal or enterprise workstation."
Now Ollama is making it trivial to do exactly what Microsoft says not to do.
## How to Secure It

```shell
npm install -g clawmoat
```
### 1. Permission Tiers
```javascript
const { HostGuardian } = require('clawmoat');

const guardian = new HostGuardian({
  mode: 'standard',
  workspace: '~/openclaw-workspace',
  forbiddenZones: ['~/.ssh', '~/.aws', '~/.gnupg'],
});
```
### 2. Network Monitoring
```javascript
const { NetworkEgressLogger } = require('clawmoat');

const logger = new NetworkEgressLogger();
// Blocks cloud metadata, private IPs, known-bad domains
```
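To make "blocks cloud metadata and private IPs" concrete, here is a hypothetical standalone check (not clawmoat's code). The 169.254.169.254 metadata address and the RFC 1918 private ranges are real; everything else is illustrative.

```javascript
// Return true for destinations an agent should never reach out to.
function isBlockedDestination(ip) {
  if (ip === '169.254.169.254') return true;           // cloud metadata service
  if (ip === '127.0.0.1' || ip === '::1') return true; // loopback
  const octets = ip.split('.').map(Number);
  if (octets.length !== 4 || octets.some(Number.isNaN)) return false;
  const [a, b] = octets;
  if (a === 10) return true;                           // 10.0.0.0/8
  if (a === 172 && b >= 16 && b <= 31) return true;    // 172.16.0.0/12
  if (a === 192 && b === 168) return true;             // 192.168.0.0/16
  return false;
}
```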
### 3. Skill Auditing
```shell
npx clawmoat skill-audit ~/.openclaw/skills/
```
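What can an audit actually look for? A first pass is string matching on risky primitives; the sketch below is a hypothetical illustration, not clawmoat's implementation, and a serious audit would need AST-level analysis rather than substring search.

```javascript
// Markers for shell execution, dynamic code, and credential access
// inside skill source. Illustrative list only.
const RISKY_CALLS = [
  'child_process', 'execSync(', 'spawn(',
  'eval(', 'Function(', 'process.env',
];

// Return the risky markers found in a skill's source text.
function auditSkillSource(source) {
  return RISKY_CALLS.filter((marker) => source.includes(marker));
}
```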
### 4. WebSocket Hijack Detection
```javascript
const { GatewayMonitor } = require('clawmoat');

const monitor = new GatewayMonitor();
// Detects brute-force port scanning, suspicious WS origins
```
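"Suspicious WS origins" boils down to one rule: a local gateway should refuse WebSocket upgrades whose `Origin` header belongs to a web page, since legitimate local clients either send no `Origin` or a known one. A minimal standalone sketch (allowlist entries are assumptions):

```javascript
// Origins a local gateway might legitimately see from its own UI.
const ALLOWED_ORIGINS = new Set(['http://localhost', 'http://127.0.0.1']);

// Decide whether to accept a WebSocket upgrade based on its Origin header.
function allowUpgrade(originHeader) {
  if (originHeader === undefined) return true; // non-browser client, e.g. a CLI
  let origin;
  try {
    origin = new URL(originHeader).origin;
  } catch {
    return false; // malformed Origin header: refuse
  }
  return ALLOWED_ORIGINS.has(origin);
}
```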
### 5. Financial Data Protection
```javascript
const { FinanceGuard } = require('clawmoat');

const guard = new FinanceGuard();
// Blocks crypto wallets, banking files, tax documents
```
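In practice, blocking financial data means denying file access when a path matches known-sensitive names. The pattern list below is a hypothetical sketch, not FinanceGuard's actual rules (`wallet.dat` is Bitcoin Core's real wallet filename; the rest are illustrative).

```javascript
// Path patterns for files an agent should never read. Illustrative only.
const SENSITIVE_FILE_PATTERNS = [
  /wallet\.dat$/i,        // Bitcoin Core wallet
  /keystore/i,            // Ethereum-style keystores
  /\btax\b.*\.(pdf|csv|xlsx?)$/i,
  /bank.*statement/i,
];

function isSensitiveFile(filePath) {
  return SENSITIVE_FILE_PATTERNS.some((re) => re.test(filePath));
}
```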
## The Bottom Line
Ollama 0.17 is going to put OpenClaw on thousands of new machines. Most won't have any security layer between the agent and the host.
If you're going to run OpenClaw — with Ollama or otherwise — run it with a security moat.
- ⭐ GitHub
- 📖 Landing page
- 💰 Finance module
Open-source (MIT), zero dependencies, 277 tests passing.