OpenClaw is an open-source agent that is free to download and install. However, using it can burn through tokens at an alarming rate.
The costs don’t just come from core model responses; they also stem from web reading, memory retrieval, summarization, tool calls, and the workspace files and bootstrap configurations packed into system prompts. As the context grows with every turn, so does the bill.
Running OpenClaw with Claude Sonnet (say, 10 million input and 10 million output tokens a month) can easily cost nearly a hundred dollars. If you truly run it as a 24/7 agent on high-difficulty tasks with advanced models, burning through thousands wouldn't be surprising. And demand is only climbing: OpenRouter's weekly processed token volume recently jumped from 6.4 trillion to 13 trillion.
You wanted AI to work for you, but it turns out you're just handing your entire paycheck over to the AI.
Since cloud token expenses are so high, local execution is an excellent alternative. This is where OpenClaw and Ollama work best together.
Ollama is responsible for running open-source models like Llama, Mistral, or DeepSeek on your local machine, bringing API costs down to zero. Other than your GPU fans spinning a bit faster, there are no downsides. Plus, all private data and code remain local, ensuring privacy and security while keeping costs under control.
Practical Guide: Combining OpenClaw and Ollama Locally
OpenClaw is a framework developed in Node.js and requires an environment running Node.js 22 or higher.
To deploy the Node.js environment, you can use ServBay.
As a local web development environment manager, ServBay can handle different versions of Node.js. Through its graphical interface, users can quickly switch to a Node.js 22 environment, avoiding the hassle of manual environment variable configuration or version conflicts.
Once the environment is ready, OpenClaw can be deployed with simple commands:
curl -fsSL https://molt.bot/install.sh | bash
openclaw onboard --install-daemon
Again, through ServBay, you can download and install Ollama with one click.
Then, simply select and download the appropriate LLM from the menu on the left side of ServBay.
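If you'd rather use the command line, the same download can be done with Ollama's own CLI. The model name `llama3.2` below is just an example; pick whichever model ServBay lists:

```shell
# Pull a model into the local Ollama library (model name is an example).
# Guarded so the snippet degrades gracefully where Ollama isn't installed.
if command -v ollama >/dev/null 2>&1; then
  ollama pull llama3.2   # download the weights
  ollama list            # confirm the model is available locally
  pulled="yes"
else
  echo "ollama not found; install it via ServBay first"
  pulled="no"
fi
```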
OpenClaw does not ship with a model of its own to do the "reasoning"; it needs to be linked to Ollama using the following command:
ollama launch openclaw
This command configures OpenClaw to use the models provided by your local Ollama instance.
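A quick way to verify the link is to hit Ollama's local HTTP API directly. By default Ollama listens on port 11434; this is standard Ollama behavior, not OpenClaw-specific:

```shell
# /api/tags lists the models Ollama is currently serving; any client that
# can reach this endpoint (including OpenClaw) can use those models.
if curl -fsS http://localhost:11434/api/tags >/dev/null 2>&1; then
  echo "Ollama is reachable at http://localhost:11434"
  ollama_up="yes"
else
  echo "Ollama is not responding; start it and re-run 'ollama launch openclaw'"
  ollama_up="no"
fi
```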
Security Defense: Git Safeguards and Permission Control
When an AI is granted system-level permissions, security risks escalate. You’ve likely seen news reports about OpenClaw accidentally deleting emails.
An agent with execution permissions can cause serious damage if it misunderstands an instruction. To counter these potential risks, we must establish solid defense mechanisms.
Git as a Safety Net
OpenClaw recommends bringing the entire workspace—including configuration files and memory logs—under Git management.
git init
git add AGENTS.md SOUL.md memory/
git commit -m "Initialize agent workspace"
If the agent installs a wrong skill or makes abnormal changes to configuration files during a task, developers can use git revert to quickly roll back the system state to a safe point. This version-controlled evolution makes AI behavior transparent and reversible.
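Concretely, the safety net looks like this. The file names follow the workspace layout above, and the agent's "bad change" is simulated in a throwaway directory:

```shell
# Simulate the rollback loop in a disposable workspace.
workdir=$(mktemp -d)
cd "$workdir"
git init -q
git config user.email "agent@example.com"   # local identity for the demo
git config user.name  "agent"

# Known-good state: commit the agent's config.
echo "persona: careful assistant" > SOUL.md
git add SOUL.md
git commit -qm "Initialize agent workspace"

# The agent makes an unwanted change and commits it.
echo "persona: chaos gremlin" > SOUL.md
git commit -qam "Agent edited its own config"

# Roll back: revert the bad commit, restoring the file.
git revert --no-edit HEAD
cat SOUL.md    # back to "persona: careful assistant"
```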
Permission Limits and Sandbox Mode
An agent’s power comes from its skill system. To prevent third-party skills from carrying malicious code, you should manually audit the source code before installation to confirm what commands are being executed. Additionally, for agents handling complex tasks, it is recommended to run them in isolated environments like virtual machines or Docker containers.
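A minimal sketch of that isolation, assuming Docker is installed and using the stock `node:22-slim` image as a stand-in for the agent's runtime:

```shell
# Run agent work in a disposable container:
#   --rm            throw the container away afterwards
#   --network none  no outbound access for untrusted skills
#   -v ...:ro       mount the workspace read-only; the host stays untouched
if command -v docker >/dev/null 2>&1; then
  docker run --rm --network none \
    -v "$PWD:/workspace:ro" \
    node:22-slim node --version || echo "container run failed"
else
  echo "docker not available; use a VM instead"
fi
docker_checked="yes"
```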
Authentication and Private Access
The Gateway service should never be directly exposed to the public internet. The secure practice is to enable gateway authentication and perform risk diagnostics using openclaw doctor. For remote access, use a VPN or internal tunneling tools to ensure only authorized users can send commands to the agent.
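One common pattern is an SSH tunnel: keep the gateway bound to localhost on the server and forward that port to your own machine. The host name and port below are placeholders; the `-G` flag makes this a dry run that only prints the resolved configuration, so drop it to actually open the tunnel:

```shell
# Forward local port 8080 to the gateway's localhost:8080 on the server.
#   -N  no remote command, just the port forward
#   -L  local_port:remote_host:remote_port
#   -G  dry run: print the resolved config (including the forwarding rule)
ssh -G -N -L 8080:localhost:8080 user@agent-server.example.com | grep -i localforward
```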
Summary
OpenClaw is a great project, and it works well as a "toy." However, if you truly want it to be a 24/7 employee, both the costs and the risks remain quite high.