Haofei Feng
OpenClaw on Your Own Hardware: A Security-First Setup Guide

When OpenClaw went viral in January 2026, I didn't jump in immediately. I watched from the sidelines for a few weeks, reading the docs, scanning the GitHub issues, and thinking about what it would actually take to give an AI agent access to your messaging apps, files, and tools while maintaining privacy and security.

Don't get me wrong — the value proposition is compelling:

  1. Personal Assistant Agent, anytime, anywhere. It integrates with Telegram and Slack natively. I can talk to my assistant from my phone, my laptop, or any device with a messaging app. No special client needed.
  2. Skills and memory without the plumbing. Need it to read a PDF? There's a skill. Fetch a YouTube transcript? A skill. Post to X? A skill. You don't wire up APIs yourself — you install a skill and the agent figures out when to use it.
  3. It remembers you. This is the real draw. Every conversation becomes part of its memory. The more you use it, the better it understands your preferences, your projects, your communication style. It becomes genuinely personal.

An agent that remembers everything about you is powerful, and equally dangerous. Your privacy and security are only as good as the setup behind it. So I didn't install it until I had an architecture I was confident in. This article is that architecture.

Rule of Thumb: Isolation, Isolation, Isolation.

The Setup: Docker on a Mac Mini

I decided to self-host on my Mac Mini (happened to have one before the hype) rather than use a cloud VPS. My data stays on my hardware, on my network, under my control.

I built a custom Docker image on top of the official OpenClaw base image, installing the tools and dependencies I needed (ffmpeg, Chromium, Python libraries, CLI tools) while keeping full control over what goes in.

FROM openclaw:local
USER root
RUN apt-get update && apt-get install -y --no-install-recommends \
    ffmpeg python3-pip chromium xvfb socat ...
RUN pip3 install --break-system-packages scrapling playwright twikit ...
RUN npm install -g @google/gemini-cli clawhub ...
USER node

Notice the last line. That matters.

Architecture: Where the Security Boundaries Are

Here's how the whole system fits together.

(Architecture diagram)

Security Decisions I Made Along the Way

1. No root access at runtime

The container runs as the node user. Root is only used during the image build to install system packages. At runtime, the agent process has no privilege escalation path. If a prompt injection or malicious skill tries to apt-get install something or modify system files, it fails.
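The same guarantee can be reinforced at the Docker level. Here's a sketch of the relevant docker-compose.yml options — the service name and image tag are my own placeholders, but the security options themselves are standard Docker features:

```yaml
services:
  openclaw:
    image: openclaw-custom:latest   # placeholder tag for the image built above
    user: "node"                    # redundant with USER node in the Dockerfile, but explicit
    security_opt:
      - no-new-privileges:true      # blocks setuid/setgid privilege escalation
    cap_drop:
      - ALL                         # drop every Linux capability the agent doesn't need
```

With `no-new-privileges` and all capabilities dropped, even a bug that escapes the `node` user has very little to escalate with.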

2. Minimal filesystem exposure

The container can only see one folder from my host: a single Google Drive directory used for shared workspace files, skills, and memory. That's it.

volumes:
  - "/Users/me/Google Drive/My Drive/OpenClaw:/home/node/workspace/shared"

No access to my home directory. No access to my SSH keys, browser profiles, documents, or anything else on the Mac. If the agent is compromised, the blast radius is one folder.

3. Secrets stay out of the image

API keys, tokens, and credentials are never baked into the Dockerfile. They're passed as environment variables in docker-compose.yml or stored in Docker named volumes. The image itself can be shared or pushed to a registry without leaking anything.

environment:
  NOTION_API_KEY: "${NOTION_API_KEY}"
  X_AUTH_TOKEN: "${X_AUTH_TOKEN}"
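One common way to supply those variables is an untracked .env file next to docker-compose.yml, which Compose reads automatically. The values below are placeholders, not real keys:

```
# .env — keep this file out of version control (.gitignore it)
NOTION_API_KEY=replace-with-your-key
X_AUTH_TOKEN=replace-with-your-token
```

The image stays clean; the secrets live only on the host and in the running container's environment.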

4. Gateway authentication

OpenClaw's gateway, the WebSocket server that Telegram and Slack connect through, is protected by an auth token. Without it, nobody can send commands to the agent or read its responses. This matters because the gateway binds to LAN (not just localhost) to be reachable from Docker's network layer.
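The token is supplied the same way as the other secrets. A rough compose sketch — note that the variable name OPENCLAW_GATEWAY_TOKEN here is illustrative, not necessarily what OpenClaw expects, so check the docs for the real one:

```yaml
environment:
  OPENCLAW_GATEWAY_TOKEN: "${OPENCLAW_GATEWAY_TOKEN}"  # hypothetical name
ports:
  - "18789:18789"  # gateway WebSocket, reachable on the LAN
```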

5. Channel access controls

Not everyone can talk to the bot:

  • Telegram DMs use a pairing policy. New users must send a pairing code that I manually approve via CLI. No random person can DM my bot and start a conversation.
  • Slack DMs use an allowlist. Only specific user IDs (mine and one colleague's) can direct message the bot.
  • Group chats require an @mention. The bot won't respond to every message in a channel — only when explicitly addressed. This prevents accidental data leakage and reduces the attack surface for prompt injection via other users' messages.
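Taken together, the three policies boil down to one small decision function. Here's a Python sketch of that logic — the function and parameter names are mine for illustration; OpenClaw implements this internally:

```python
def should_process(channel: str, sender: str, text: str,
                   paired: set[str], allowlist: set[str],
                   bot_handle: str = "@assistant") -> bool:
    """Decide whether the agent should handle an incoming message."""
    if channel == "telegram_dm":
        return sender in paired      # pairing policy: manually approved users only
    if channel == "slack_dm":
        return sender in allowlist   # allowlist: specific user IDs
    if channel == "group":
        return bot_handle in text    # groups: respond only when explicitly @mentioned
    return False                     # unknown channels are ignored
```

Everything that fails the gate is dropped before it ever reaches the model, which is what shrinks the prompt injection surface.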

6. Skill vetting

Before installing third-party skills, I installed the openclaw-skill-vetter skill from clawhub. It performs a pre-install security review of skills — checking for suspicious patterns, excessive permissions, or potential prompt injection vectors. Think of it as a code review gate for your agent's capabilities.

clawhub install openclaw-skill-vetter

Not every skill on clawhub is trustworthy. Some request broad file access or shell execution. The vetter helps you make informed decisions before granting those capabilities.

7. SOUL.md behavioural constraints

OpenClaw reads a SOUL.md file that shapes the agent's behaviour. I added explicit constraints:

  • Don't retry failed tools endlessly (prevents runaway API calls)
  • Use Google Search grounding instead of fetching arbitrary URLs (most sites block bots anyway)
  • Browser tool is available but restricted to specific use cases (logins, interactive tasks) — not general browsing

This isn't a security boundary in the traditional sense — it's a guardrail. The agent follows these instructions because the LLM respects them, not because they're enforced at the system level. But combined with the other measures, it reduces the surface area for unintended behaviour.
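For reference, the constraints above translate into a few plain-language lines in the file. The exact wording below is mine, not OpenClaw's canonical format:

```markdown
## Constraints
- If a tool fails twice in a row, stop and report the error instead of retrying.
- Prefer Google Search grounding over fetching arbitrary URLs.
- Use the browser tool only for logins and interactive tasks, never general browsing.
```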

8. Network and Social Media Account Isolation

The container runs on Docker's bridge network with its own IP, exposing only the ports it needs (18789 for WebSocket, 8080 for OAuth callbacks). Telegram polling is forced to IPv4 to avoid IPv6 routing issues in containers. For social media, I registered dedicated isolated accounts, separate from my personal ones, so the agent's access is scoped and any compromise stays contained.
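In docker-compose.yml this looks roughly like the following; the network name agent_net is my own choice:

```yaml
services:
  openclaw:
    networks: [agent_net]
    ports:
      - "18789:18789"   # gateway WebSocket
      - "8080:8080"     # OAuth callbacks

networks:
  agent_net:
    driver: bridge      # container gets its own IP on an isolated bridge network
```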

10 Lessons Learned: Running a Private, Secure AI Agent

After weeks of running OpenClaw and reading every security advisory, post-mortem, and researcher disclosure I could find, here's what I'd tell anyone setting this up.

1. One rule above all: Isolation, Isolation, Isolation. Container, network, accounts: treat each as its own blast radius. Isolated Docker network, dedicated social media accounts, scoped API keys. If one layer is compromised, nothing else falls with it.

2. Sandbox or don't bother. Never run OpenClaw directly on your host OS. A Docker container (or other sandbox) is your primary security boundary. If the agent gets tricked into running rm -rf / or exfiltrating files, the blast radius is the container — not your machine. This isn't optional; it's the minimum.

3. Run as non-root, no exceptions. Root inside the container means a prompt injection can install packages, modify system files, or escalate privileges. Build your image with root, then drop to an unprivileged user (USER node) before the agent starts. Every security guide agrees on this one.

4. Mount only what you'd be okay losing. I mount a single Google Drive folder. Not my home directory. Not the Docker socket. Not /tmp. Every mounted path is a path the agent can read, write, or leak. Ask yourself: if this folder were posted publicly tomorrow, would I panic? If yes, don't mount it.

5. Treat every skill like untrusted code. ClawHub is an open marketplace — anyone can publish a skill. Some request shell access, file system writes, or network calls. Install the openclaw-skill-vetter skill to review before installing. Read the SKILL.md yourself. If a skill asks for more than it should need, skip it.

6. Prompt injection is real and unsolvable — plan accordingly. Researchers at Giskard demonstrated that malicious instructions hidden in web pages, emails, or documents can hijack OpenClaw's behaviour. There's no perfect fix. What you can do: restrict tool access, constrain behaviour via SOUL.md, use mention-only in group chats (so other users' messages can't inject), and never give the agent access to credentials it doesn't need.

7. Lock down who can talk to your agent. Use pairing or allowlist for DMs — never open on an internet-facing instance. Require @mention in group chats. Every message your agent processes is a potential injection vector. Fewer inputs = smaller attack surface.

8. Keep secrets out of the image and out of reach. API keys and tokens go in environment variables or Docker named volumes — never in the Dockerfile. Infostealers are already targeting OpenClaw config files specifically. If your image is ever leaked or pushed to a registry accidentally, it should contain zero credentials.

9. Use a separate account for OAuth. I use a dedicated Google account for Gemini CLI auth — not my primary account with 15 years of Gmail, Drive, and Photos. Google has permanently banned accounts for OpenClaw-related ToS violations. Even if you're careful, why risk your main account? The same applies to X, Notion, and any other integration.

10. Accept that "secure enough" is a moving target. Five CVEs were published for OpenClaw in one week in January 2026. Researchers keep finding new prompt injection vectors. The security model of agentic AI is fundamentally different from a chatbot — it acts, not just talks. Stay updated, rotate credentials, audit your skills, and revisit your setup regularly. The threat model evolves faster than your config.

What I'd Still Improve

  • Outbound network filtering. The container has full internet access. Ideally I'd restrict egress to only the APIs it needs (Telegram, Google, X, Notion) and block everything else. A tool like Little Snitch on macOS or iptables rules inside the container could do this.
  • Encrypted memory at rest. The agent's memory lives on Google Drive (encrypted by Google), but I'd prefer encrypting it locally before sync.
  • Credential rotation on a schedule. Right now tokens get rotated when something breaks. It should be proactive.
  • Ephemeral browser profiles. The Chromium instance inside the container should wipe its profile after each session to prevent cookie/session accumulation.

Is It Worth It?

Absolutely. After a few weeks of use, the agent genuinely knows my workflow. It remembers my projects, my preferences, how I like things formatted. It drafts messages in my tone. It checks my X timeline and summarises what matters. It manages Notion pages without me opening the app.

But none of that would matter if I didn't trust the box it runs in. The convenience of an AI assistant that knows everything about you is only valuable if you're confident that knowledge stays where it should.

Take the time to set it up properly. Your future self will thank you.
