DEV Community

Rob

Posted on • Originally published at vibescoder.dev

Installing OpenClaw on the Homelab

I've been running Coder workspaces on my homelab for a while — Qwen3.5-35B on llama.cpp, RTX 5090, the whole stack. But the AI assistants were all inside terminal sessions. I wanted something I could message from my phone, from Discord, from anywhere. Something that talks to the local LLM on my own hardware and doesn't phone home to anyone's cloud.

OpenClaw is that thing. It's an open-source personal AI assistant with 367K GitHub stars, a plugin ecosystem, and connectors for every chat platform you can name. The pitch: "Your own personal AI assistant. Any OS. Any Platform."

Here's how I got it running on my Linux workstation, wired to a local Qwen3.5-35B via llama.cpp, talking through Discord. It took an afternoon. It should have taken 30 minutes. The difference was five config mistakes that produced zero useful error messages.

The Hardware

| Resource | Spec |
| --- | --- |
| CPU | AMD Ryzen 9 9950X3D — 16 cores / 32 threads |
| RAM | 64 GB |
| GPU | NVIDIA RTX 5090 — 32 GB VRAM |
| OS | Ubuntu 24.04 |
| LLM | Qwen3.5-35B-A3B via llama.cpp on port 8080 |
| Embeddings | nomic-embed-text-v1.5 via llama.cpp on port 8084 |

The LLM runs entirely on the GPU. No RAM impact on anything else.

1. Installation: One Curl

curl -fsSL https://openclaw.ai/install.sh | bash

That's it. The script detects Ubuntu, installs Node if needed, drops the openclaw binary, and launches an onboarding wizard. The whole thing took about 90 seconds.

2. Pointing at the Local LLM

The wizard asks for a model provider. The list has Anthropic, Google, OpenAI, and two dozen cloud services. Scroll past all of them and pick Custom Provider.

OpenClaw wizard showing the model/auth provider selection screen

The wizard needs three things:

  • Base URL: http://localhost:8080/v1
  • API key: Anything — llama-server doesn't check it, but the field can't be empty
  • Model ID: It auto-detects from the /v1/models endpoint

I had two llama-server instances running and had to figure out which was which:

curl -s http://localhost:8080/v1/models | python3 -c "import sys,json; [print(m['id']) for m in json.load(sys.stdin)['data']]"
# Qwen3.5-35B-A3B-UD-Q4_K_XL.gguf

curl -s http://localhost:8084/v1/models | python3 -c "import sys,json; [print(m['id']) for m in json.load(sys.stdin)['data']]"
# nomic-embed-text-v1.5.f16.gguf

Port 8080 is the chat model. Port 8084 is embeddings. OpenClaw wants the chat model.
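If you juggle more than two llama-server instances, the one-liner above is worth wrapping in a helper. This is a small sketch; the response shape is the standard OpenAI-compatible `/v1/models` payload (`{"data": [{"id": ...}]}`) that llama-server returns:

```python
import json
import urllib.request

def model_ids(payload: dict) -> list[str]:
    """Extract model ids from an OpenAI-compatible /v1/models payload."""
    return [m["id"] for m in payload["data"]]

def list_model_ids(base_url: str) -> list[str]:
    """Fetch /v1/models from an OpenAI-compatible server and return its model ids."""
    with urllib.request.urlopen(f"{base_url}/v1/models") as resp:
        return model_ids(json.load(resp))

# list_model_ids("http://localhost:8080")  -> the chat model
# list_model_ids("http://localhost:8084")  -> the embedding model
```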

The wizard verified the connection and asked for an Endpoint ID — just a label for the config. I accepted the default custom-localhost-8080.

OpenClaw wizard showing the endpoint configuration

Use localhost, not your Tailscale IP. OpenClaw runs on the same machine as llama-server. Routing through Tailscale adds latency and creates a dependency on the Tailscale daemon being up for purely local traffic.

3. Setting Up the Discord Bot

The wizard asks which chat channel to connect. I picked Discord — it's the most popular OpenClaw channel, which means the most community support and troubleshooting threads.

Creating the Discord bot takes five steps in the Developer Portal:

Step 1: Create the application. Click "Build a Bot" on the welcome screen, then "New Application." I named mine OpenClaw.

Discord Developer Portal welcome screen

Step 2: Get the bot token. Go to the Bot tab, click "Reset Token," copy the token. Paste it into the OpenClaw wizard when prompted.

Step 3: Enable Message Content Intent. Same Bot tab, scroll to "Privileged Gateway Intents," toggle on Message Content Intent. Without this, the bot can see that messages exist but can't read what they say.

Step 4: Invite the bot to your server. The OAuth2 URL Generator in the Developer Portal can be finicky. I skipped it and built the invite URL manually:

https://discord.com/oauth2/authorize?client_id=YOUR_APP_ID&scope=bot&permissions=66560

Permission 66560 decodes to View Channels (1024) + Read Message History (65536) in Discord's permission bitfield; Send Messages (2048) I ended up granting through the bot's role afterward. Replace YOUR_APP_ID with the Application ID from the General Information tab.
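You can sanity-check any permissions integer by decoding it against Discord's documented permission bit flags. A minimal sketch covering just the three permissions this bot cares about:

```python
# Discord permission bit flags (values from Discord's API documentation)
PERMISSIONS = {
    "View Channels": 1 << 10,          # 1024
    "Send Messages": 1 << 11,          # 2048
    "Read Message History": 1 << 16,   # 65536
}

def decode_permissions(value: int) -> list[str]:
    """Return the names of the known permission bits set in `value`."""
    return [name for name, bit in PERMISSIONS.items() if value & bit]

print(decode_permissions(66560))  # ['View Channels', 'Read Message History']
```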

Discord OAuth2 page showing scopes selection

Step 5: Create a server. I didn't have a Discord server. The invite page showed "No items to show." Had to go back to Discord, click the + button in the sidebar, create a new server called HomeLabOpenClaw, then revisit the invite URL.

Discord bot invite page showing no servers

4. Finishing the Wizard

Back in the terminal, the wizard asked a few more questions:

  • Channel access: I picked "Open (allow all channels)" — it's my personal server, no reason to maintain an allowlist
  • Search provider: DuckDuckGo — free, no API key, good enough for a first run
  • Skills: Said yes, let it enable the 10 eligible ones
  • Hooks: Skipped — not essential for getting started
  • Hatch: "Hatch in Terminal" — starts the gateway right there so you can see the logs

OpenClaw wizard hatch screen

The gateway started, the Discord plugin connected, and the bot appeared online in my server.

5. The Pairing Dance

I messaged the bot and got: "OpenClaw: access not configured." With a pairing code.

Discord DM showing pairing code from carrybot

OpenClaw's DM policy defaults to pairing — unknown senders get a code instead of a response. You approve them from the terminal:

openclaw pairing approve discord YOUR_PAIRING_CODE

After that, DMs worked perfectly. The bot responded, the 5090 spun up, responses came back. Great.

Then I tried a server channel and everything broke.

6. The Silent Channel Problem

For the next two hours, this was my experience: I'd @carrybot in a server channel, the bot would react with an emoji, show "typing..." for a few seconds, and then... nothing. No response. No error in Discord. The 5090 was clearly working — I could hear the fans.

Discord channel showing @carrybot messages with no responses

DMs worked. Channels didn't. Here's every wrong turn I took and the actual fix.

Wrong Turn 1: "It's a permissions issue"

I checked the bot's Discord role permissions. Almost nothing was toggled on. I enabled Send Messages, Read Message History, View Channels. Restarted the gateway. Still nothing.

Verdict: The permissions were wrong and needed fixing, but they weren't the root cause. The bot was already generating responses — it just wasn't posting them.

Wrong Turn 2: "It's a context window issue"

The bot occasionally showed this error:

Context limit exceeded error in Discord

The OpenClaw wizard had set contextWindow: 4000 and maxTokens: 4096 in the model config. My llama-server has a 131K context window. The wizard didn't auto-detect this from the Custom Provider endpoint.

I edited ~/.openclaw/openclaw.json and changed:

{
  "contextWindow": 131072,
  "maxTokens": 81920,
  "reasoning": true
}
  • contextWindow: 131072 matches llama-server's --ctx-size 131072
  • maxTokens: 81920 matches llama-server's -n 81920 (max output tokens)
  • reasoning: true because Qwen3.5 runs with --reasoning-budget 8192

This fixed the context errors, but channels still didn't work.

Wrong Turn 3: "It's the memory plugin"

The logs showed tool:memory_search:started hanging indefinitely. Qwen3.5 kept trying to call a memory_search tool before responding, and it never completed.

openclaw config set plugins.entries.memory-core.enabled false
openclaw gateway restart

This fixed the tool-call hangs in DMs. Channels still didn't work.

Wrong Turn 4: "It's a mention detection issue"

Early on, I was typing @OpenClaw in channels. The logs showed reason: "no-mention" — the bot is mention-gated in group chats and I was mentioning the wrong name. The Discord application is "OpenClaw" but the bot username is "carrybot" (I renamed it in the Developer Portal).

You have to use the actual Discord mention — type @ and select the bot from the autocomplete. Typing @carrybot as plain text doesn't create a real mention.
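Under the hood, a real mention arrives in the message content as `<@USER_ID>` (or `<@!USER_ID>` when a nickname is set), not as literal text. A minimal check, assuming you know the bot's user ID:

```python
import re

def mentions_bot(content: str, bot_user_id: int) -> bool:
    """True if `content` contains a real Discord mention of the bot.
    Real mentions serialize as <@ID> or <@!ID>; plain '@name' text does not count."""
    return re.search(rf"<@!?{bot_user_id}>", content) is not None

assert mentions_bot("<@123456789> hello", 123456789)       # real mention
assert not mentions_bot("@carrybot hello", 123456789)      # plain text, ignored
```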

This got the bot to actually process channel messages. But it still wasn't responding.

The Actual Fix: visibleReplies

After two hours, I found it. During the wizard's openclaw doctor step, it had auto-applied a config change:

"messages": {
  "groupChat": {
    "visibleReplies": "message_tool"
  }
}

This tells OpenClaw to use the message tool for posting replies in group chats / server channels. But the message tool wasn't available — I'd disabled memory-core and the tool policy didn't include it. So the bot would generate a perfect response, try to send it via a tool that doesn't exist, and silently fail.

The fix:

openclaw config set messages.groupChat.visibleReplies "automatic"
openclaw gateway restart

One config key. Two hours of debugging. Zero error messages in the logs.

7. The Working Config

Here's the final ~/.openclaw/openclaw.json model section that actually works:

{
  "models": {
    "providers": {
      "qwen-local": {
        "baseUrl": "http://localhost:8080/v1",
        "api": "openai-completions",
        "apiKey": "sk-none",
        "models": [{
          "id": "Qwen3.5-35B-A3B-UD-Q4_K_XL.gguf",
          "contextWindow": 131072,
          "maxTokens": 81920,
          "reasoning": true
        }]
      }
    }
  }
}

And the critical non-obvious settings:

{
  "messages": {
    "groupChat": {
      "visibleReplies": "automatic"
    }
  },
  "plugins": {
    "entries": {
      "memory-core": { "enabled": false }
    }
  },
  "agents": {
    "defaults": {
      "compaction": {
        "reserveTokensFloor": 40000
      }
    }
  }
}

8. Making It Stick

Install the systemd service so the gateway survives reboots:

openclaw gateway install

Set yourself as the command owner so you can run privileged commands:

openclaw config set commands.ownerAllowFrom '["discord:YOUR_DISCORD_USER_ID"]'

Verify everything:

openclaw --version          # confirm CLI
openclaw doctor             # check for config issues
openclaw gateway status     # verify gateway is running

What I Learned

The wizard's defaults are for cloud providers, not local LLMs. contextWindow: 4000 is a safe default for API providers that charge per token. It's a crippling default for a local model with 131K context. If you're running a Custom Provider, you must manually set contextWindow and maxTokens to match your server's actual limits.
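A quick sanity check catches this mismatch before it bites. This is a sketch, not part of OpenClaw: the config path and key names follow the openclaw.json shown above, and the server limits are whatever you actually passed to llama-server:

```python
import json
from pathlib import Path

# What llama-server was launched with (--ctx-size / -n); adjust to your flags.
SERVER_CTX = 131072
SERVER_MAX_OUT = 81920

def check_model_config(path: Path) -> list[str]:
    """Flag configured models whose contextWindow/maxTokens disagree with the server."""
    cfg = json.loads(path.read_text())
    problems = []
    for name, provider in cfg.get("models", {}).get("providers", {}).items():
        for model in provider.get("models", []):
            if model.get("contextWindow", 0) < SERVER_CTX:
                problems.append(f"{name}/{model['id']}: contextWindow "
                                f"{model.get('contextWindow')} < server {SERVER_CTX}")
            if model.get("maxTokens", 0) != SERVER_MAX_OUT:
                problems.append(f"{name}/{model['id']}: maxTokens "
                                f"{model.get('maxTokens')} != server -n {SERVER_MAX_OUT}")
    return problems

# check_model_config(Path("~/.openclaw/openclaw.json").expanduser())
```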

visibleReplies: "message_tool" is a trap. The doctor command auto-applies this "recommended" setting, but it depends on the message tool being available. If you're running a stripped-down config without all the default tools, your bot will silently swallow every group chat reply. The symptom is perfect — the bot reacts, types, generates a response (you can verify in the session files), and then just... doesn't post it. No error. No log line. Nothing.

Discord bot setup has more steps than it should. Between the Developer Portal, the OAuth2 scopes, the Privileged Gateway Intents, the server creation, the role permissions, and the correct mention format — there are at least six places where a single missed toggle produces a silent failure. Document every step. Check every toggle.

Session files are your debugging lifeline. When the logs show nothing, check ~/.openclaw/agents/main/sessions/*.jsonl. The session file showed me the bot was generating perfect responses that were never delivered. Without that, I would have assumed the LLM was broken.
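To sift through those files, a grep-style sketch is enough to prove the model produced a reply. I haven't confirmed the exact schema; the `role` and `content` field names here are assumptions about the JSONL event format, so adjust them to what you actually see in the files:

```python
import json
from pathlib import Path

def assistant_replies(session_file: Path) -> list[str]:
    """Pull assistant-authored text out of a .jsonl session file.
    Field names ('role', 'content') are assumptions about the schema."""
    replies = []
    for line in session_file.read_text().splitlines():
        if not line.strip():
            continue
        event = json.loads(line)
        if event.get("role") == "assistant" and event.get("content"):
            replies.append(event["content"])
    return replies

# for f in Path("~/.openclaw/agents/main/sessions").expanduser().glob("*.jsonl"):
#     print(f.name, len(assistant_replies(f)))
```

If a session file shows assistant replies that never appeared in Discord, the LLM side is fine and the delivery path is where to look.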

Start with DMs, graduate to channels. DMs have a simpler code path — no mention detection, no group chat reply policy, no channel permissions. Get DMs working first, then debug channels as a separate problem.


Files Changed

On the workstation:

  • ~/.openclaw/openclaw.json — model config, context window, reply policy, plugin settings, owner config

Discord:

  • Created Discord application "OpenClaw" with bot user "carrybot"
  • Created Discord server "HomeLabOpenClaw"
  • Enabled Message Content Intent, configured role permissions

Systemd:

  • openclaw-gateway.service — installed via openclaw gateway install

What's Next

The bot works, but it's running Qwen3.5-35B with memory-core disabled and no skills beyond the basics. Next steps:

  1. Re-enable memory. Figure out why memory_search hangs with Qwen3.5's tool call format and fix it — memory is one of OpenClaw's killer features.
  2. Add skills. 43 skills were blocked by missing requirements. Install the useful ones — session-logs, nano-pdf, video-frames.
  3. Try a different local model. Qwen3.5 works but its tool calling may not be fully compatible with OpenClaw's expected format. Worth testing Gemma 4 or another model with native tool support.
  4. Wire up Tailscale access. The gateway listens on localhost:18789. Exposing it on the tailnet means I can hit the dashboard from any device without a Cloudflare tunnel.

By the Numbers

  • 1 curl command to install OpenClaw
  • 131,072 tokens — the context window the wizard set to 4,000
  • 81,920 tokens — max output, matching llama-server's -n flag
  • 2 hours debugging silent channel failures
  • 1 config key (visibleReplies: "automatic") that fixed everything
  • 6 Discord setup steps where a missed toggle means silent failure
  • 0 cloud dependencies — fully local LLM, self-hosted gateway
  • ~500 MB RAM footprint for the OpenClaw gateway (Node.js process)
  • 18 screenshots taken during the debug session
  • 4 sensitive screenshots deleted (contained tokens/credentials)
  • 0 useful error messages for the visibleReplies bug
