Hieu Pham
How to Run NemoClaw with a Local LLM & Connect to Telegram (Without Losing Your Mind)

I just spent a full day wrestling with NemoClaw so you don’t have to.

NemoClaw is an incredible agentic framework, but because it is still in beta, it has its fair share of quirks, undocumented networking hurdles, and strict kernel-level sandboxing that will block your local connections by default.

My goal was to run a fully private, locally hosted AI agent using a local LLM that I could text from my phone via Telegram. Working with an RTX 4080 and its strict 16GB VRAM limit meant I had to optimize my model choice and bypass a maze of container networks to get everything talking.

If you are trying to ditch the cloud and run OpenClaw locally on WSL2, here is the exact step-by-step fix to get your agent online.


Part 1: Escaping the Sandbox (Connecting the Local LLM)

By default, NemoClaw runs your agent inside a nested Kubernetes (k3s) container within WSL2. If you try to point it to your local Ollama instance using localhost or the default Docker bridge, the sandbox's strict egress policies will hit you with an endless stream of HTTP 503 errors.

To fix this, we have to route the traffic out the "front door" via your primary WSL network interface.

1. Force Ollama to Listen on All Interfaces

Stop your background Ollama service and force it to broadcast to your local WSL network:

```bash
sudo systemctl stop ollama
OLLAMA_HOST=0.0.0.0 ollama serve
```

2. Grab Your Primary WSL IP

In a new terminal tab (outside the OpenShell sandbox), grab your virtual machine's IP:

```bash
export WSL_IP=$(ip -4 addr show eth0 | grep -Po 'inet \K[\d.]+')
```
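If that one-liner comes back empty on your distro (not every WSL image names the interface `eth0`), a slightly more defensive variant, with `hostname -I` as an assumed fallback, looks like this:

```bash
# Try eth0 first; if the interface has a different name, fall back to
# the first address reported by hostname -I.
get_wsl_ip() {
  ip -4 addr show eth0 2>/dev/null | grep -Po 'inet \K[\d.]+' | head -n1 \
    || true
}
WSL_IP=$(get_wsl_ip)
[ -n "$WSL_IP" ] || WSL_IP=$(hostname -I 2>/dev/null | awk '{print $1}')
echo "Using WSL IP: ${WSL_IP:-unknown}"
```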

3. Wire the OpenShell Gateway

Delete the broken default provider and recreate it pointing to your WSL IP, then set your inference route (I used qwen3.5:9b, which fits comfortably within my 16GB of VRAM):

```bash
openshell provider delete ollama

openshell provider create \
  --name ollama \
  --type openai \
  --credential OPENAI_API_KEY=empty \
  --config OPENAI_BASE_URL=http://$WSL_IP:11434/v1

openshell inference set --provider ollama --model qwen3.5:9b --no-verify
```
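To confirm the gateway can actually reach your model, a quick hedged check against Ollama's OpenAI-compatible `/v1/models` endpoint (assuming `WSL_IP` is still exported from step 2; the 127.0.0.1 default is only a fallback for running this standalone):

```bash
# List the models Ollama exposes through its OpenAI-compatible API.
# If this times out, the sandbox still can't see your WSL interface.
WSL_IP="${WSL_IP:-127.0.0.1}"
MODELS=$(curl -s --max-time 5 "http://$WSL_IP:11434/v1/models" || true)
echo "${MODELS:-Ollama not reachable at $WSL_IP:11434}"
```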

4. Clear the Stale Locks

If your agent crashed during setup, clear the locked session files inside the sandbox, or your next prompt will time out:

```bash
rm /sandbox/.openclaw-data/agents/main/sessions/*.lock
```
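If you'd rather not glob-delete blindly, here is a slightly safer sketch of the same cleanup; it only touches `*.lock` files and tells you when the default install path isn't where we expect it:

```bash
# Remove only lock files under the default sessions directory, and
# report how many were cleared rather than failing silently.
SESSIONS=/sandbox/.openclaw-data/agents/main/sessions
if [ -d "$SESSIONS" ]; then
  REMOVED=$(find "$SESSIONS" -name '*.lock' -print -delete | wc -l)
  echo "Cleared $REMOVED stale lock(s)"
else
  REMOVED=0
  echo "No sessions directory at $SESSIONS - nothing to clean"
fi
```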

Part 2: The Telegram Integration

NemoClaw has a built-in Telegram bridge, but running it against the default Nemotron cloud model is notoriously unstable; in my testing, the connection was dropped repeatedly.

Switching the "brain" over to the local LLM we just configured fixes this pipeline entirely.

1. Get Your Token

Message the BotFather on Telegram to create a new bot and grab your HTTP API token.

2. Export and Start

On your host WSL terminal (not inside the sandbox), pass the token to the service manager:

```bash
export TELEGRAM_BOT_TOKEN="your_token_here"
nemoclaw start
```
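Before blaming the bridge for a failed start, it's worth catching copy/paste mistakes locally. BotFather tokens follow a `<numeric id>:<secret>` shape, so a rough format check (the token below is a fake placeholder, not a real credential) can save you a restart cycle:

```bash
# Rough local validation of the Telegram token format:
# a numeric bot id, a colon, then a long alphanumeric secret.
TELEGRAM_BOT_TOKEN="${TELEGRAM_BOT_TOKEN:-123456789:AAfakeFAKEfakeFAKEfakeFAKEfakeFAKEf}"
if echo "$TELEGRAM_BOT_TOKEN" | grep -Eq '^[0-9]+:[A-Za-z0-9_-]{30,}$'; then
  TOKEN_OK=yes
  echo "Token format looks valid"
else
  TOKEN_OK=no
  echo "Token format looks wrong - re-check what BotFather sent you"
fi
```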

💡 Troubleshooting Tip: If nemoclaw status says the bridge is running but it keeps crashing, you likely have a stale PID file. Run nemoclaw stop, clear the leftover process with kill -9 <PID>, and try starting it again.
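The check in that tip can be sketched as a small script. The PID-file path here is a hypothetical guess; adjust it to wherever your install actually writes its state:

```bash
# Probe whether the recorded bridge PID is still alive. kill -0 sends
# no signal; it only tests whether the process exists.
PIDFILE="${NEMOCLAW_PIDFILE:-$HOME/.nemoclaw/bridge.pid}"
if [ -f "$PIDFILE" ]; then
  PID=$(cat "$PIDFILE")
  if kill -0 "$PID" 2>/dev/null; then
    STATUS="alive"
  else
    STATUS="stale"
    rm -f "$PIDFILE"   # clear the stale file so the next start can rebind
  fi
else
  STATUS="missing"
fi
echo "PID file status: $STATUS"
```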


The Result

Once that bridge is live, you have a completely private, localized AI agent running on your own GPU that you can text from anywhere in the world.

⚠️ Wait, is your agent ghosting you or trapped in a loop?

If you got the bridge running successfully, but your AI is taking forever to respond, spitting out raw JSON, or stuck in an infinite retry loop, you have likely hit the hidden context window trap.

I wrote a complete follow-up guide on exactly why this happens. Check out Why Your Custom NemoClaw LLM Takes Forever to Respond (Or Completely Ignores You) to learn how to permanently fix the truncation error by building a custom Modelfile.

Have you experimented with NemoClaw or OpenShell yet? Let me know in the comments if you've hit any other weird WSL networking snags!
