Julien Avezou

Turn Your Laptop Into an AI Agent (Free OpenClaw + Telegram Setup)

I have been following OpenClaw's development but hadn't properly played around with it until now. The FOMO was too strong, so I decided to experiment with it in a very low-stakes context and share the experiment with you.

No APIs. No subscriptions. Just my machine.


Setting the scene

For those living under a rock these past months, let's start by defining what OpenClaw is.
Only kidding: if you don't know what OpenClaw is, that's totally fine; things move fast these days and it's hard to keep up with everything. And if you do already know what it is, here is a refresher:

OpenClaw is a local AI agent runtime that connects LLMs to real-world tools, channels (like Telegram), and automation, letting you build autonomous, personalized AI systems that run on your own machine.

OpenClaw is seen as revolutionary because it turns an LLM into a programmable system rather than just a chatbot, through agents, tool integrations, automations, native channels, and more.

Now that we understand what we are working with, let's set some goals, a target architecture and structure for this tutorial.

My goals for this experiment:

  • Become familiar with the basics of OpenClaw ✅
  • Free setup for low stakes experimentation ✅
  • Local models only ✅
  • Isolation via VM for security ✅
  • Share this experiment publicly to spread knowledge ✅
  • Have fun and learn something new ✅

The flow for this tutorial:

User in Telegram
   ↓
Telegram bot/channel
   ↓
OpenClaw native Telegram channel
   ↓
bookbot agent
   ↓
Ollama on host Mac
   ↓
Mistral
   ↓
Response sent back to Telegram

Architecture diagram:

                           ┌──────────────────────────┐
                           │       Telegram App       │
                           │     (user messages)      │
                           └────────────┬─────────────┘
                                        │
                                        │ native Telegram channel
                                        ▼
                    ┌──────────────────────────────────────────┐
                    │          OpenClaw Gateway (VM)           │
                    │                                          │
                    │  Ubuntu VM in UTM                        │
                    │  - OpenClaw                              │
                    │  - native Telegram integration           │
                    │  - bookbot agent                         │
                    │  - cron / scheduling experiments         │
                    └────────────┬─────────────────────────────┘
                                 │
                                 │ model requests
                                 ▼
                 ┌────────────────────────────────────┐
                 │      Ollama on macOS host          │
                 │                                    │
                 │  - local runtime                   │
                 │  - mistral:latest                  │
                 │  - exposed to VM via local IP      │
                 └────────────┬───────────────────────┘
                              │
                              │ local inference
                              ▼
                 ┌────────────────────────────────────┐
                 │       Open source local model      │
                 │            (Mistral 7B)            │
                 └────────────────────────────────────┘

Having the model run on macOS keeps it fast, while having the agent logic inside the VM isolates it for better security:

┌─────────────────────────────────────────────────────────────────┐
│                        MacBook Pro (host)                       │
│                                                                 │
│  ┌──────────────────────────────┐   ┌─────────────────────────┐ │
│  │   Ollama                     │   │   UTM                   │ │
│  │   - mistral:latest           │   │   Ubuntu VM             │ │
│  │   - OLLAMA_HOST=0.0.0.0      │   │   - OpenClaw            │ │
│  │   - port 11434               │   │   - Telegram channel    │ │
│  └───────────────┬──────────────┘   │   - bookbot agent       │ │
│                  │                  └──────────┬──────────────┘ │
│                  └──────── local network ──────┘                │
└─────────────────────────────────────────────────────────────────┘

Resources used:

  • VM: 6GB RAM, 50GB disk
  • Host for Ollama: 6–8GB free RAM + ~15GB disk (model + cache)
  • Extra disk buffer: ~10GB
  • Model: one local 7B model

Model choice -> Mistral:

  • Low memory (~5GB in Ollama)
  • Fast on Apple Silicon
  • Stable for agent loops
  • Good enough reasoning + coding

Security:

  • Inside VM:
    • No shared folders initially
    • No SSH keys mounted
    • Use limited permissions
  • Between VM ↔ host:
    • Only expose the Ollama port (11434)
    • Keep it restricted to the local network

I am being conservative here with the setup to avoid my machine crashing while I work on my everyday tasks, i.e. with browser tabs and other apps running in parallel.

However, feel free to tweak this setup according to your own preferences and available resources.

For reference, I am running this experiment on my MacBook Pro M4 with 24GB of RAM and a 1TB SSD.

Let's begin!


Optimising available resources

Before starting on the actual OpenClaw setup, I wanted to optimise my local resources.

1. Free SSD space

I personally found some applications I could easily delete to free up storage. I took advantage of this to do a spring clean and freed up more than enough space!

2. Free RAM

Check Activity Monitor

My setup wasn't ideal.

Chrome was the biggest culprit!

Some tips to help reduce memory usage:

  • Chrome → Settings → Performance → Turn on Memory Saver
  • Kill tabs: keep fewer than 10 active tabs
  • Remove inactive extensions
  • Quit background apps you aren't actively using
  • Clear caches occasionally (monthly or before heavy workloads)
  • Restart your Mac to clear any memory leaks

By following these steps, I freed up roughly 4GB of memory.
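If you prefer the terminal over Activity Monitor, these standard macOS commands give the same memory picture (generic diagnostics, not specific to this setup):

```shell
# one-shot snapshot of physical memory usage (macOS top syntax)
top -l 1 | grep PhysMem

# page-level statistics (free, active, wired, compressed pages)
vm_stat
```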


Install tools

  • Install Ollama (on host)
brew install ollama

# start the server (leave this running in its own terminal tab)
ollama serve

# in another tab, download the model
ollama pull mistral

# test it interactively
ollama run mistral
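One thing the steps above don't show: by default, ollama serve listens on localhost only, so the VM won't be able to reach it. Ollama's standard OLLAMA_HOST environment variable changes the bind address, matching the OLLAMA_HOST=0.0.0.0 noted in the architecture diagram:

```shell
# bind Ollama to all interfaces so the VM can reach port 11434
# (0.0.0.0 exposes it to your whole local network, so firewall accordingly)
OLLAMA_HOST=0.0.0.0 ollama serve
```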

  • Setup VM
    • Install UTM
    • Download Ubuntu ARM64 (downloading via the UTM Gallery worked for me)

  • Set the resources: Memory 6144 MiB (6GB), CPU 4 cores

  • Start the VM

  • Clone OpenClaw and install pre-dependencies (in VM)

sudo apt update && sudo apt upgrade -y
sudo apt install -y git curl build-essential

curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.3/install.sh | bash
export NVM_DIR="$HOME/.nvm"
. "$NVM_DIR/nvm.sh"
nvm install 22
nvm use 22
node -v
npm -v

corepack enable
corepack prepare pnpm@latest --activate
pnpm -v

git clone https://github.com/openclaw/openclaw.git
cd ~/openclaw

pnpm install
pnpm build
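Before onboarding, it's worth confirming from inside the VM that the host's Ollama is reachable. The /api/tags endpoint is a standard Ollama route that lists pulled models (replace 192.168.x.x with your Mac's IP):

```shell
# from inside the VM: should return JSON that includes mistral
curl http://192.168.x.x:11434/api/tags
```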

  • Install OpenClaw
# Find your Mac's IP to use as the Ollama base URL during OpenClaw onboarding
# (run this in a terminal on the host)
ifconfig | grep inet
# look for something like: 192.168.x.x

pnpm openclaw onboard --install-daemon

During onboarding, select Ollama as the model provider, using your Mac's IP in the base URL (e.g. http://192.168.x.x:11434) and mistral as the model.

Skip the rest of the remaining steps for now: channels, search providers, skills, hooks, hatch mode.

  • Test it manually after installation
pnpm openclaw agent --agent main --message "Say hello in one sentence." --thinking low

If everything works, you'll get a response back from the agent!

The OpenClaw dashboard is useful for monitoring status and logs. You can access it on port 18789.

  • Install the Telegram app on your device and create your bot using @BotFather
/start
/newbot

After the new bot is created, BotFather will provide you with a bot token. Keep it safe, as we will need it later when configuring OpenClaw.
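Since the token ends up in plain text in the OpenClaw config file, one sensible precaution (assuming the ~/.openclaw/openclaw.json path used later in this tutorial) is to lock the file down to your user:

```shell
# make sure the config file exists, then restrict it to your user only
mkdir -p ~/.openclaw && touch ~/.openclaw/openclaw.json
chmod 600 ~/.openclaw/openclaw.json
```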


Configure the experiment

From this point on, I will proceed with my own business logic for OpenBooks. But feel free to implement the business logic for your own application.

Need inspiration for your own experiments?
This useful GitHub repo showcases many potential implementations made possible with OpenClaw: https://github.com/hesamsheikh/awesome-openclaw-usecases

Step 1 — Configure native Telegram channel

Now that OpenClaw is installed and your bot is created via @BotFather, let’s connect Telegram.

Add your Telegram bot token to your OpenClaw config:

~/.openclaw/openclaw.json

"channels": {
  "telegram": {
    "enabled": true,
    "botToken": "INSERT_YOUR_BOT_TOKEN"
  }
}

Restart the gateway:

pnpm openclaw gateway restart

Send a message to your bot in Telegram. You should now get a response from OpenClaw.

Step 2 — Create BookBot agent + workspace

Instead of using the default main agent, we create a dedicated agent for our experiment.

pnpm openclaw agents add bookbot

Bind Telegram to this agent:

pnpm openclaw agents bind --agent bookbot --bind telegram
pnpm openclaw agents bindings

# You should see: bookbot <- telegram

Now configure the agent behavior via its workspace:

# SOUL.md

You are BookBot.

You are a Telegram book recommendation assistant.

Recommend 3 books in this format:

📚 Recommendations

1. Title — Author  
Why: short reason  

2. Title — Author  
Why: short reason  

3. Title — Author  
Why: short reason  

Rules:
- No long intro
- No tool descriptions
- Keep it concise
# AGENTS.md

Interpret the following as recommendation requests:

- recommend me a book
- what should I read next
- books for developers
- books like [X]

If the request is clear → recommend 3 books  
If vague → ask 1 short clarifying question

Restart the gateway and test in Telegram:

Recommend 3 books for software engineers

Step 3 — Test scheduling (simple cron)

Before doing anything complex, let’s just prove scheduling works.

Create a one-shot reminder:

pnpm openclaw cron add \
  --name "Test reminder" \
  --at "10m" \
  --message "📚 Reading reminder: ask BookBot for recommendations!" \
  --announce \
  --channel telegram \
  --to "INSERT_YOUR_CHAT_ID"
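The --to flag needs your numeric chat ID, which this tutorial doesn't otherwise show how to find. One way, assuming you have already messaged your bot, is the Telegram Bot API's getUpdates method (requires curl and jq; BOT_TOKEN is the token from @BotFather):

```shell
# list chat IDs appearing in your bot's recent updates
curl -s "https://api.telegram.org/bot${BOT_TOKEN}/getUpdates" \
  | jq '.result[].message.chat.id'
```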

List jobs:

pnpm openclaw cron list

Test immediately:

pnpm openclaw cron run <JOB_ID>


You should receive a Telegram message confirming that cron works!

Step 4 — Daily recurring cron job

Now convert this into a daily habit (adapt the timezone to yours):

pnpm openclaw cron add \
  --name "Daily reading reminder" \
  --cron "0 9 * * *" \
  --tz "America/Toronto" \
  --message "📚 Daily reminder: ask BookBot for 3 new book recommendations." \
  --announce \
  --channel telegram \
  --to "INSERT_YOUR_CHAT_ID"

Test it:

pnpm openclaw cron run <JOB_ID>

🎉 We are done!

You now have:

  • a local AI agent
  • connected to Telegram
  • powered by a local model
  • with daily automation

All running on your own machine, for free.


Start and stop the experiment

Note: the setup runs locally, meaning that if your machine or VM shuts off, OpenClaw stops working.

Here are some tips to avoid this:

On host:

# prevent the Mac from idle-sleeping while this command runs (Ctrl+C to release)
caffeinate

Prevent the VM from going to sleep by disabling the systemd sleep targets:

sudo systemctl mask sleep.target suspend.target hibernate.target hybrid-sleep.target

Start the OpenClaw gateway and detach using tmux (in VM):

sudo apt install -y tmux

tmux new -s openclaw

cd ~/openclaw
openclaw gateway start

Press CTRL + B, then D to detach

Stop the OpenClaw gateway (in VM):

tmux attach -t openclaw
openclaw gateway stop

On your host, don't forget to stop serving the model via Ollama and to shut down your VM when not in use, to free up resources.
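If you launched ollama serve in a terminal, Ctrl+C is enough to stop it; if it's running in the background, something along these lines works (pkill matches the full command line, so check what it will hit first):

```shell
# show what would match before killing anything (empty if Ollama isn't running)
pgrep -fl "ollama serve" || true

# stop the background Ollama server
pkill -f "ollama serve" || true
```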


Troubleshooting

  • To avoid unnecessary model calls and background noise, and unless you explicitly need it, I recommend disabling the heartbeat via the OpenClaw config and relying on an explicit cron job:
...
"heartbeat": {
  "every": "0m"
}
...
  • Move from model-driven skills to tool-dispatched skills so they bypass the model and call a deterministic local tool instead. This improves output accuracy; getting deterministic outputs felt like a struggle with OpenClaw in general, and this helps.

  • Create a dedicated agent with its own routing binding rather than hijacking the main agent for more specific logic


Glossary of OpenClaw commands I found useful

# Gateway
pnpm openclaw gateway start
pnpm openclaw gateway restart

# Agents
pnpm openclaw agents add bookbot
pnpm openclaw agents bind --agent <AGENT> --bind <CHANNEL>
pnpm openclaw agents bindings

# Cron
pnpm openclaw cron add ...
pnpm openclaw cron list
pnpm openclaw cron run <JOB_ID>

# Debugging
journalctl --user -u openclaw-gateway.service -f
pnpm openclaw doctor --fix

Future improvements

  • Improve functionality: add a memory capability to unlock insights like
    “Compare this book to the last 3 books I read”
    “Turn my reading history into themes and recommendations”

  • Consider a hybrid strategy: less resource-intensive during the day while I work on other tasks in parallel, and higher-performance at night while I am asleep, for longer experiments.


Discussions

  • What are you using OpenClaw for?
  • What does your setup look like?
  • Any tips or hacks with OpenClaw to share? I have only just started exploring this tool :)

🦞 Thanks for following along, I hope you enjoyed this tutorial! 🦞
