OpenClaw's creator, Peter Steinberger, just joined OpenAI. The project will live on as an open-source foundation, but when the person who built the thing moves to a company with its own agent ambitions, it's worth looking at what else is out there.
PicoClaw launched the same week, claims it can run a full AI agent on a $10 Raspberry Pi with less than 10MB of RAM, and it's already sitting at 12K GitHub stars. That's a bold pitch for a framework that's barely a week old.
I wanted to see if it holds up.
So I grabbed the cheapest ARM server Hetzner sells (€3.79/month), installed PicoClaw from scratch, connected it to OpenRouter for LLM access, and wired up a Telegram bot so I can talk to it from my phone.
This is the complete walkthrough of that process, every step from a blank server to a working AI assistant.
Follow me on X for more like this
Prerequisites
Before starting, make sure you have:
- A Hetzner server or any other VPS;
- An OpenRouter account and API key;
- A Telegram account (for bot setup).
Fair warning: I wrote this so that anyone can follow along, even if you've never touched a terminal before. Every command is copy-paste ready. But "easy to follow" doesn't mean "zero context needed." You're spinning up an AI agent on a server you control. You should at least know what a VPS is, what an API key does, and not panic when you see a command line. If that's you, let's go.
Installing Dependencies
First, SSH into your server and install the required build tools. PicoClaw is written in Go, so you'll need make to handle the build process.
apt update
apt install make -y
This installs the make utility, which will orchestrate the compilation process defined in PicoClaw's Makefile.
Cloning and Building PicoClaw
Clone the PicoClaw repository from GitHub:
apt update
apt install git -y
git clone https://github.com/sipeed/picoclaw.git
Navigate into the project directory and install Go dependencies:
cd picoclaw
apt install golang-go -y
make deps
This command downloads and installs all Go modules required by the project (similar to npm install for Node.js or pip install -r requirements.txt for Python).
Once dependencies are installed, compile the source code:
make build
This command compiles all .go files into a single picoclaw binary executable.
Finally, install the binary to your system path:
make install
The make install command copies the compiled binary to /usr/local/bin/ (or similar), making it accessible from anywhere on your system. That's it. Now let's connect it to a model.
Connecting to OpenRouter
PicoClaw needs an LLM provider to function. This guide uses OpenRouter as an AI gateway, which provides access to multiple models through a single API.
Getting Your OpenRouter API Key
- Sign up at https://openrouter.ai/
- Navigate to https://openrouter.ai/settings/keys
- Create a new API key (name it something memorable like "PicoClaw")
- Copy the key immediately, as you won't be able to see it again
Configuring the API Connection
Create the configuration directory and file with your API credentials. Replace YOUR_OPENROUTER_API_KEY with the key you just copied:
mkdir -p /root/.picoclaw
cat > /root/.picoclaw/config.json << 'EOF'
{
"agents": {
"defaults": {
"workspace": "~/.picoclaw/workspace",
"model": "google/gemini-3-pro-preview",
"max_tokens": 8192,
"temperature": 0.7,
"max_tool_iterations": 20
}
},
"providers": {
"openrouter": {
"api_key": "YOUR_OPENROUTER_API_KEY",
"api_base": "https://openrouter.ai/api/v1"
}
}
}
EOF
Security Warning: This configuration stores your API key in plain text. Ensure your server is properly secured with SSH key authentication, firewall rules, and restricted user access. Anyone with file system access can read this key.
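A cheap first step is locking down the file's permissions so only root can read it. A minimal sketch, assuming the config path created above (the mkdir and touch lines are no-ops if the path already exists):

```shell
# Restrict the config so only the owner can read or write it.
# mkdir -p and touch are no-ops when the path already exists,
# so this is safe to run right after the step above.
CONFIG="$HOME/.picoclaw/config.json"
mkdir -p "$(dirname "$CONFIG")"
touch "$CONFIG"
chmod 600 "$CONFIG"
stat -c '%a' "$CONFIG"   # prints 600
```

This doesn't replace SSH hardening or a firewall, but it does stop any non-root user on the box from reading the key.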
Testing the Connection
Verify that PicoClaw can communicate with OpenRouter:
picoclaw agent -m "Hello, are you working?"
This did not work for me on the first try: the binary was installed to /root/.local/bin/picoclaw, and that directory wasn't in my PATH. So I ran:
echo 'export PATH=$PATH:/root/.local/bin' >> ~/.bashrc
source ~/.bashrc
This fixed the issue and I got a response!
If configured correctly, you should receive a response from the AI model as well.
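If the test fails, it helps to rule out PicoClaw itself by calling OpenRouter directly. A hedged sketch using OpenRouter's OpenAI-compatible chat completions endpoint; it only fires once you export your key as OPENROUTER_API_KEY (a variable name chosen here, not something PicoClaw reads):

```shell
# Direct API check, bypassing PicoClaw entirely. Skips harmlessly
# unless OPENROUTER_API_KEY is exported in the current shell.
if [ -n "${OPENROUTER_API_KEY:-}" ]; then
  RESP=$(curl -s https://openrouter.ai/api/v1/chat/completions \
    -H "Authorization: Bearer $OPENROUTER_API_KEY" \
    -H "Content-Type: application/json" \
    -d '{"model": "google/gemini-3-pro-preview",
         "messages": [{"role": "user", "content": "Say hi"}]}')
  echo "$RESP"
else
  echo "OPENROUTER_API_KEY not set; skipping direct API check"
fi
```

If this returns a JSON completion but picoclaw agent still errors, the problem is in the config file, not your key or network.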
Setting Up Telegram Integration
Running PicoClaw from the command line works, but integrating it with Telegram provides a much more convenient interface for daily use.
Creating a Telegram Bot
- Open Telegram and search for @BotFather
- Send the command /newbot
- Follow the prompts to choose a name and username for your bot
- Copy the bot token (format: 123456789:ABCdefGHIjklMNOpqrsTUVwxyz)
Getting Your Telegram User ID
To restrict bot access to only your account:
- Search for @userinfobot on Telegram
- Send it any message
- Copy the numeric ID it returns (e.g., 123456789)
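If @userinfobot is down, the Bot API itself can tell you the ID: message your bot once, then query its getUpdates method. A sketch, guarded so it only runs after you export your token as TELEGRAM_BOT_TOKEN (it also assumes jq, which this guide installs in the next step):

```shell
# Alternative: read your numeric user ID from the Bot API.
# Send your bot any message first, then run this with the token exported.
if [ -n "${TELEGRAM_BOT_TOKEN:-}" ]; then
  IDS=$(curl -s "https://api.telegram.org/bot${TELEGRAM_BOT_TOKEN}/getUpdates" \
    | jq '.result[].message.from.id')
  echo "$IDS"
else
  echo "TELEGRAM_BOT_TOKEN not set; skipping"
fi
```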
Configuring Telegram Access
Install jq for JSON manipulation:
apt install jq -y
Update the PicoClaw configuration to enable Telegram integration. Replace YOUR_TELEGRAM_BOT_TOKEN and YOUR_TELEGRAM_USER_ID with your actual values:
jq '
.channels.telegram = {
"enabled": true,
"token": "YOUR_TELEGRAM_BOT_TOKEN",
"allowFrom": ["YOUR_TELEGRAM_USER_ID"]
}
' /root/.picoclaw/config.json > /tmp/config.json \
&& mv /tmp/config.json /root/.picoclaw/config.json
The allowFrom array restricts who can use your bot. Only Telegram user IDs listed here will be able to interact with your AI agent.
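To convince yourself that the jq filter is a pure merge (it adds the channels block without touching the rest of the config), you can try it on a throwaway file first. A self-contained sketch with dummy values:

```shell
# Dry run of the merge on a scratch copy, so you can inspect the
# result before overwriting the real config.
tmp=$(mktemp -d)
echo '{"providers": {"openrouter": {"api_key": "placeholder"}}}' > "$tmp/config.json"
jq '.channels.telegram = {"enabled": true, "allowFrom": ["123456789"]}' \
  "$tmp/config.json" > "$tmp/merged.json"
cat "$tmp/merged.json"
```

The output should show both the original providers block and the new channels.telegram block, which is exactly what the real command above does in place.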
Starting the Gateway
Launch the PicoClaw gateway to activate the Telegram bot:
picoclaw gateway
Once the gateway is running, open Telegram, navigate to your bot, and click Start. You should now be able to chat with your PicoClaw agent directly through Telegram.
Running PicoClaw as a Background Service
By default, PicoClaw runs in the foreground and stops when you close the terminal or press Ctrl+C. To keep it running permanently, set it up as a systemd service.
Creating the Service File
Create a systemd service definition:
cat > /etc/systemd/system/picoclaw.service << 'EOF'
[Unit]
Description=PicoClaw Gateway
After=network.target

[Service]
ExecStart=/root/.local/bin/picoclaw gateway
Restart=always
RestartSec=5
Environment=HOME=/root

[Install]
WantedBy=multi-user.target
EOF
Enabling and Starting the Service
Reload systemd to recognize the new service:
systemctl daemon-reload
Enable the service to start automatically on boot:
systemctl enable picoclaw
Start the service immediately:
systemctl start picoclaw
Monitoring the Service
Check the service status:
systemctl status picoclaw
View real-time logs:
journalctl -u picoclaw -f
PicoClaw now runs in the background, automatically starts on server reboot, and restarts itself if it crashes.
Your PicoClaw Is Up and Running
You now have a fully functional PicoClaw installation running on your VPS, accessible through Telegram from anywhere. This setup provides a personal AI assistant with persistent conversation history and the flexibility to customize the underlying model through OpenRouter.
About the Experiment: This installation serves as a head-to-head comparison with my OpenClaw instance. While PicoClaw appears more optimized and lightweight on paper, OpenClaw has been my reliable daily driver. I'll be monitoring both over the next few days to evaluate:
- Resource consumption (CPU, RAM)
- Response times and reliability
- Model routing efficiency through OpenRouter
- Overall user experience
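For the resource side, nothing fancy is needed: a small helper that reports a process's resident memory is enough to test the sub-10MB claim. A sketch, demonstrated here on the current shell; on the server, point it at the gateway's PID (e.g. pgrep -x picoclaw):

```shell
# Report a process's resident set size (RSS) in MB.
# Usage on the server: rss_mb "$(pgrep -x picoclaw)"
rss_mb() {
  ps -o rss= -p "$1" | awk '{printf "%.1f MB\n", $1 / 1024}'
}

# Demonstrated on the current shell process:
rss_mb "$$"
```

Note that RSS includes shared pages, so it slightly overstates a process's unique footprint, but it's a fair like-for-like number when comparing PicoClaw against OpenClaw on the same box.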
Follow me on X to keep up with what I do and this experiment's results.
The main wildcard is whether OpenAI's involvement will transform OpenClaw into a proprietary "Claw" product. If that happens, PicoClaw's open-source nature makes it the more sustainable long-term choice for self-hosted AI agents.