No paid API. No cloud bill. Just a free NVIDIA endpoint, an open-source agent framework, and a Telegram bot that now answers my questions.
I've been watching the OpenClaw hype since it blew up in early 2026 — 350k GitHub stars, people buying Mac Minis in bulk to run agents on them. I wanted to try it. But I wasn't going to pay for an API key just to experiment.
So I spent an evening figuring out if a genuinely free setup was possible. Here's what I built, what broke, and what actually worked.
## What Is OpenClaw?
OpenClaw is an open-source AI agent that runs on your machine. Unlike a chatbot in a browser tab, it can run shell commands, read and write files, schedule tasks, browse the web, and talk to you through Telegram or Discord.
It doesn't care which LLM powers it. You point it at any OpenAI-compatible endpoint and it works. That's what makes this whole setup possible.
## The Stack
| Component | What I used | Why |
|---|---|---|
| Agent framework | OpenClaw v2026.4.11 | Open source, large ecosystem |
| LLM | MiniMax M2.7 via NVIDIA | Free GPU-accelerated endpoint |
| Messaging | Telegram | Fast to set up, works well with OpenClaw |
| Server | Local machine (Pop OS) | Oracle free tier was full — more on that below |
Total cost: $0.
## What You Need Before Starting
- A machine with at least 4GB RAM
- Node.js v20 or higher
- A free NVIDIA account at build.nvidia.com
- A Telegram account
## Step 1: Get Your Free NVIDIA API Key
Sign up at build.nvidia.com. Find MiniMax M2.7 and grab your API key — it starts with `nvapi-`.
The model I used is a 230 billion parameter mixture-of-experts model. Only 10B parameters activate per token, so it's faster than the size suggests. It handles coding, reasoning, and general questions well enough for personal use.
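Before wiring anything into OpenClaw, you can hit the endpoint directly. This is a sketch assuming the standard OpenAI-style /chat/completions route; export your real key as NVAPI_KEY first (the variable name is my choice, not required by anything):

```shell
# Direct smoke test of the endpoint. Assumes the standard OpenAI-style
# /chat/completions route; NVAPI_KEY holds your real nvapi- key.
BASE_URL="https://integrate.api.nvidia.com/v1"
MODEL="minimaxai/minimax-m2.7"

BODY=$(cat <<EOF
{
  "model": "$MODEL",
  "messages": [{"role": "user", "content": "Reply with one short sentence."}],
  "max_tokens": 64
}
EOF
)

# With a valid key you get back JSON whose .choices[0].message.content is
# the reply. "|| true" keeps the sketch from aborting when run offline.
curl -s "$BASE_URL/chat/completions" \
  -H "Authorization: Bearer $NVAPI_KEY" \
  -H "Content-Type: application/json" \
  -d "$BODY" || true
```

If this returns a sensible completion, everything that follows is just plumbing.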
## Step 2: Install OpenClaw
```shell
mkdir -p ~/Projects/openclaw && cd ~/Projects/openclaw
npm install -g openclaw
openclaw --version
# OpenClaw 2026.4.11 (769908e)
```
You'll see deprecation warnings from some old npm packages. They're from OpenClaw's dependencies, not your machine. Ignore them.
## Step 3: Initialize the Workspace
```shell
openclaw setup
```
This creates `~/.openclaw/openclaw.json` and sets up the workspace directory. That's it.
## Step 4: Connect NVIDIA as Your LLM
Most guides assume you have an Anthropic or OpenAI key. We're not doing that.
NVIDIA's endpoint is OpenAI-compatible, so we register it as a custom provider:
```shell
openclaw onboard \
  --auth-choice custom-api-key \
  --custom-base-url "https://integrate.api.nvidia.com/v1" \
  --custom-model-id "minimaxai/minimax-m2.7" \
  --custom-api-key "YOUR_NVAPI_KEY_HERE" \
  --custom-compatibility openai \
  --non-interactive \
  --accept-risk \
  --skip-channels \
  --skip-skills
```
The `--non-interactive` flag skips the guided wizard, which tries to verify the endpoint and sometimes fails with custom providers. The `--accept-risk` flag is required in non-interactive mode — read the security docs if you want to know what that actually covers.
## Step 5: Fix the Context Window Bug
This is the step that cost me an hour. Read it.
OpenClaw queries NVIDIA to detect the model's context window. It gets back 16000 tokens. It then reserves 16384 tokens for compaction overhead — leaving negative budget for your actual message. Every message fails with a context overflow error.
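The failing arithmetic, spelled out with the numbers above:

```shell
# Why every message overflows: the reserve exceeds the detected window.
DETECTED_WINDOW=16000     # context size NVIDIA's API reports
COMPACTION_RESERVE=16384  # tokens OpenClaw holds back for compaction
BUDGET=$((DETECTED_WINDOW - COMPACTION_RESERVE))
echo "tokens left for your message: $BUDGET"  # -384, so nothing fits
```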
Fix it by editing the config manually:
```shell
nano ~/.openclaw/openclaw.json
```
Find the model entry under `models.providers` and change it to:
```json
{
  "id": "minimaxai/minimax-m2.7",
  "contextWindow": 40000,
  "contextTokens": 40000,
  "maxTokens": 4096
}
```
Save with Ctrl+O, exit with Ctrl+X.
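If you'd rather not hand-edit JSON, jq can make the same change. The sample file below is a stand-in for the real ~/.openclaw/openclaw.json, and the exact `models.providers` layout (an array of model objects) is an assumption based on the entry shown above — adjust the path if your config differs:

```shell
# Same edit without nano, via jq, against a stand-in config file.
CONFIG=$(mktemp)
cat > "$CONFIG" <<'EOF'
{"models": {"providers": [{"id": "minimaxai/minimax-m2.7", "contextWindow": 16000}]}}
EOF

# Merge the corrected limits into the matching model entry.
jq '(.models.providers[] | select(.id == "minimaxai/minimax-m2.7")) += {
      "contextWindow": 40000, "contextTokens": 40000, "maxTokens": 4096
    }' "$CONFIG" > "$CONFIG.patched"

jq '.models.providers[0].contextWindow' "$CONFIG.patched"  # 40000
```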
Why 40000? MiniMax M2.7 supports 200K context in theory, but NVIDIA's free endpoint has a lower effective limit. 40000 works. You can push it higher and see what happens.
This is a known issue with custom providers in OpenClaw. The context window from the API doesn't reach the compaction path correctly.
## Step 6: Create a Telegram Bot
- Open Telegram and search for @botfather
- Send /newbot and follow the prompts
- Copy the token it gives you
Connect it:
```shell
openclaw channels add --channel telegram --token "YOUR_BOT_TOKEN_HERE"
```
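If you want to confirm the token itself is valid before starting the gateway, Telegram's Bot API exposes getMe. The token below is a placeholder, not a working one:

```shell
# Sanity-check a bot token against Telegram's Bot API. A valid token
# returns {"ok":true,"result":{...}} with your bot's username.
BOT_TOKEN="123456:ABC-DEF_placeholder"
URL="https://api.telegram.org/bot${BOT_TOKEN}/getMe"
# "|| true" keeps the sketch usable offline or with the placeholder token.
curl -s "$URL" || true
```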
## Step 7: Start the Gateway
```shell
openclaw gateway run
```
You should see:
```
[gateway] agent model: custom-integrate-api-nvidia-com/minimaxai/minimax-m2.7
[gateway] ready (5 plugins loaded; 2.3s)
[telegram] connected
```
Open the dashboard in a new terminal tab:
```shell
openclaw dashboard
```
This opens http://127.0.0.1:18789/chat with your auth token already in the URL.
## Step 8: Test It
Send "what can you do" in the dashboard. You should get a full response from MiniMax M2.7.
Then message your Telegram bot the same thing. Same response, different channel.
## One Thing Worth Knowing
OpenClaw's system prompt is heavy. On a simple "what can you do" message, it used 34% of my context window before I typed a word. That's roughly 13,600 tokens just for framework overhead — tool definitions, skill metadata, workspace files, agent instructions.
For complex automated tasks, that overhead makes sense. For quick questions, it's wasteful. With a 40000 token limit, you have around 26,000 tokens left per session for actual conversation. Long back-and-forth chats will hit compaction eventually.
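The overhead numbers above, as arithmetic:

```shell
# 34% of a 40000-token window goes to framework overhead before you type.
CONTEXT=40000
OVERHEAD=$((CONTEXT * 34 / 100))   # 13600 tokens of system prompt, tools, etc.
REMAINING=$((CONTEXT - OVERHEAD))
echo "left for conversation: $REMAINING"  # 26400
```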
This isn't a dealbreaker. It's just the tradeoff of a framework built for agentic work.
## Where This Goes Next
The agent works. Now I'm building skills on top of it:
- **NEPSE stock fetcher** — send /nepse NRIC to my Telegram bot and get back the live price, 52-week range, and circuit status. Useful since I trade on the Nepal Stock Exchange.
- **Job application tracker** — log applications via Telegram, check status anytime. Useful since I'm actively job searching.
## Summary
```
Local machine (Pop OS, 11GB RAM)
└── OpenClaw v2026.4.11
    ├── LLM: MiniMax M2.7 via NVIDIA free endpoint
    ├── Gateway: ws://127.0.0.1:18789
    └── Channels: Telegram bot
```
- Total cost: $0
- Setup time: ~3 hours (mostly the context window issue)
- Lines of config written by hand: 4
## Links
Part 2 covers building the NEPSE skill and publishing it to ClawHub.
