I needed my product (vaos.sh) to show up in every conversation about OpenClaw memory problems on X. Manually finding and replying to tweets was eating 2 hours a day. So I built a system that does it autonomously.
The Stack
- Bird CLI — free X search using browser cookies (no paid API)
- Codex CLI — GPT-5.4 via my ChatGPT Pro subscription (unlimited tokens, $0 extra)
- Chrome CDP — browser automation for posting (bypasses X API restrictions)
- Supabase — event bus for tracking what's been replied to
- Humanizer rules — anti-AI-slop prompt engineering
Total cost beyond my existing ChatGPT Pro subscription: $0.
How It Works
Every 30 minutes, the system:
- Picks a random search query from 15 OpenClaw-related topics
- Searches X via bird CLI (free, uses browser cookies)
- Filters out tweets I've already replied to
- Sorts by engagement (likes + replies)
- Sends the tweet to GPT-5.4 via Codex with humanizer rules
- GPT drafts a reply under 180 characters that sounds like a tired builder texting at 2am
- Posts via Chrome CDP with the account already logged in
- Logs to Supabase event bus
The Humanizer Prompt
The key insight: LLM-generated replies sound like LLM-generated replies. The humanizer rules fix this:
- No significance inflation or promotional language
- No em dashes (the #1 AI tell)
- No chatbot phrases ("Great question!", "I hope this helps")
- Vary sentence length. Short punchy. Then longer.
- Have opinions. React, don't report.
- Sound like a person texting, not a brand
Example output: "Yeah, stuffing everything into MEMORY.md is a dead end. Context bloats, the agent gets dumb, and you spend half your time re-explaining the repo."
That reads like a human. Because the prompt told the LLM to write like one.
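Mechanically, the humanizer is just a system prompt plus a length gate before posting. The exact wording below is a paraphrase of the rules above, not the real prompt:

```python
HUMANIZER_RULES = """\
You are replying on X as a tired builder texting at 2am.
- No promotional language or significance inflation.
- No em dashes. No chatbot phrases like "Great question!".
- Vary sentence length. Have opinions. React, don't report.
- Hard limit: 180 characters."""

def build_prompt(tweet_text: str) -> list[dict]:
    """Wrap the tweet in the humanizer rules as a chat-style message list."""
    return [
        {"role": "system", "content": HUMANIZER_RULES},
        {"role": "user", "content": f"Reply to this tweet:\n{tweet_text}"},
    ]

def too_long(reply: str, limit: int = 180) -> bool:
    """Reject any draft over the character budget before it reaches Chrome CDP."""
    return len(reply) > limit
```

Checking the limit in code instead of trusting the model matters: LLMs treat character limits as suggestions.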
Results
Day 1: The system found and replied to a tweet with 127 likes and 32 replies. The reply was contextually relevant, under 160 characters, and sounded natural.
The system runs via macOS launchd (like cron but persistent). It survives reboots. No server needed.
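For reference, a launchd job is just a small plist dropped into ~/Library/LaunchAgents. The label, script path, and interval below are placeholders, not the actual project layout:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>com.example.reply-bot</string>
  <key>ProgramArguments</key>
  <array>
    <string>/usr/bin/python3</string>
    <string>/Users/me/reply_bot/run_once.py</string>
  </array>
  <!-- Fire every 30 minutes; launchd reschedules across sleeps and reboots -->
  <key>StartInterval</key>
  <integer>1800</integer>
  <key>RunAtLoad</key>
  <true/>
</dict>
</plist>
```

Load it once with `launchctl load ~/Library/LaunchAgents/com.example.reply-bot.plist` and launchd keeps it running on schedule, including after a restart.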
What I'd Do Differently
- Add engagement tracking so the system learns which reply styles get likes
- Route replies through a Critic agent that rejects anything too promotional
- Add multi-platform support (LinkedIn, Reddit)
The code is part of the VAOS infrastructure at vaos.sh. The agent hosting platform gives your AI persistent memory and behavioral corrections — the same tech that powers this reply system.
*Follow the build in public journey: @StraughterG*

---

Telegram bots are the fastest way to get an AI agent into someone's hands. No app store approval. No web hosting. No frontend to build. The user opens Telegram, sends a message, gets a response.

The problem is that most Telegram bot frameworks give you a stateless loop. Message comes in, response goes out, everything is forgotten. Your bot treats every conversation like meeting a stranger for the first time.
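Here is what that stateless loop looks like against the raw Telegram Bot API (`getUpdates` long polling and `sendMessage` are the real endpoints; the handler logic is a deliberately dumb sketch):

```python
import json
import urllib.request

API = "https://api.telegram.org/bot{token}/{method}"  # official Bot API URL shape

def call(token: str, method: str, **params) -> dict:
    """Minimal Bot API call: POST JSON params, parse the JSON response."""
    req = urllib.request.Request(
        API.format(token=token, method=method),
        data=json.dumps(params).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def handle_update(update: dict) -> str:
    """Stateless handler: nothing from earlier messages survives this call."""
    text = update.get("message", {}).get("text", "")
    return f"You said: {text}"

def poll_forever(token: str) -> None:
    """Classic long-polling loop: every iteration starts from a blank slate."""
    offset = 0
    while True:
        updates = call(token, "getUpdates", offset=offset, timeout=30)
        for u in updates.get("result", []):
            offset = u["update_id"] + 1
            call(token, "sendMessage",
                 chat_id=u["message"]["chat"]["id"],
                 text=handle_update(u))
```

Everything you'd want the bot to remember has to be bolted on around `handle_update`, and that's exactly the part most frameworks leave to you.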
What you need
- A Telegram bot token (from @BotFather)
- A model API key (OpenAI, Anthropic, Google — pick one)
- Somewhere to run it 24/7
The "somewhere to run it" part is where most people get stuck. You can use a VPS, but then you're managing uptime, SSL, process managers, and deployments. Docker helps but adds its own complexity. Kubernetes is overkill for a single bot.
The 60-second version
VAOS handles the infrastructure:
- Sign up at vaos.sh
- Paste your Telegram bot token
- Pick a model
- Your bot is live
No Docker. No VPS. No process manager. The bot runs on Fly.io infrastructure with automatic restarts, health checks, and monitoring.
What makes this different from a basic bot
Three things your Telegram bot gets automatically:
Memory. After each conversation, VAOS extracts facts and stores them. Next time the user messages, the bot remembers their name, their project, their preferences. This happens without you writing any code.
Self-correction. When the bot says something wrong, you click "correct this" in the dashboard and write what it should have said. That correction becomes a rule injected at boot. The bot won't make that specific mistake again.
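The "correction becomes a rule injected at boot" pattern is worth seeing concretely. VAOS's internal storage and prompt format aren't public, so everything below (the JSONL file, the field name, the prompt layout) is a hypothetical sketch of the technique, not their implementation:

```python
import json

BASE_PROMPT = "You are a helpful Telegram bot."

def load_corrections(path: str = "corrections.jsonl") -> list[str]:
    """Assume each dashboard correction is persisted as one rule per line."""
    try:
        with open(path) as f:
            return [json.loads(line)["rule"] for line in f]
    except FileNotFoundError:
        return []

def boot_prompt(corrections: list[str]) -> str:
    """Prepend every accumulated correction to the system prompt at startup."""
    if not corrections:
        return BASE_PROMPT
    rules = "\n".join(f"- {c}" for c in corrections)
    return f"{BASE_PROMPT}\nCorrections you must follow:\n{rules}"
```

The bot never retrains; it just boots with a longer system prompt, which is why each correction takes effect on the next restart rather than gradually.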
Observability. Every message, every response, every trace is logged. You can see exactly what your bot said, why it said it, and how confident it was. PostHog analytics, Sentry error tracking, and Opik traces come built-in.
The catch
Cold start. For the first few days, the bot doesn't have enough data to be smart. It needs conversations to extract memories from and mistakes to learn corrections from. Plan to actively use it (or have a few testers use it) for the first week.
Also: right now VAOS only supports Telegram. Discord and WhatsApp are coming but aren't hooked up yet.
When to build it yourself
If you need full control over the bot's behavior, custom integrations, or you're running at scale (thousands of concurrent users), build it yourself. The OpenClaw framework is open source and gives you everything you need.
If you want the bot running in under a minute with memory and self-correction built in, use VAOS. 14-day free trial at vaos.sh.
The goal is the same either way: a Telegram bot that gets smarter over time instead of staying exactly as dumb as it was on day one.