Paddy Pits

How I Built a Self-Managing Telegram Bot Army Using Only Free AI APIs

Last year I was paying for ChatGPT Plus, a Claude subscription, and two SaaS tools just to keep my content and client work moving. Every month: £60+ gone before I'd made a penny. As a UK solopreneur with a growing TikTok presence but no VC backing, that adds up fast.

So I did what any slightly obsessive builder would do — I went down a rabbit hole for three weeks and came out the other side with something I didn't expect: a fully functional, self-managing Telegram bot army running entirely on free AI APIs.

This is how I built it, what it actually does, and how you can replicate the core of it yourself.


The Problem: API Costs Kill Momentum

The dirty secret of "AI automation" content online is that most of it assumes you're happy to spend $50–$200/month on API credits before you've validated anything. For someone building side projects or running a lean business, that's a non-starter.

I needed a setup that could:

  • Handle customer queries without me
  • Generate content drafts on demand
  • Capture and qualify leads
  • Run 24/7 without babysitting

And I needed it to cost as close to £0/month as possible.


The Solution: Stack the Free Tiers

Here's what I discovered: the free tiers available right now are genuinely powerful, and almost nobody talks about stacking them together intelligently.

Google AI Studio (Gemini free tier)
Gemini 1.5 Flash gives you 15 requests per minute, 1 million tokens per minute, and 1,500 requests per day on the free tier. For a Telegram bot handling conversational queries, that's enormous headroom for a small business. This became the backbone of my main assistant bot, NemoClaw.

Groq
Groq's free tier gives access to Llama 3.3 70B — a genuinely capable open-source model — with generous rate limits. The inference speed is remarkable. I use this for anything needing fast, factual responses where latency matters.

OpenRouter Free Tier
OpenRouter aggregates multiple model providers and surfaces several free models (look for the :free suffix in model IDs). It's useful as a fallback layer and for experimenting with different model personalities across bots.

Stack these three and you have a robust, multi-model AI backend with meaningful redundancy — for free.
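
What does "stacking" look like in practice? Both Groq and OpenRouter expose OpenAI-compatible chat completion endpoints, so a fallback chain can be one small helper that tries each provider in turn. This is a minimal sketch, not my production code — the model IDs and the `first_success` helper are illustrative, and you'd substitute your own keys:

```python
import json
import urllib.request

# Ordered by preference: try Groq first (fastest), fall back to OpenRouter.
# Base URLs are the providers' OpenAI-compatible endpoints; model IDs are examples.
PROVIDERS = [
    {"base_url": "https://api.groq.com/openai/v1",
     "api_key": "YOUR_GROQ_API_KEY",
     "model": "llama-3.3-70b-versatile"},
    {"base_url": "https://openrouter.ai/api/v1",
     "api_key": "YOUR_OPENROUTER_API_KEY",
     "model": "meta-llama/llama-3.3-70b-instruct:free"},
]

def chat_once(provider: dict, prompt: str) -> str:
    """One OpenAI-style chat completion request against a single provider."""
    req = urllib.request.Request(
        provider["base_url"] + "/chat/completions",
        data=json.dumps({
            "model": provider["model"],
            "messages": [{"role": "user", "content": prompt}],
        }).encode(),
        headers={
            "Authorization": "Bearer " + provider["api_key"],
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

def first_success(callables):
    """Return the result of the first callable that doesn't raise.
    A rate-limit error on one provider just moves us down the chain."""
    last_error = None
    for fn in callables:
        try:
            return fn()
        except Exception as exc:
            last_error = exc
    raise RuntimeError("all providers failed") from last_error

def ask(prompt: str) -> str:
    return first_success([lambda p=p: chat_once(p, prompt) for p in PROVIDERS])
```

The failover logic is deliberately dumb: first provider that answers wins. That's enough redundancy for a free-tier stack without adding a queue or retry framework.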


The Bots I Built

Over the course of a few months, I built out what I now call the AiFusionX Bot Army:

  • NemoClaw — AI assistant bot, the flagship. Handles FAQs, product questions, general chat. Powered by Gemini.
  • SalesBot — Qualification and conversion. Walks prospects through questions and delivers tailored offers.
  • ContentBot — Generates captions, hooks, and post drafts on command.
  • LeadGenBot — Captures leads from Telegram groups and channels into a structured list.
  • Nanobot — Lightweight utility bot for quick tasks (summarise, rewrite, translate).
  • Hermes — Message routing and broadcast bot for announcements to subscribers.

Each bot has a single job. That focus is the key — trying to build one bot that does everything creates a mess.
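
Keeping each bot to one job mostly comes down to giving each process its own token and a tightly scoped system prompt. Here's a sketch of how that separation can look — the config names, env vars, and prompts are illustrative, not my actual setup:

```python
# One entry per bot: its own Telegram token and a narrowly scoped persona.
# Names mirror the bots above; env-var names and prompt text are examples.
BOT_CONFIGS = {
    "nemoclaw": {
        "token_env": "NEMOCLAW_TOKEN",
        "persona": "You are NemoClaw, a friendly assistant. Answer FAQs and product questions only.",
    },
    "contentbot": {
        "token_env": "CONTENTBOT_TOKEN",
        "persona": "You draft short-form captions and hooks. Refuse anything outside content drafting.",
    },
}

def build_system_prompt(name: str) -> str:
    """Persona plus a hard scope reminder, so one bot never drifts into another's job."""
    cfg = BOT_CONFIGS[name]
    return cfg["persona"] + " If a request is outside your job, point the user to the right bot."
```

Each bot runs as its own process from its own config entry, so a bug or rate-limit problem in one never takes down the others.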


How the Telegram + Gemini Integration Works

Here's a stripped-down version of how NemoClaw connects Telegram to the Gemini API:

import google.generativeai as genai
from telegram import Update
from telegram.ext import ApplicationBuilder, MessageHandler, filters, ContextTypes

# Configure Gemini (free API key from Google AI Studio)
genai.configure(api_key="YOUR_GEMINI_API_KEY")
model = genai.GenerativeModel("gemini-1.5-flash")

conversation_history = {}

async def handle_message(update: Update, context: ContextTypes.DEFAULT_TYPE):
    user_id = update.effective_user.id
    user_message = update.message.text

    history = conversation_history.setdefault(user_id, [])
    history.append({"role": "user", "parts": [user_message]})

    # Cap stored history at the last 20 messages so long-running chats
    # don't burn through the free tier's token quota
    del history[:-20]

    # Seed the chat with everything except the message we're about to send
    chat = model.start_chat(history=history[:-1])
    response = chat.send_message(
        user_message,
        generation_config={"temperature": 0.7, "max_output_tokens": 500}
    )

    bot_reply = response.text
    history.append({"role": "model", "parts": [bot_reply]})

    await update.message.reply_text(bot_reply)

app = ApplicationBuilder().token("YOUR_TELEGRAM_BOT_TOKEN").build()
app.add_handler(MessageHandler(filters.TEXT & ~filters.COMMAND, handle_message))
app.run_polling()

The Telegram bot token comes from @BotFather, the Gemini key from Google AI Studio — both free, both set up in under 10 minutes.

From this base you can layer in:

  • A system prompt that gives the bot a persona and specific knowledge
  • Rate limiting per user to manage free tier quotas
  • A Groq fallback if Gemini hits rate limits
  • Webhook deployment on a free Railway or Render instance
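
Per-user rate limiting is the layer I'd add first, because one chatty user can eat your whole Gemini quota. A sliding-window check in front of the API call is enough — this is a minimal sketch (class and parameter names are mine, not from my production bots):

```python
import time
from collections import defaultdict, deque

class UserRateLimiter:
    """Sliding-window limiter: allow at most max_calls per user per window seconds."""

    def __init__(self, max_calls: int = 5, window: float = 60.0):
        self.max_calls = max_calls
        self.window = window
        self.calls = defaultdict(deque)  # user_id -> timestamps of recent calls

    def allow(self, user_id, now=None) -> bool:
        now = time.monotonic() if now is None else now
        q = self.calls[user_id]
        # Drop timestamps that have aged out of the window
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_calls:
            return False
        q.append(now)
        return True
```

In the handler, a `if not limiter.allow(user_id): reply and return` guard before the Gemini call keeps you comfortably inside the 15 requests/minute free tier even with several active users.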

Results

  • Reduced my API spend to £0/month for the core stack
  • NemoClaw handles the majority of first-contact questions from people finding me through TikTok — without me touching my phone
  • ContentBot drafts save me 2–3 hours a week on captions and hooks
  • I went from reactive (responding to every DM manually) to running a system

When 500k+ views hit a video and people pile into my Telegram, the bots handle the initial wave. That was simply not possible before without spending serious money.


Try It Yourself

If you want to test what a properly configured AI Telegram bot feels like before building your own, try NemoClaw free: t.me/NemoClaw77bot

Want a done-for-you bot army for your business? Everything is packaged at aifusionx-bot-army.netlify.app — starter packages from £297. Use code LAUNCH24 for 25% off.

Free community (builds, prompts, automation breakdowns): skool.com/ai-automation-hub-8766/about

The free tiers are genuinely good right now. Build while they last.

— Paddy
