Apollo
The fastest way to build a Telegram Bot natively

Building Native Telegram Bots at Lightning Speed: A Technical Deep Dive

Telegram bots have become indispensable tools for automation, customer service, and interactive experiences. While many frameworks exist, building natively with Telegram's Bot API offers unparalleled performance and control. This guide will walk you through creating a high-performance Telegram bot from scratch.

Understanding Telegram's Bot Architecture

Telegram bots operate via HTTPS requests to Telegram's Bot API. The two primary methods for receiving updates are:

  1. Webhooks (Recommended for production)
  2. Long Polling (Good for development)
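For development, long polling is the quicker path: you repeatedly call getUpdates and advance an offset past the updates you have already handled. A minimal sketch, assuming Node 18+ (for global fetch) and BOT_TOKEN set in the environment:

```javascript
// Minimal long-polling loop using getUpdates (Node 18+ global fetch).
const API = `https://api.telegram.org/bot${process.env.BOT_TOKEN}`;

// Compute the offset for the next getUpdates call: one past the highest
// update_id seen, so Telegram won't resend those updates.
function nextOffset(updates, current = 0) {
  return updates.reduce((max, u) => Math.max(max, u.update_id + 1), current);
}

async function poll() {
  let offset = 0;
  while (true) {
    const res = await fetch(`${API}/getUpdates?offset=${offset}&timeout=30`);
    const { result: updates = [] } = await res.json();
    for (const update of updates) {
      console.log('received update', update.update_id);
      // handle update here
    }
    offset = nextOffset(updates, offset);
  }
}

// poll();  // uncomment to start polling during development
```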

For maximum performance, we'll focus on webhooks with these key components:

  • Fast HTTP server (We'll use Node.js with Express)
  • Efficient update processing
  • Proper error handling
  • Rate limiting considerations

Setting Up Your Development Environment

Before coding, ensure you have:

  1. Node.js v18+ installed
  2. ngrok for local tunneling (for webhook testing)
  3. A Telegram bot token from @BotFather
# Initialize project
mkdir telegram-bot && cd telegram-bot
npm init -y
npm install express axios body-parser

Core Bot Implementation

Create index.js with this optimized structure:

const express = require('express');
const axios = require('axios');
const bodyParser = require('body-parser');

const BOT_TOKEN = process.env.BOT_TOKEN || 'YOUR_BOT_TOKEN';
const PORT = process.env.PORT || 3000;
// A random path hides the webhook URL, but it changes on every restart;
// persist it (or derive it from an env secret) in production
const SECRET_PATH = `/webhook/${require('crypto').randomBytes(20).toString('hex')}`;

const app = express();
app.use(bodyParser.json());

// Telegram API client
const telegram = axios.create({
  baseURL: `https://api.telegram.org/bot${BOT_TOKEN}`,
  timeout: 5000,
});

// Webhook setup endpoint
app.get('/setup-webhook', async (req, res) => {
  try {
    const { data } = await telegram.post('/setWebhook', {
      url: `${req.query.base_url}${SECRET_PATH}`,
      secret_token: SECRET_PATH.split('/').pop(),
    });
    res.json(data);
  } catch (error) {
    res.status(500).json({ error: error.message });
  }
});

// Webhook handler
app.post(SECRET_PATH, async (req, res) => {
  // Immediate response to prevent retries
  res.status(200).end();

  const update = req.body;
  if (!update.message) return;

  // Process message in background
  processUpdate(update).catch(console.error);
});

async function processUpdate(update) {
  const { chat, text } = update.message;

  // Ignore non-text messages (photos, stickers, etc.)
  if (!text) return;

  // Implement your bot logic here
  if (text === '/start') {
    await sendMessage(chat.id, 'Welcome to the fastest Telegram bot!');
  } else {
    await sendMessage(chat.id, `You said: ${text}`);
  }
}

async function sendMessage(chatId, text) {
  try {
    await telegram.post('/sendMessage', {
      chat_id: chatId,
      text,
      // Note: if you enable parse_mode: 'MarkdownV2', reserved characters
      // (., !, -, etc.) in the text must be escaped or the API returns 400
    });
  } catch (error) {
    console.error('Message send error:', error.response?.data || error.message);
  }
}

app.listen(PORT, () => {
  console.log(`Server running on port ${PORT}`);
  console.log(`Webhook path: ${SECRET_PATH}`);
});
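A common gotcha when sending with parse_mode 'MarkdownV2': Telegram rejects any message containing unescaped reserved characters with a 400 error. A small helper that escapes them (the character list comes from the Bot API formatting rules):

```javascript
// Escape the characters Telegram's MarkdownV2 parser treats as special:
// _ * [ ] ( ) ~ ` > # + - = | { } . !
function escapeMarkdownV2(text) {
  return String(text).replace(/[_*[\]()~`>#+\-=|{}.!]/g, '\\$&');
}
```

Run user-supplied text through this before sending, e.g. `sendMessage(chat.id, escapeMarkdownV2(text))`, when MarkdownV2 is enabled.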

Performance Optimization Techniques

1. Efficient Update Processing

// Queue system for high throughput (p-queue v6 for CommonJS: npm install p-queue@6)
const { default: PQueue } = require('p-queue');
const updateQueue = new PQueue({ concurrency: 10 });

app.post(SECRET_PATH, (req, res) => {
  res.status(200).end();
  updateQueue.add(() => processUpdate(req.body));
});

2. Caching User Data

// Requires the node-cache package: npm install node-cache
const NodeCache = require('node-cache');
const userCache = new NodeCache({ stdTTL: 3600 }); // entries expire after 1 hour

async function processUpdate(update) {
  const { from, chat, text } = update.message;

  // Cache user data to reduce DB calls
  if (!userCache.get(from.id)) {
    userCache.set(from.id, {
      firstName: from.first_name,
      lastName: from.last_name,
      username: from.username,
    });
  }

  // Use cached data
  const user = userCache.get(from.id);
  // ... rest of processing
}

3. Bulk Operations

For handling multiple messages efficiently:

async function sendBulkMessages(chatIds, text) {
  const promises = chatIds.map(chatId => 
    telegram.post('/sendMessage', {
      chat_id: chatId,
      text,
    }).catch(e => null) // Silently fail individual messages
  );

  return Promise.all(promises);
}
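Telegram rate-limits broadcasts (roughly 30 messages per second across chats), so for larger recipient lists it is safer to send in chunks with a pause between batches. A sketch, reusing the `telegram` axios client from earlier (the chunk size and delay are illustrative, not official values):

```javascript
// Split an array into fixed-size chunks.
function chunk(items, size) {
  const out = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

const sleep = ms => new Promise(resolve => setTimeout(resolve, ms));

// Send to ~25 chats per second to stay under Telegram's broadcast limit.
async function sendBulkThrottled(chatIds, text) {
  for (const batch of chunk(chatIds, 25)) {
    await Promise.all(
      batch.map(chatId =>
        telegram.post('/sendMessage', { chat_id: chatId, text }).catch(() => null)
      )
    );
    await sleep(1000); // pause before the next batch
  }
}
```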

Advanced Features Implementation

1. Inline Keyboards

async function sendMenu(chatId) {
  await telegram.post('/sendMessage', {
    chat_id: chatId,
    text: 'Choose an option:',
    reply_markup: {
      inline_keyboard: [
        [
          { text: 'Option 1', callback_data: 'opt1' },
          { text: 'Option 2', callback_data: 'opt2' }
        ],
        [{ text: 'Cancel', callback_data: 'cancel' }]
      ]
    }
  });
}

// Extend the main webhook handler so it also handles callback queries
// (this replaces the earlier app.post(SECRET_PATH, ...) handler)
app.post(SECRET_PATH, async (req, res) => {
  res.status(200).end();

  if (req.body.callback_query) {
    const { id, data, message } = req.body.callback_query;
    await telegram.post('/answerCallbackQuery', { callback_query_id: id });
    await sendMessage(message.chat.id, `You selected: ${data}`);
  }
  // ... rest of handler
});

2. File Handling

// Requires the form-data package: npm install form-data
const FormData = require('form-data');

async function sendDocument(chatId, fileBuffer, filename) {
  const formData = new FormData();
  formData.append('chat_id', chatId);
  formData.append('document', fileBuffer, { filename });

  await telegram.post('/sendDocument', formData, {
    headers: formData.getHeaders()
  });
}

Deployment Best Practices

  1. Use a reverse proxy (Nginx) for SSL termination
  2. Implement proper logging:
// Requires the winston package: npm install winston
const { createLogger, transports } = require('winston');
const logger = createLogger({
  transports: [
    new transports.File({ filename: 'bot.log' }),
    new transports.Console()
  ]
});

// Replace all console.log with logger.info
  3. Set up monitoring:
    • Track response times
    • Monitor error rates
    • Alert on downtime
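As a starting point for response-time tracking, handlers can be wrapped with a small timing helper that records durations into an in-memory metrics object (a sketch; in production you would export these numbers to Prometheus, Datadog, or similar):

```javascript
// In-memory metrics store: call counts, sync error count, total duration.
const metrics = { count: 0, errors: 0, totalMs: 0 };

// Wrap a handler so each invocation is timed and recorded.
function withTiming(fn) {
  return (...args) => {
    const start = Date.now();
    try {
      const result = fn(...args);
      // If the handler is async, record when the promise settles.
      if (result && typeof result.then === 'function') {
        return result.finally(() => {
          metrics.count += 1;
          metrics.totalMs += Date.now() - start;
        });
      }
      metrics.count += 1;
      metrics.totalMs += Date.now() - start;
      return result;
    } catch (err) {
      metrics.errors += 1;
      throw err;
    }
  };
}

// Usage: const timedProcessUpdate = withTiming(processUpdate);
```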

Webhook Security

// Add webhook secret validation
app.post(SECRET_PATH, (req, res) => {
  const secret = req.headers['x-telegram-bot-api-secret-token'];
  if (secret !== SECRET_PATH.split('/').pop()) {
    return res.status(403).end();
  }
  // ... rest of handler
});

Benchmarking Your Bot

To test your bot's performance:

// Requires the autocannon package: npm install autocannon
const autocannon = require('autocannon');

async function runBenchmark() {
  const result = await autocannon({
    url: `http://localhost:${PORT}${SECRET_PATH}`, // must match your actual secret webhook path
    method: 'POST',
    headers: {
      'content-type': 'application/json',
      'x-telegram-bot-api-secret-token': SECRET_PATH.split('/').pop()
    },
    body: JSON.stringify({
      update_id: 1,
      message: {
        message_id: 1,
        from: { id: 1, first_name: 'Test' },
        chat: { id: 1 },
        text: '/start'
      }
    }),
    connections: 10,
    duration: 10
  });
  console.log(result);
}

runBenchmark();

Going Further: Advanced Patterns

  1. Implement middleware:
const middlewares = [
  authMiddleware,
  rateLimitMiddleware,
  loggingMiddleware
];

app.post(SECRET_PATH, async (req, res) => {
  try {
    for (const middleware of middlewares) {
      await middleware(req);
    }
    // ... main handler
  } catch (error) {
    // Handle middleware errors
  }
});
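One of the middlewares named above, rateLimitMiddleware, could be implemented as a simple fixed-window counter per chat. This is an in-memory sketch; for multi-instance deployments you would back it with Redis or similar:

```javascript
// Allow at most LIMIT updates per chat per fixed window.
const WINDOW_MS = 1000;
const LIMIT = 1; // Telegram itself allows roughly 1 message/sec per chat
const windows = new Map();

function allowUpdate(chatId, now = Date.now()) {
  const entry = windows.get(chatId);
  if (!entry || now - entry.start >= WINDOW_MS) {
    windows.set(chatId, { start: now, count: 1 });
    return true;
  }
  entry.count += 1;
  return entry.count <= LIMIT;
}

// Middleware shape matching the chain above: throws to stop processing.
async function rateLimitMiddleware(req) {
  const chatId = req.body?.message?.chat?.id;
  if (chatId !== undefined && !allowUpdate(chatId)) {
    throw new Error('rate limited');
  }
}
```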
  2. Database integration:
// Requires the mongodb package: npm install mongodb
const { MongoClient } = require('mongodb');
const client = new MongoClient(process.env.MONGO_URI);

// Connect once at startup and reuse the client; it maintains a connection pool
async function connectDB() {
  await client.connect();
  return client.db('telegram_bot');
}

// In your handler:
const db = await connectDB();
await db.collection('messages').insertOne(update.message);
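Rather than calling connectDB inside every handler, a common pattern is to memoize the connection promise so the resource is created exactly once and concurrent callers share the same pending promise. A generic sketch (recent MongoDB drivers make repeated connect calls idempotent anyway, but the pattern applies to any async resource):

```javascript
// Memoize an async factory so it runs at most once; all callers
// share the same promise.
function once(factory) {
  let promise = null;
  return () => {
    if (promise === null) promise = Promise.resolve(factory());
    return promise;
  };
}

// Usage with the connectDB above:
// const getDb = once(connectDB);
// const db = await getDb(); // connects on first call, reuses afterwards
```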

Conclusion

This native implementation provides a solid foundation for building high-performance Telegram bots. Key takeaways:

  1. Webhooks offer better performance than long polling
  2. Queue systems prevent overload during traffic spikes
  3. Caching reduces redundant operations
  4. Proper error handling ensures reliability

For production deployments, consider:

  • Containerization with Docker
  • Horizontal scaling
  • Continuous monitoring

The complete code is available on GitHub. Happy bot building!


Stop Reinventing The Wheel

If you want to skip the boilerplate and launch your app today, check out my Ultimate AI Micro-SaaS Boilerplate ($49). It includes full Stripe integration, Next.js, and an external API suite.

Or, let my AI tear down your existing funnels at Apollo Roaster.
