
Anatoly (Vensus)

Build AI-Powered Telegram bots without code: Introducing Coreness

Hey DEV community! 👋

This is my first post here, so a quick intro: I'm a developer who got tired of rebuilding the same Telegram bot infrastructure again and again. Each new bot meant redoing webhooks, storage, admin tooling, and deployment — or paying for SaaS products with limited flexibility.

I wanted a self-hosted solution where bot behavior is easy to change without rewriting code every time. So I built Coreness — an event-driven platform for deploying AI-powered Telegram bots using declarative YAML scenarios. And now I'm making it open source.

GitHub: https://github.com/Vensus137/Coreness
Documentation: docs.coreness.tech

Note on language: The project supports English (docs, code, tooling). You may still run into occasional inaccuracies or mixed-language bits.

The Problem

If you've ever built Telegram bots (especially more than one), you probably recognize these pain points:

  • Rebuilding the same basics every time: webhook handling, storage, user state, admin utilities
  • SaaS limitations: you get convenience, but you're locked into someone else's feature set and pricing
  • Scaling overhead: multiple bots often become multiple deployments, multiple databases, multiple headaches
  • AI integration friction: wiring LLMs, context management, and RAG tends to become a project of its own

Coreness is the infrastructure layer I wanted: one platform instance that can run multiple isolated bots, while still being configurable enough for real-world scenarios.

Introducing Coreness

Coreness is a multi-tenant platform where you describe bot behavior in YAML, and the platform handles execution, storage, and integrations. A single server instance can run multiple isolated tenants (bots), each with its own configuration and data.

What you get:

  • 🎯 YAML-based scenarios — no code, just configuration
  • 🏢 Built-in multi-tenancy — complete data isolation via PostgreSQL Row-Level Security
  • 🤖 AI integration — OpenAI, Anthropic, Google, DeepSeek support via aggregators
  • 📚 RAG out of the box — vector search with pgvector
  • ⏰ Scheduled scenarios — cron-style automation
  • 🔌 Plugin architecture — extend features cleanly
  • 💳 Payment handling — Telegram Stars and other providers

How It Works

Instead of writing code, you describe bot behavior declaratively. Here's a simple bot that responds to /start:

start:
  trigger:
    - event_type: "message"
      event_text: "/start"

  step:
    - action: "send_message"
      params:
        text: |
          👋 Hello, {first_name}!

          Welcome to my bot!
        inline:
          - [{"📋 Menu": "menu"}, {"ℹ️ Help": "help"}]

menu:
  trigger:
    - event_type: "callback"
      callback_data: "menu"

  step:
    - action: "send_message"
      params:
        text: "Choose an action:"
        inline:
          - [{"🤖 About": "about"}]
          - [{"🔙 Back": "start"}]

What's happening here:

  • trigger defines when the scenario runs (command or button press)
  • step is a sequence of actions executed in order
  • {first_name} is a placeholder resolved from user/context data
  • inline defines Telegram inline buttons

The platform automatically handles webhook processing, database storage, and user context. You just describe what should happen.
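To make the placeholder mechanics concrete, here is a minimal sketch of how {first_name}-style substitution could work. This is an illustration only, not Coreness's actual resolver; the function name and regex are hypothetical:

```python
import re

def resolve_placeholders(template: str, context: dict) -> str:
    """Replace {name} placeholders with values from the event/user context.
    Unknown placeholders are left intact rather than raising an error."""
    def repl(match: re.Match) -> str:
        key = match.group(1)
        return str(context.get(key, match.group(0)))
    # Allows dotted names like {_cache.chunks} as well as plain {first_name}
    return re.sub(r"\{([A-Za-z_][\w.]*)\}", repl, template)

ctx = {"first_name": "Anatoly"}
print(resolve_placeholders("Hello, {first_name}!", ctx))   # Hello, Anatoly!
print(resolve_placeholders("Missing: {last_name}", ctx))   # Missing: {last_name}
```

The same lookup runs against user data, event fields, and cached action results, which is why one syntax covers all the examples in this post.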

RAG in Action

Want your bot to answer questions using a knowledge base? Here's what a basic RAG flow can look like:

ask_question:
  trigger:
    - event_type: "message"

  step:
    # Search for relevant context
    - action: "search_embedding"
      params:
        query_text: "{event_text}"
        document_type: "knowledge"
        limit_chunks: 3
        min_similarity: 0.7

    # Generate AI response with context
    - action: "completion"
      params:
        prompt: "{event_text}"
        system_prompt: "You are a helpful assistant. Answer based on provided context."
        rag_chunks: "{_cache.chunks}"
        model: "gpt-4o-mini"

    # Send response
    - action: "send_message"
      params:
        text: "{_cache.response_completion}"

In this flow, the system:

  • Retrieves relevant chunks from the vector store
  • Builds a context payload for the LLM
  • Sends a completion request to the selected model
  • Returns a contextual response to the user

The key idea: you compose RAG behavior by chaining actions, not by rewriting RAG plumbing for each project.
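For intuition, the min_similarity and limit_chunks parameters behave like the filter-and-rank step below. This is a toy sketch over plain Python lists; in the real platform, pgvector performs this inside PostgreSQL using an HNSW index:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def select_chunks(query_vec, chunks, limit_chunks=3, min_similarity=0.7):
    """chunks: list of (text, embedding) pairs.
    Keep only chunks above the similarity floor, best matches first."""
    scored = [(cosine_similarity(query_vec, vec), text) for text, vec in chunks]
    scored = [(s, t) for s, t in scored if s >= min_similarity]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [text for _, text in scored[:limit_chunks]]

docs = [
    ("Pricing page text", [0.9, 0.1]),
    ("Unrelated text", [0.0, 1.0]),
]
print(select_chunks([1.0, 0.0], docs))  # ['Pricing page text']
```

The selected texts are what ends up in the {_cache.chunks} placeholder that the completion action consumes.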

Multi-tenancy Magic

The platform provides automatic data isolation using PostgreSQL Row-Level Security. Each tenant gets its own sandbox — settings, knowledge bases, prompts — everything is isolated at the database level.

RLS automatically filters queries by tenant_id, so you never accidentally access another tenant's data. No need to add WHERE tenant_id = ... to every query.

Adding a new bot is simple:

  1. Create a folder like config/tenant/tenant_101/
  2. Add bots/telegram.yaml with bot token (the bots/ folder can hold configs for different bot types)
  3. Add your YAML scenarios
  4. Sync via GitHub or using the Master Bot (a built-in management interface, similar to @botfather, that lets you control tenants, sync configs, and manage the platform from Telegram)

Done. The platform picks it up automatically and starts processing events for that bot.

Getting Started (5-Minute Setup)

Here's how to get your first bot running:

Step 1: Deploy the Platform

Coreness includes the Core Manager utility for deployment and updates. It configures the environment, database, and containers.

# Clone repository
git clone https://github.com/Vensus137/Coreness.git
cd Coreness

# Run Core Manager
python tools/core_manager/core_manager.py

On first run the utility will ask for:

  • Environment (test / prod)
  • Deployment mode (docker / native) — native is often easier on Windows; docker is typical on Linux and servers
  • Interface language (English / Русский)

Settings are saved in config/.version. The menu then offers: system update from GitHub (with migrations and backup), database operations (migrations, backup, restore), utility self-update, and language switch.

Step 2: Create a Tenant

Create config/tenant/tenant_101/bots/telegram.yaml:

bot_token: "YOUR_BOT_TOKEN_FROM_BOTFATHER"
is_active: true

Step 3: Configure Scenario

Create config/tenant/tenant_101/scenarios/start.yaml:

start:
  trigger:
    - event_type: "message"
      event_text: "/start"

  step:
    - action: "send_message"
      params:
        text: |
          👋 Hello, {first_name}!

          This is a bot powered by Coreness.
        inline:
          - [{"📋 Menu": "menu"}, {"ℹ️ Help": "help"}]

Step 4: Sync

If using GitHub sync:

git add config/tenant/tenant_101/
git commit -m "Add tenant 101"
git push
# Webhook automatically syncs changes

Or manually via Master Bot:

  1. Open master_bot
  2. Send /tenant
  3. Enter tenant ID (101)
  4. Click "Sync"

That's it! Your bot is live and responding to commands.

Bonus: Adding Payments

Want to monetize your bot? Here's how to add Telegram Stars payments:

buy_premium:
  trigger:
    - event_type: "message"
      event_text: "/buy"

  step:
    - action: "create_invoice"
      params:
        title: "Premium Subscription"
        description: "Access to premium features for 1 month"
        amount: 100  # 100 stars
        currency: "XTR"

handle_pre_checkout:
  trigger:
    - event_type: "pre_checkout_query"

  step:
    - action: "confirm_payment"
      params:
        pre_checkout_query_id: "{pre_checkout_query_id}"
        invoice_payload: "{invoice_payload}"

handle_payment_successful:
  trigger:
    - event_type: "payment_successful"

  step:
    - action: "mark_invoice_as_paid"
      params:
        invoice_payload: "{invoice_payload}"
        telegram_payment_charge_id: "{telegram_payment_charge_id}"

    - action: "set_user_storage"
      params:
        key: "premium_active"
        value: true

    - action: "send_message"
      params:
        text: "✅ Payment successful! Premium activated."

The entire payment flow is declarative — no manual payment-handling code needed.

Under the Hood

Tech Stack

  • Python 3.11+ with direct Telegram Bot API integration (no aiogram — fewer dependencies, better performance)
  • PostgreSQL 16+ with the pgvector extension for RAG (or SQLite for the simplified version)
  • Docker + docker-compose for deployment
  • LLM aggregators for model access (OpenAI, Anthropic, Google, DeepSeek via OpenRouter, Azure OpenAI)

Why no aiogram? Talking to the Telegram Bot API directly via aiohttp uses fewer resources, runs faster, and pulls in fewer dependencies. In the end, all you need is HTTP requests and JSON parsing.
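As a rough illustration of how thin that layer can be, here is a stdlib-only sketch of a sendMessage call. Coreness itself uses aiohttp and async I/O; the function names here are hypothetical:

```python
import json
from urllib import request

API_BASE = "https://api.telegram.org"

def build_send_message(token: str, chat_id: int, text: str) -> tuple[str, bytes]:
    # Every Bot API method is a plain HTTPS endpoint:
    #   https://api.telegram.org/bot<token>/<method>
    url = f"{API_BASE}/bot{token}/sendMessage"
    body = json.dumps({"chat_id": chat_id, "text": text}).encode("utf-8")
    return url, body

def send_message(token: str, chat_id: int, text: str) -> dict:
    # Network call: requires a real bot token and chat id
    url, body = build_send_message(token, chat_id, text)
    req = request.Request(url, data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read())
```

No framework, no middleware: build a URL, POST JSON, parse the response.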

Architecture

Coreness is built on an event-driven architecture with clear layer separation:

Telegram → Event Processor → Scenario Engine → Step Executor → Services → Response

Each service is self-contained and communicates through events. No tangled dependency web, just clean vertical slices of functionality.
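The event-driven backbone can be sketched as a tiny async bus. This is a toy model, not the platform's real event processor; the class and method names are mine:

```python
import asyncio
from collections import defaultdict

class EventBus:
    """Handlers subscribe to event types; publishing runs them concurrently."""
    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, event_type: str, handler):
        self._handlers[event_type].append(handler)

    async def publish(self, event_type: str, payload: dict) -> list:
        # asyncio.gather lets independent services react to one event in parallel
        return await asyncio.gather(
            *(handler(payload) for handler in self._handlers[event_type])
        )

async def demo():
    bus = EventBus()

    async def scenario_engine(event: dict) -> str:
        return f"trigger matched for {event['text']}"

    bus.subscribe("message", scenario_engine)
    print(await bus.publish("message", {"text": "/start"}))

asyncio.run(demo())
```

Services only know about event types, not about each other, which is what keeps the dependency graph flat.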


Plugin System:

Every feature is a separate plugin in the plugins/ folder. Need integration with an external API? Just write a new plugin and drop it in.

plugins/
├── utilities/          # Helper utilities
│   ├── foundation/     # Core (logger, plugins_manager)
│   ├── telegram/       # Telegram utilities
│   └── core/           # Infrastructure (event_processor, database)
└── services/           # Business services
    ├── hub/
    │   ├── telegram/   # Bot management (Telegram)
    │   └── tenant_hub/ # Tenant management
    └── ai_service/     # AI and RAG

Plugins are isolated and communicate via events. Add a new plugin and the DI container automatically discovers and wires it; the service then registers its actions through action_hub.

Performance

  • Async processing via asyncio — all operations are non-blocking
  • Caching of data and settings — reduces DB load
  • Vector search optimization with HNSW indexes (pgvector) — fast search even on large datasets
  • Parallel processing — bots can handle multiple events simultaneously
  • Direct Telegram API — no middleware overhead

This matters at scale. One server can handle dozens of bots simultaneously without performance degradation.

Security

  • Row-Level Security for data isolation — impossible to accidentally access another tenant's data
  • Validation via Pydantic — all input parameters are checked against schemas
  • Secrets in environment variables — tokens and keys are not stored in code
  • Automated DB backups with a configurable interval
  • Flexible access control — configure read-only users with access to specific tenants
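The Pydantic point translates to something like this: each action declares a schema, and bad parameters are rejected before the action runs. A sketch only; the model name and fields are hypothetical, and the exact schemas in Coreness will differ:

```python
from typing import Optional
from pydantic import BaseModel, Field, ValidationError

class SendMessageParams(BaseModel):
    text: str = Field(min_length=1)  # empty messages are rejected
    chat_id: Optional[int] = None    # None means "reply in the current chat"

def validate_params(raw: dict) -> SendMessageParams:
    # Raises ValidationError with a field-level report on bad input
    return SendMessageParams(**raw)

validate_params({"text": "Hello"})   # passes validation
try:
    validate_params({"text": ""})    # too short, rejected before execution
except ValidationError:
    print("rejected invalid params")
```

Schema validation at the action boundary means a typo in a YAML scenario fails loudly at dispatch time instead of corrupting state mid-flow.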

What's Next

I'm making Coreness open source to grow it with the community. Here's what's on the roadmap:

  • Extended RAG capabilities — support for files (PDF, DOCX), improved document processing
  • More ready-to-use plugins — integrations with popular APIs
  • Simplified Master Bot — a better tenant-management interface
  • Telegram Mini App — additional management features inside Telegram

Try It Out

I built Coreness because I needed it for my own projects, and I'm betting others face the same problems. If you're tired of rebuilding bot infrastructure or paying for limited SaaS solutions, give it a try.

GitHub: https://github.com/Vensus137/Coreness
Documentation: docs.coreness.tech
Contact me: @vensus137

⭐ Star the repo if you find it useful

💬 Open issues or PRs — all feedback helps improve the project

📢 Share with others who might benefit

I'd love to hear your thoughts and contributions! 🚀


Coreness — Create. Automate. Scale.
