Hey DEV community!
This is my first post here, so a quick intro: I'm a developer who got tired of rebuilding the same Telegram bot infrastructure again and again. Each new bot meant redoing webhooks, storage, admin tooling, and deployment, or paying for SaaS products with limited flexibility.
I wanted a self-hosted solution where bot behavior is easy to change without rewriting code every time. So I built Coreness, an event-driven platform for deploying AI-powered Telegram bots using declarative YAML scenarios. And now I'm making it open source.
GitHub: https://github.com/Vensus137/Coreness
Documentation: docs.coreness.tech
Note on language: The project supports English (docs, code, tooling). You may still run into occasional inaccuracies or mixed-language bits.
The Problem
If you've ever built Telegram bots (especially more than one), you probably recognize these pain points:
- Rebuilding the same basics every time: webhook handling, storage, user state, admin utilities
- SaaS limitations: you get convenience, but you're locked into someone else's feature set and pricing
- Scaling overhead: multiple bots often become multiple deployments, multiple databases, multiple headaches
- AI integration friction: wiring LLMs, context management, and RAG tends to become a project of its own
Coreness is the infrastructure layer I wanted: one platform instance that can run multiple isolated bots, while still being configurable enough for real-world scenarios.
Introducing Coreness
Coreness is a multi-tenant platform where you describe bot behavior in YAML, and the platform handles execution, storage, and integrations. A single server instance can run multiple isolated tenants (bots), each with its own configuration and data.
What you get:
- YAML-based scenarios: no code, just configuration
- Built-in multi-tenancy: complete data isolation via PostgreSQL Row-Level Security
- AI integration: OpenAI, Anthropic, Google, and DeepSeek support via aggregators
- RAG out of the box: vector search with pgvector
- Scheduled scenarios: cron-style automation
- Plugin architecture: extend features cleanly
- Payment handling: Telegram Stars and other providers
How It Works
Instead of writing code, you describe bot behavior declaratively. Here's a simple bot that responds to /start:
```yaml
start:
  trigger:
    - event_type: "message"
      event_text: "/start"
  step:
    - action: "send_message"
      params:
        text: |
          Hello, {first_name}!
          Welcome to my bot!
        inline:
          - [{"Menu": "menu"}, {"Help": "help"}]

menu:
  trigger:
    - event_type: "callback"
      callback_data: "menu"
  step:
    - action: "send_message"
      params:
        text: "Choose an action:"
        inline:
          - [{"About": "about"}]
          - [{"Back": "start"}]
```
What's happening here:
- `trigger` defines when the scenario runs (command or button press)
- `step` is a sequence of actions executed in order
- `{first_name}` is a placeholder resolved from user/context data
- `inline` defines Telegram inline buttons
The platform automatically handles webhook processing, database storage, and user context. You just describe what should happen.
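Under the hood, this kind of placeholder resolution can be as simple as a regex substitution over a context dictionary. Here's a minimal sketch of the idea; `render_template` and its behavior are illustrative, not the actual Coreness resolver:

```python
import re


def render_template(text: str, context: dict) -> str:
    """Replace {placeholder} tokens with values from the context dict.

    Unknown placeholders are left untouched. Illustrative sketch only,
    not the actual Coreness implementation.
    """
    def substitute(match: re.Match) -> str:
        key = match.group(1)
        return str(context.get(key, match.group(0)))

    # Match simple and dotted keys like {first_name} or {_cache.chunks}
    return re.sub(r"\{([\w.]+)\}", substitute, text)


print(render_template("Hello, {first_name}!", {"first_name": "Alice"}))
# -> Hello, Alice!
```

The point is that scenario authors only write `{first_name}`; everything else is resolved by the platform from the incoming event and stored user data.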
RAG in Action
Want your bot to answer questions using a knowledge base? Here's what a basic RAG flow can look like:
```yaml
ask_question:
  trigger:
    - event_type: "message"
  step:
    # Search for relevant context
    - action: "search_embedding"
      params:
        query_text: "{event_text}"
        document_type: "knowledge"
        limit_chunks: 3
        min_similarity: 0.7
    # Generate AI response with context
    - action: "completion"
      params:
        prompt: "{event_text}"
        system_prompt: "You are a helpful assistant. Answer based on provided context."
        rag_chunks: "{_cache.chunks}"
        model: "gpt-4o-mini"
    # Send response
    - action: "send_message"
      params:
        text: "{_cache.response_completion}"
```
In this flow, the system:
- Retrieves relevant chunks from the vector store
- Builds a context payload for the LLM
- Sends a completion request to the selected model
- Returns a contextual response to the user
The key idea: you compose RAG behavior by chaining actions, not by rewriting RAG plumbing for each project.
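To make the retrieval step concrete, here's a minimal sketch of what a `search_embedding`-style action does conceptually: score stored chunks by cosine similarity against the query embedding, filter by `min_similarity`, and keep the top `limit_chunks`. The in-memory list and function names are illustrative; the platform does this with pgvector on the database side:

```python
from math import sqrt


def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


def search_chunks(query_vec, chunks, limit_chunks=3, min_similarity=0.7):
    """Rank (text, embedding) chunks by similarity to the query embedding.

    Illustrative sketch of the retrieval step, not Coreness internals.
    """
    scored = [(cosine_similarity(query_vec, vec), text) for text, vec in chunks]
    scored = [(score, text) for score, text in scored if score >= min_similarity]
    scored.sort(reverse=True)  # highest similarity first
    return [text for _, text in scored[:limit_chunks]]
```

The filtered, ranked chunks then become the `rag_chunks` context for the completion step.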
Multi-tenancy Magic
The platform provides automatic data isolation using PostgreSQL Row-Level Security. Each tenant gets its own sandbox: settings, knowledge bases, and prompts are all isolated at the database level.
RLS automatically filters queries by tenant_id, so you never accidentally access another tenant's data. No need to add WHERE tenant_id = ... to every query.
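For readers who haven't used RLS, the underlying DDL looks roughly like this. The table name and the `app.tenant_id` connection setting below are illustrative assumptions, not Coreness internals; the sketch just generates example policy SQL:

```python
def rls_policy_sql(table: str) -> str:
    """Generate example Row-Level Security DDL for a tenant-scoped table.

    Assumes the application sets `app.tenant_id` per connection
    (e.g. `SET app.tenant_id = '101'`). Names are illustrative.
    """
    return (
        f"ALTER TABLE {table} ENABLE ROW LEVEL SECURITY;\n"
        f"CREATE POLICY tenant_isolation ON {table}\n"
        f"    USING (tenant_id = current_setting('app.tenant_id')::bigint);"
    )


print(rls_policy_sql("user_storage"))
```

Once such a policy is in place, every query on the table is filtered by the session's tenant automatically, which is exactly why application code never needs its own tenant filter.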
Adding a new bot is simple:
- Create a folder like `config/tenant/tenant_101/`
- Add `bots/telegram.yaml` with the bot token (the `bots/` folder can hold configs for different bot types)
- Add your YAML scenarios
- Sync via GitHub or via the Master Bot (a built-in management interface, similar to @BotFather, that lets you control tenants, sync configs, and manage the platform from Telegram)
Done. The platform picks it up automatically and starts processing events for that bot.
Getting Started (5-Minute Setup)
Here's how to get your first bot running:
Step 1: Deploy the Platform
Coreness includes the Core Manager utility for deployment and updates. It configures the environment, database, and containers.
```bash
# Clone the repository
git clone https://github.com/Vensus137/Coreness.git
cd Coreness

# Run Core Manager
python tools/core_manager/core_manager.py
```
On first run the utility will ask for:
- Environment (test / prod)
- Deployment mode (docker / native): native is often easier on Windows; docker is typical on Linux and servers
- Interface language (English / Russian)
Settings are saved in `config/.version`. The menu then offers: system update from GitHub (with migrations and backup), database operations (migrations, backup, restore), utility self-update, and language switching.
Step 2: Create a Tenant
Create `config/tenant/tenant_101/bots/telegram.yaml`:

```yaml
bot_token: "YOUR_BOT_TOKEN_FROM_BOTFATHER"
is_active: true
```
Step 3: Configure Scenario
Create `config/tenant/tenant_101/scenarios/start.yaml`:

```yaml
start:
  trigger:
    - event_type: "message"
      event_text: "/start"
  step:
    - action: "send_message"
      params:
        text: |
          Hello, {first_name}!
          This is a bot powered by Coreness.
        inline:
          - [{"Menu": "menu"}, {"Help": "help"}]
```
Step 4: Sync
If using GitHub sync:
```bash
git add config/tenant/tenant_101/
git commit -m "Add tenant 101"
git push
# The webhook automatically syncs the changes
```
Or manually via the Master Bot:
- Open `master_bot`
- Send `/tenant`
- Enter the tenant ID (101)
- Click "Sync"
That's it! Your bot is live and responding to commands.
Bonus: Adding Payments
Want to monetize your bot? Here's how to add Telegram Stars payments:
```yaml
buy_premium:
  trigger:
    - event_type: "message"
      event_text: "/buy"
  step:
    - action: "create_invoice"
      params:
        title: "Premium Subscription"
        description: "Access to premium features for 1 month"
        amount: 100  # 100 Stars
        currency: "XTR"

handle_pre_checkout:
  trigger:
    - event_type: "pre_checkout_query"
  step:
    - action: "confirm_payment"
      params:
        pre_checkout_query_id: "{pre_checkout_query_id}"
        invoice_payload: "{invoice_payload}"

handle_payment_successful:
  trigger:
    - event_type: "payment_successful"
  step:
    - action: "mark_invoice_as_paid"
      params:
        invoice_payload: "{invoice_payload}"
        telegram_payment_charge_id: "{telegram_payment_charge_id}"
    - action: "set_user_storage"
      params:
        key: "premium_active"
        value: true
    - action: "send_message"
      params:
        text: "Payment successful! Premium activated."
```
The entire payment flow is declarative: no manual payment-handling code needed.
Under the Hood
Tech Stack
- Python 3.11+ with direct Telegram Bot API integration (no aiogram: fewer dependencies, better performance)
- PostgreSQL 16+ with the pgvector extension for RAG (or SQLite for a simplified setup)
- Docker + docker-compose for deployment
- LLM aggregators for model access (OpenAI, Anthropic, Google, DeepSeek via OpenRouter, Azure OpenAI)
Why no aiogram? Working with the Telegram Bot API directly via aiohttp saves resources, runs faster, and pulls in fewer dependencies; all you really need is JSON parsing and HTTP requests.
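To illustrate the point, a Bot API call really is just an HTTP POST with a JSON body. The sketch below uses the synchronous standard library for brevity (Coreness itself makes these calls asynchronously with aiohttp), and `build_call`/`send_message` are illustrative names, not platform internals:

```python
import json
import urllib.request

API_BASE = "https://api.telegram.org"


def build_call(token: str, method: str, params: dict):
    """Build the URL and JSON body for a Telegram Bot API method call."""
    url = f"{API_BASE}/bot{token}/{method}"
    body = json.dumps(params).encode("utf-8")
    return url, body


def send_message(token: str, chat_id: int, text: str) -> dict:
    """One HTTP POST is all a sendMessage call needs (synchronous sketch)."""
    url, body = build_call(token, "sendMessage",
                           {"chat_id": chat_id, "text": text})
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

With a request helper this thin, a framework layer on top mostly adds abstraction you may not need.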
Architecture
Coreness is built on event-driven architecture with clear layer separation:
Telegram → Event Processor → Scenario Engine → Step Executor → Services → Response
Each service is self-contained and communicates through events. No tangled dependency web, just clean vertical slices of functionality.
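The idea can be sketched as a tiny publish/subscribe bus: services register handlers for event types and never import each other directly. The `EventBus` class below is an illustration of the pattern, not the platform's actual implementation:

```python
from collections import defaultdict
from typing import Callable


class EventBus:
    """Minimal publish/subscribe bus decoupling event producers
    from the services that handle them."""

    def __init__(self) -> None:
        self._handlers: dict[str, list[Callable]] = defaultdict(list)

    def subscribe(self, event_type: str, handler: Callable) -> None:
        """Register a handler for an event type."""
        self._handlers[event_type].append(handler)

    def publish(self, event_type: str, payload: dict) -> list:
        """Invoke every handler registered for this event type."""
        return [handler(payload) for handler in self._handlers[event_type]]


bus = EventBus()
bus.subscribe("message", lambda event: f"scenario matched: {event['text']}")
print(bus.publish("message", {"text": "/start"}))
# -> ['scenario matched: /start']
```

Because nothing but the bus connects the pieces, a service can be replaced or added without touching its neighbors.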
Plugin System:
Every feature is a separate plugin in the plugins/ folder. Need integration with an external API? Just write a new plugin and drop it in.
```
plugins/
├── utilities/          # Helper utilities
│   ├── foundation/     # Core (logger, plugins_manager)
│   ├── telegram/       # Telegram utilities
│   └── core/           # Infrastructure (event_processor, database)
└── services/           # Business services
    ├── hub/
    │   ├── telegram/   # Bot management (Telegram)
    │   └── tenant_hub/ # Tenant management
    └── ai_service/     # AI and RAG
```
Plugins are isolated and communicate via events. Add a new plugin and the DI container automatically discovers and wires it; the service then registers its actions through action_hub.
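Conceptually, action registration boils down to a name-to-function registry, which is what lets a YAML step like `action: "send_message"` resolve to code. The decorator and registry below are an illustrative sketch, not the real action_hub API:

```python
from typing import Callable

# Shared registry mapping action names to their implementations
ACTIONS: dict[str, Callable] = {}


def register_action(name: str):
    """Decorator a plugin uses to expose one of its actions by name."""
    def wrap(fn: Callable) -> Callable:
        ACTIONS[name] = fn
        return fn
    return wrap


@register_action("send_message")
def send_message(params: dict) -> str:
    """Toy action: pretend to send a message and report what was sent."""
    return f"sending: {params['text']}"


def run_step(step: dict) -> str:
    """Execute one scenario step by looking up its action in the registry."""
    return ACTIONS[step["action"]](step["params"])


print(run_step({"action": "send_message", "params": {"text": "hi"}}))
# -> sending: hi
```

The scenario engine only ever sees the registry, so dropping a new plugin into `plugins/` is enough to make its actions available to YAML scenarios.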
Performance
- Async processing via asyncio: all operations are non-blocking
- Caching of data and settings: reduces DB load
- Optimized vector search with HNSW indexes (pgvector): fast even on large datasets
- Parallel processing: bots can handle multiple events simultaneously
- Direct Telegram API: no middleware overhead
This matters at scale. One server can handle dozens of bots simultaneously without performance degradation.
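For reference, the pgvector DDL for an HNSW index is a one-liner. The helper below just produces example DDL; the table and column names are illustrative assumptions, not the platform's actual schema:

```python
def hnsw_index_sql(table: str, column: str = "embedding") -> str:
    """Generate example pgvector HNSW index DDL using cosine distance.

    Table and column names are illustrative, not Coreness internals.
    """
    return f"CREATE INDEX ON {table} USING hnsw ({column} vector_cosine_ops);"


print(hnsw_index_sql("knowledge_chunks"))
```

An HNSW index trades a little recall for approximate nearest-neighbor lookups that stay fast as the knowledge base grows, which is what keeps RAG retrieval snappy at scale.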
Security
- Row-Level Security for data isolation: no accidental access to another tenant's data
- Validation via Pydantic: all input parameters are checked against schemas
- Secrets in environment variables: tokens and keys are never stored in code
- Automated DB backups with a configurable interval
- Flexible access control: configure read-only users with access to specific tenants
What's Next
I'm making Coreness open source to grow it with the community. Here's what's on the roadmap:
- Extended RAG capabilities: support for files (PDF, DOCX), improved document processing
- More ready-to-use plugins: integrations with popular APIs
- Simplified Master Bot: a better tenant management interface
- Telegram Mini App: additional management features through a Telegram Mini App
Try It Out
I built Coreness because I needed it for my own projects, and I'm betting others face the same problems. If you're tired of rebuilding bot infrastructure or paying for limited SaaS solutions, give it a try.
GitHub: https://github.com/Vensus137/Coreness
Documentation: docs.coreness.tech
Contact me: @vensus137
- Star the repo if you find it useful
- Open issues or PRs: all feedback helps improve the project
- Share with others who might benefit

I'd love to hear your thoughts and contributions!

Coreness: Create. Automate. Scale.
