The Problem
Every Telegram community admin deals with spam. Porn bots, scam links, fake airdrops — dozens every day.
Existing solutions like Combot and Rose Bot rely on keyword filtering. The result: real users get banned for saying "investment" in a trading group, while determined spammers slip past the filters with trivial rewording.
We spent years building something better.
Our Approach: Context Over Keywords
Instead of maintaining banned word lists, we built a proprietary AI engine that understands what a message actually means.
The same word can be spam in one chat and perfectly normal in another:
- "Easy money guaranteed" in a cooking group → spam
- "Easy money guaranteed" in a meme group → probably a joke
- "Check this investment" in a crypto group → normal discussion
Our AI understands these differences because it analyzes:
- Message content and intent
- Chat topic and culture
- User behavior history
- Profile metadata (avatar, bio, username patterns)
The Architecture
The system has several layers:
1. Pre-Message Analysis
Before a new user even sends their first message, we analyze:
- Profile photo (NSFW detection, stock photo patterns)
- Bio text (spam indicators, suspicious links)
- Username patterns (random strings, known spam formats)
- Account age and metadata
This catches 40% of spammers before they act.
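The pre-message checks can be sketched as a simple scoring pass over the profile. This is a minimal illustration, not the production logic: the heuristics, markers, and weights below are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Profile:
    username: str
    bio: str
    has_avatar: bool

# Hypothetical markers; the real engine uses a much larger learned signal set.
SUSPICIOUS_BIO_MARKERS = ("t.me/joinchat", "earn $", "dm me")

def pre_message_score(profile: Profile) -> float:
    """Return a 0..1 risk score before the user has sent a single message."""
    score = 0.0
    # Random-string usernames (e.g. "user8271936457") are a common spam format.
    if sum(ch.isdigit() for ch in profile.username) >= 6:
        score += 0.4
    # Spam indicators in the bio text.
    bio = profile.bio.lower()
    if any(marker in bio for marker in SUSPICIOUS_BIO_MARKERS):
        score += 0.4
    # A missing avatar is only a weak signal on its own.
    if not profile.has_avatar:
        score += 0.2
    return min(score, 1.0)
```

High-scoring accounts can be restricted or shadow-checked before their first message ever reaches the chat.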
2. Message Context Engine
Every message goes through our AI with full context:
- What is this chat about?
- What did this user say before?
- Is this message pattern common among spammers?
- Does the content match the chat's normal conversation style?
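Those four questions translate into a context payload handed to the classifier alongside the raw text. The field names here are illustrative; the actual feature set is proprietary.

```python
def build_context(message: str, chat: dict, user_history: list[str],
                  spam_patterns: list[str]) -> dict:
    """Assemble per-message context so the classifier never sees text in isolation."""
    return {
        "text": message,
        # What is this chat about?
        "chat_topic": chat["topic"],
        # What did this user say before?
        "recent_user_messages": user_history[-10:],
        # Is this message pattern common among spammers?
        "matches_known_pattern": any(p in message.lower() for p in spam_patterns),
        # What does the chat's normal conversation style look like?
        "style_reference": chat["recent_messages"][-50:],
    }
```

The same sentence with a cooking-group topic and a crypto-group topic produces different payloads, which is exactly what lets the model treat "easy money guaranteed" differently in each.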
3. Edit Detection
A common spam tactic: post an innocent message, wait an hour, then edit it into a scam link. Our system re-analyzes every edited message. This trick doesn't work.
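The edit check reduces to remembering a content hash per message and re-classifying on any real change. A library-agnostic sketch (handler names and the `classify` callback are placeholders):

```python
import hashlib

seen: dict[int, str] = {}  # message_id -> content hash

def _digest(text: str) -> str:
    return hashlib.sha256(text.encode()).hexdigest()

def on_message(message_id: int, text: str, classify) -> bool:
    """Classify a new message and remember its hash. Returns True if spam."""
    seen[message_id] = _digest(text)
    return classify(text)

def on_edit(message_id: int, new_text: str, classify) -> bool:
    """Re-run classification whenever the content actually changed."""
    if seen.get(message_id) == _digest(new_text):
        return False  # no-op edit (e.g. formatting fix), nothing to re-check
    seen[message_id] = _digest(new_text)
    return classify(new_text)
```

Hashing first avoids paying for a second classification when the edit didn't change the text.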
4. Global Ban Network
This is our key differentiator. When a spammer gets banned in any connected chat, that ban propagates to ALL other chats in the network automatically.
Think of it as collective immunity. Every chat that joins makes the entire network smarter and harder to penetrate.
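In miniature, the propagation logic looks like this. A single-process sketch only; the real network is distributed, and the class and method names are invented for illustration.

```python
class BanNetwork:
    """Minimal sketch of collective ban propagation."""

    def __init__(self) -> None:
        self.chats: dict[str, set[int]] = {}  # chat id -> banned user ids

    def join(self, chat_id: str) -> None:
        # A chat joining the network inherits every ban already known to it.
        existing = set().union(*self.chats.values()) if self.chats else set()
        self.chats[chat_id] = existing

    def ban(self, user_id: int) -> None:
        # A ban in any connected chat propagates to all of them.
        for banned in self.chats.values():
            banned.add(user_id)

    def is_banned(self, chat_id: str, user_id: int) -> bool:
        return user_id in self.chats[chat_id]
```

A spammer banned in one community is already blocked everywhere else, including chats that join later.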
5. Trust System
We don't just look for bad actors — we identify good ones.
Users build trust through:
- Number of normal messages sent
- Spam reputation score across the network
- Behavior consistency over time
Trusted users are never bothered by false positives. New users get gradually more freedom as they prove themselves.
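The trust ladder can be illustrated as a function of those three signals. The thresholds and weights below are invented for the sketch; the production model is learned, not hand-tuned.

```python
def trust_level(normal_messages: int, network_spam_reports: int,
                days_active: int) -> str:
    """Map a user's history to a trust tier (illustrative thresholds)."""
    if network_spam_reports > 0:
        return "restricted"  # any cross-network spam report resets trust
    # Cap each signal so no single one dominates.
    score = min(normal_messages, 200) + min(days_active, 100) * 2
    if score >= 300:
        return "trusted"     # exempt from checks that could false-positive
    if score >= 50:
        return "normal"      # standard checks apply
    return "new"             # full scrutiny until a track record exists
```

New accounts start at full scrutiny and climb tiers as the signals accumulate, which is the "verify and gradually trust" behavior described above.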
6. Fingerprint System
Even when spammers create fresh accounts, their behavior patterns are recognizable. Same message timing, similar text structures, comparable targeting patterns.
Our fingerprint system identifies these patterns and blocks them proactively.
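A toy version of the idea: reduce an account's activity to a small behavioral vector and compare vectors across accounts. The features here (timing, text structure, link density) echo the signals named above, but the concrete feature set and similarity metric are assumptions for the sketch.

```python
def fingerprint(events: list[dict]) -> tuple:
    """Reduce a user's messages to a behavioral vector.

    Each event is {"ts": unix_seconds, "text": str} (hypothetical shape).
    """
    texts = [e["text"] for e in events]
    gaps = [b["ts"] - a["ts"] for a, b in zip(events, events[1:])]
    return (
        round(sum(gaps) / len(gaps)) if gaps else 0,  # message timing
        round(sum(map(len, texts)) / len(texts)),     # text structure (avg length)
        sum("http" in t for t in texts),              # link density
    )

def same_actor(fp_a: tuple, fp_b: tuple, tolerance: int = 5) -> bool:
    """Two fresh accounts with near-identical vectors are likely one spammer."""
    return all(abs(a - b) <= tolerance for a, b in zip(fp_a, fp_b))
```

Because the vector is built from behavior rather than identity, creating a new account does not reset it.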
Results
After years of development and testing across hundreds of active communities:
- 99.7% spam detection accuracy
- Near-zero false positive rate
- Sub-second decision time
- Millions of messages processed
Technical Decisions That Mattered
Proprietary AI over third-party APIs
General-purpose language models don't understand Telegram culture, crypto slang, or community dynamics. We built our own engine specifically for this domain.
Global network over isolated instances
Most moderation bots treat each chat independently. Shared intelligence is exponentially more powerful.
Trust-based over rule-based
Instead of "block everything suspicious," we take the approach of "verify and gradually trust." This all but eliminates false positives.
Edit monitoring
To our knowledge, no other Telegram moderation tool re-checks edited messages. This single feature catches an entire category of sophisticated spam.
What's Next
We're expanding beyond text — voice message analysis, image content understanding, and behavioral prediction models.
The spam arms race never ends, but with AI that actually understands context, we're winning.
ModerAI is part of the PersonymAI platform. Try it free for 7 days: personym-ai.com
If you're building something similar or have questions about the architecture, I'm happy to discuss in the comments.