<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: PersonymAi</title>
    <description>The latest articles on DEV Community by PersonymAi (@personymai).</description>
    <link>https://dev.to/personymai</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3842956%2Faa311b51-9de6-4620-b1b5-f09e9ce05fca.jpeg</url>
      <title>DEV Community: PersonymAi</title>
      <link>https://dev.to/personymai</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/personymai"/>
    <language>en</language>
    <item>
      <title>Transparent Moderation: We Now Show Why We Ban</title>
      <dc:creator>PersonymAi</dc:creator>
      <pubDate>Sat, 11 Apr 2026 16:39:40 +0000</pubDate>
      <link>https://dev.to/personymai/transparent-moderation-we-now-show-why-we-ban-5c90</link>
      <guid>https://dev.to/personymai/transparent-moderation-we-now-show-why-we-ban-5c90</guid>
      <description>&lt;p&gt;Today we rolled out a significant improvement to PersonymAi Moderator.&lt;br&gt;
Every time a user is banned, the system now displays a detailed banner containing:&lt;br&gt;
•  The user who was banned&lt;br&gt;
•  The exact reason for the ban&lt;br&gt;
•  Spam Score (0–100%)&lt;br&gt;
The message itself automatically disappears after 60 seconds to keep the chat clean, while the full information remains visible to admins.&lt;br&gt;
This gives administrators complete clarity into the moderation logic without compromising chat cleanliness.&lt;br&gt;
No more guessing. No black boxes. Just transparent, explainable moderation.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqbvuv9bcl30u6mqbltav.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqbvuv9bcl30u6mqbltav.jpeg" alt=" " width="800" height="452"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>ai</category>
      <category>programming</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>How We Detect Reaction Spam in Telegram Using Behavioral Scoring</title>
      <dc:creator>PersonymAi</dc:creator>
      <pubDate>Tue, 07 Apr 2026 14:19:41 +0000</pubDate>
      <link>https://dev.to/personymai/how-we-detect-reaction-spam-in-telegram-using-behavioral-scoring-5158</link>
      <guid>https://dev.to/personymai/how-we-detect-reaction-spam-in-telegram-using-behavioral-scoring-5158</guid>
      <description>&lt;p&gt;Most Telegram anti-spam bots are built around one assumption: spammers&lt;br&gt;
write messages. So we match text against patterns, run it through NLP,&lt;br&gt;
check for suspicious links. But what happens when the spammer sends no&lt;br&gt;
text at all?&lt;/p&gt;

&lt;p&gt;Reaction spam is exactly that. A bot joins your group silently, then&lt;br&gt;
floods every post with 🤡, 18+, and gambling emojis — harming your&lt;br&gt;
channel's reputation without triggering a single keyword filter.&lt;/p&gt;

&lt;p&gt;Our approach at ModerAI: instead of analyzing message content, we score&lt;br&gt;
behavioral signals. Things like — does this user react but never comment?&lt;br&gt;
Are they reacting from outside the group via channel post comments? Does&lt;br&gt;
their bio contain obfuscated text patterns? Each signal contributes to a&lt;br&gt;
spam probability score. Cross a threshold — you get restricted.&lt;br&gt;
No text needed.&lt;/p&gt;
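&lt;p&gt;The behavioral scoring described above can be sketched as a weighted sum of boolean signals. This is a minimal illustration: the signal names, weights, and threshold below are assumptions made for the sketch, not ModerAI's production values.&lt;/p&gt;

```python
# Illustrative behavioral scoring: names, weights, and threshold
# are assumptions, not ModerAI's real configuration.
SIGNAL_WEIGHTS = {
    "reacts_but_never_comments": 0.4,
    "reacts_from_outside_group": 0.3,
    "obfuscated_bio_text": 0.3,
}

RESTRICT_THRESHOLD = 0.6

def spam_score(signals: dict) -> float:
    """Sum the weights of every signal that fired, capped at 1.0."""
    score = sum(w for name, w in SIGNAL_WEIGHTS.items() if signals.get(name))
    return min(score, 1.0)

def should_restrict(signals: dict) -> bool:
    return spam_score(signals) >= RESTRICT_THRESHOLD

# A silent reactor with an obfuscated bio crosses the threshold
# (0.4 + 0.3 = 0.7), while a single signal alone does not.
print(should_restrict({"reacts_but_never_comments": True,
                       "obfuscated_bio_text": True}))        # True
print(should_restrict({"reacts_but_never_comments": True}))  # False
```

&lt;p&gt;Each new signal slots in as one more weight; no message text is touched anywhere in the scoring.&lt;/p&gt;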

&lt;p&gt;What behavioral signals have you found most reliable for detecting&lt;br&gt;
non-text spam in group chats?&lt;/p&gt;

</description>
      <category>telegram</category>
      <category>ai</category>
      <category>machinelearning</category>
      <category>antispam</category>
    </item>
    <item>
      <title>Building a Product You Can Never Demo Publicly</title>
      <dc:creator>PersonymAi</dc:creator>
      <pubDate>Mon, 06 Apr 2026 17:49:37 +0000</pubDate>
      <link>https://dev.to/personymai/building-a-product-you-can-never-demo-publicly-2048</link>
      <guid>https://dev.to/personymai/building-a-product-you-can-never-demo-publicly-2048</guid>
      <description>&lt;p&gt;What happens when your product's core value proposition requires absolute secrecy about who uses it?&lt;/p&gt;

&lt;p&gt;At PersonymAI, we built an AI system that generates natural Telegram comments using 1,000+ unique personas — each with distinct writing styles, opinions, and behavioral patterns. The result? Neither marketers nor advertisers can tell it apart from real conversation.&lt;/p&gt;

&lt;p&gt;Our clients use this to maintain active-looking communities and sell advertising. But this creates a fundamental marketing paradox: showing a real client channel as a case study would immediately undermine the product's value for that client.&lt;/p&gt;

&lt;p&gt;We chose NDA over growth metrics. Every client is protected. No public channels, no named testimonials, no before/after reveals.&lt;/p&gt;

&lt;p&gt;How do you market a product that works best when nobody knows it exists? Curious to hear how others in the AI space handle similar transparency trade-offs.&lt;/p&gt;

</description>
      <category>saas</category>
      <category>ai</category>
      <category>telegram</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Implementing 3-Tier Moderation for Telegram Bots</title>
      <dc:creator>PersonymAi</dc:creator>
      <pubDate>Tue, 31 Mar 2026 14:23:04 +0000</pubDate>
      <link>https://dev.to/personymai/implementing-3-tier-moderation-for-telegram-bots-1ne4</link>
      <guid>https://dev.to/personymai/implementing-3-tier-moderation-for-telegram-bots-1ne4</guid>
      <description>&lt;p&gt;Binary spam detection (spam or not spam) breaks down in active communities. A forwarded giveaway could be spam or a legitimate user sharing excitement. A message saying "write me" could be a scam CTA or an angry user. We rebuilt our Telegram moderation pipeline into three action tiers using AI for intent classification: tier 1 (ban) for clear spam with profit intent, tier 2 (mute + admin buttons) for ambiguous cases, and tier 3 (3-strike warnings) for links and forwards. The system also auto-detects chat language from linked channel posts using character-set heuristics (їєґ → Ukrainian, ыэъ → Russian). How do you handle the gray area between spam and legitimate messages in your moderation systems?&lt;/p&gt;

</description>
      <category>telegram</category>
      <category>python</category>
      <category>ai</category>
      <category>devdiscuss</category>
    </item>
    <item>
      <title>Connecting a Context-Aware Telegram Moderation Bot in 5 Steps</title>
      <dc:creator>PersonymAi</dc:creator>
      <pubDate>Mon, 30 Mar 2026 10:00:11 +0000</pubDate>
      <link>https://dev.to/personymai/connecting-a-context-aware-telegram-moderation-bot-in-5-steps-33je</link>
      <guid>https://dev.to/personymai/connecting-a-context-aware-telegram-moderation-bot-in-5-steps-33je</guid>
      <description>&lt;p&gt;Most Telegram moderation bots run on keyword blocklists — easy to bypass, high false-positive rate, zero context awareness. ModerAI takes a different approach: you describe your group's topic in plain text, and the NLP pipeline uses that as a context window when classifying messages. The 15-layer AI stack handles the rest — no rule-building, no regex. How does a natural-language topic description feed into a spam classification pipeline at scale?&lt;/p&gt;

</description>
      <category>telegram</category>
      <category>automation</category>
      <category>ai</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Building a Fairer Anti-Spam System: How We Handle Links, Warnings, and New Chats</title>
      <dc:creator>PersonymAi</dc:creator>
      <pubDate>Sat, 28 Mar 2026 10:54:40 +0000</pubDate>
      <link>https://dev.to/personymai/building-a-fairer-anti-spam-system-how-we-handle-links-warnings-and-new-chats-1icn</link>
      <guid>https://dev.to/personymai/building-a-fairer-anti-spam-system-how-we-handle-links-warnings-and-new-chats-1icn</guid>
      <description>&lt;p&gt;just shipped three changes to our Telegram anti-spam bot (ModerAI) that fundamentally change how we handle edge cases. Here's what we built and why.&lt;/p&gt;

&lt;p&gt;The Problem With Binary Decisions&lt;br&gt;
Most anti-spam bots make binary decisions: spam or not spam. Ban or allow.&lt;/p&gt;

&lt;p&gt;This creates two failure modes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;False positives — legitimate users banned for having a link in their bio&lt;/li&gt;
&lt;li&gt;False negatives — spammers who learn the rules and work around them&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We needed a middle ground.&lt;/p&gt;

&lt;p&gt;Change 1: Contextual Bio Link Analysis&lt;/p&gt;

&lt;p&gt;Before:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;if "t.me/" in user.bio:
    ban(user)  # crude but effective... and unfair
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;After:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;link_target = analyze_link_context(user.bio)
if link_target.category in ["spam_channel", "scam", "adult"]:
    ban(user)
elif link_target.category in ["game_referral", "personal_channel", "community"]:
    allow(user)  # legitimate use case
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;AI analyzes what the link actually points to. A Hamster Kombat referral? Fine. A channel selling "guaranteed 500% returns"? Ban.&lt;/p&gt;

&lt;p&gt;Change 2: Progressive Warning System&lt;/p&gt;

&lt;p&gt;Instead of ban-on-first-offense, we implemented a 3-strike system:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Strike 1: delete message + warn ("you have 2 attempts left")&lt;/li&gt;
&lt;li&gt;Strike 2: delete message + warn ("you have 1 attempt left")&lt;/li&gt;
&lt;li&gt;Strike 3: ban&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Exception: edited message → instant ban (no strikes)&lt;/p&gt;

&lt;p&gt;The edit detection is key. Spammers who post "Hello everyone!" then edit to a scam link 5 minutes later get zero warnings. This pattern is always intentional.&lt;/p&gt;

&lt;p&gt;Change 3: Fresh Chat Grace Period&lt;/p&gt;

&lt;p&gt;When ModerAI connects to a new chat, it has zero context. Every user is "unknown."&lt;/p&gt;

&lt;p&gt;Aggressive bio scoring on day 1 would ban half the existing members. So we added a 48-hour grace period:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;from datetime import timedelta

chat_age = now() - chat.connected_at

if chat_age &amp;lt; timedelta(hours=48):
    # Relaxed mode: skip suspicious bio scoring
    # Still ban critical threats (adult, drugs, obvious scam)
    if threat_level == "critical":
        ban(user)
    else:
        allow(user)  # gather data first
else:
    # Normal mode: full scoring pipeline
    run_full_analysis(user)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;After 48 hours, the bot has enough context to make accurate decisions.&lt;/p&gt;

&lt;p&gt;Results&lt;/p&gt;

&lt;p&gt;These changes reduced our false positive rate from ~0.3% to ~0.1% while maintaining 99.7% spam detection.&lt;/p&gt;

&lt;p&gt;The key insight: fairness and accuracy aren't opposites. A system that gives legitimate users the benefit of the doubt can still be ruthless with actual spammers — you just need smarter decision-making, not stricter rules.&lt;/p&gt;

&lt;p&gt;ModerAI: $9/month per chat. 7-day free trial.&lt;/p&gt;

&lt;p&gt;→ personym-ai.com/moderator-ai&lt;/p&gt;

&lt;p&gt;Questions about the implementation? Happy to discuss in the comments.&lt;/p&gt;

</description>
      <category>telegram</category>
      <category>ai</category>
      <category>antispam</category>
      <category>webdev</category>
    </item>
    <item>
      <title>How We Built Voice and Image Spam Detection for Telegram (Technical Deep Dive)</title>
      <dc:creator>PersonymAi</dc:creator>
      <pubDate>Fri, 27 Mar 2026 12:59:39 +0000</pubDate>
      <link>https://dev.to/personymai/how-we-built-voice-and-image-spam-detection-for-telegram-technical-deep-dive-5051</link>
      <guid>https://dev.to/personymai/how-we-built-voice-and-image-spam-detection-for-telegram-technical-deep-dive-5051</guid>
      <description>&lt;p&gt;Last month we shipped three features that no other Telegram anti-spam bot has: voice message analysis, image spam detection, and anti-masking intelligence. Here's how we built them.&lt;/p&gt;

&lt;p&gt;The Problem&lt;/p&gt;

&lt;p&gt;Spammers evolved. Our text-based pipeline was catching 99.7% of text spam. So spammers stopped using text:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Voice messages with gambling/scam ads&lt;/li&gt;
&lt;li&gt;Images with overlaid promotional text&lt;/li&gt;
&lt;li&gt;Text with emoji inserted between every character&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Traditional keyword filtering, and even AI text analysis, is blind to all three.&lt;/p&gt;

&lt;p&gt;Voice Message Pipeline&lt;br&gt;
Architecture:&lt;/p&gt;

&lt;p&gt;Voice message received&lt;br&gt;
→ Download .ogg file from Telegram API&lt;br&gt;
→ Transcribe (speech-to-text)&lt;br&gt;
→ Feed transcript into existing anti-spam pipeline&lt;br&gt;
→ Same AI context analysis as text messages&lt;br&gt;
→ Decision: ban / warn / allow&lt;/p&gt;

&lt;p&gt;Key decisions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We transcribe everything under 5 minutes (covers 99% of spam voice notes)&lt;/li&gt;
&lt;li&gt;Transcription runs async, so it doesn't block the moderation pipeline&lt;/li&gt;
&lt;li&gt;The transcript gets the same 8-layer analysis as text: whitelist → global ban → reputation → trust → fingerprint → rules → AI context → decision&lt;/li&gt;
&lt;li&gt;Language detection handles Russian, Ukrainian, and English voice messages&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The result: a 15-second voice note saying "free betting tips, guaranteed profit" gets transcribed, classified as gambling spam, and the user gets banned — all within 3-5 seconds.&lt;/p&gt;

&lt;p&gt;Image Spam Detection&lt;br&gt;
Architecture:&lt;/p&gt;

&lt;p&gt;Image received (photo or document)&lt;br&gt;
→ Send to Vision AI&lt;br&gt;
→ Analyze: is there promotional/spam text in the image?&lt;br&gt;
→ If spam detected → classify category → ban&lt;br&gt;
→ If clean → allow&lt;/p&gt;

&lt;p&gt;What Vision AI catches:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Screenshots of fake profit/portfolio charts with channel links&lt;/li&gt;
&lt;li&gt;Photos with overlaid text advertising gambling/crypto&lt;/li&gt;
&lt;li&gt;Casino/betting ad graphics&lt;/li&gt;
&lt;li&gt;Profile avatars that are literally advertisements&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We only trigger Vision analysis on images from untrusted users (trust score below threshold). Trusted members' images pass through without Vision analysis, which saves cost and reduces latency.&lt;/p&gt;

&lt;p&gt;Anti-Masking&lt;br&gt;
Spammers discovered they could bypass keyword filters with tricks like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;З🎰а🎰р🎰а🎰б🎰о🎰т🎰о🎰к (emoji between letters)&lt;/li&gt;
&lt;li&gt;3аработок (the digit 3 instead of the letter З)&lt;/li&gt;
&lt;li&gt;Зaрaботок (Latin 'a' instead of Cyrillic 'а')&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Our approach:&lt;/p&gt;

&lt;p&gt;Raw message text&lt;br&gt;
→ Strip emoji and special characters&lt;br&gt;
→ Normalize Unicode (Cyrillic/Latin homoglyphs)&lt;br&gt;
→ Normalize number→letter substitutions&lt;br&gt;
→ Feed cleaned text into AI analysis&lt;br&gt;
→ AI evaluates meaning, not characters&lt;/p&gt;
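&lt;p&gt;As a minimal sketch of that normalization chain (the homoglyph table below is a tiny illustrative subset we made up for the example, not the full production mapping):&lt;/p&gt;

```python
import re
import unicodedata

# Illustrative subset of a homoglyph/substitution table
# (assumption: the real table is far larger).
HOMOGLYPHS = str.maketrans({
    "a": "а", "e": "е", "o": "о", "p": "р", "c": "с", "x": "х",  # Latin -> Cyrillic
    "3": "з", "0": "о",                                          # digit -> letter
})

def normalize(text: str) -> str:
    # 1. Strip emoji and other symbol/control characters between letters.
    text = "".join(ch for ch in text
                   if unicodedata.category(ch)[0] not in ("S", "C"))
    # 2. Lowercase, then map Latin homoglyphs and digit substitutions.
    text = text.lower().translate(HOMOGLYPHS)
    # 3. Collapse leftover whitespace.
    return re.sub(r"\s+", " ", text).strip()

# All three masked variants collapse to the same clean token:
print(normalize("З🎰а🎰р🎰а🎰б🎰о🎰т🎰о🎰к"))  # заработок
print(normalize("3аработок"))                 # заработок
print(normalize("Зaрaботок"))                 # заработок
```

&lt;p&gt;The cleaned token is what feeds the AI layer, which then judges meaning in context.&lt;/p&gt;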

&lt;p&gt;The AI layer is the key — even after normalization, context matters. "Заработок" in a freelance group is normal. "Заработок" in a cooking group is spam. Same word, different context, different decision.&lt;/p&gt;

&lt;p&gt;Updated Pipeline (11 layers)&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Whitelist check           → free, 0ms&lt;/li&gt;
&lt;li&gt;Global ban check          → free, 0ms&lt;/li&gt;
&lt;li&gt;Reputation auto-ban       → free, 0ms&lt;/li&gt;
&lt;li&gt;Trust system check        → free, 0ms&lt;/li&gt;
&lt;li&gt;Anti-masking normalization → free, 1ms&lt;/li&gt;
&lt;li&gt;Fingerprint matching      → free, 1ms&lt;/li&gt;
&lt;li&gt;Rule-based detection      → free, 0ms&lt;/li&gt;
&lt;li&gt;Voice transcription       → if voice message&lt;/li&gt;
&lt;li&gt;Vision AI analysis        → if image from untrusted user&lt;/li&gt;
&lt;li&gt;AI context analysis      → for edge cases&lt;/li&gt;
&lt;li&gt;Decision                 → ban / mute / allow&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Cheapest checks first. AI and Vision only for what cheaper layers can't decide.&lt;/p&gt;
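&lt;p&gt;That cheapest-first ordering is essentially a short-circuiting chain of layers: the first layer confident enough to return a verdict ends the run, so the expensive AI layers only ever see what the free checks could not decide. A minimal sketch (the layer implementations here are stubs invented for illustration):&lt;/p&gt;

```python
# Each layer returns "ban"/"allow" when confident, or None to pass
# the message down to the next (more expensive) layer.
def whitelist_check(msg):
    return "allow" if msg.get("whitelisted") else None

def global_ban_check(msg):
    return "ban" if msg.get("globally_banned") else None

def rule_based(msg):
    return "ban" if "casino" in msg.get("text", "") else None

LAYERS = [whitelist_check, global_ban_check, rule_based]  # cheapest first

def run_pipeline(msg, layers=LAYERS):
    for layer in layers:
        verdict = layer(msg)
        if verdict is not None:
            return verdict  # short-circuit: later layers never run
    return "allow"  # no layer objected

print(run_pipeline({"whitelisted": True, "text": "casino bonus"}))  # allow
print(run_pipeline({"text": "casino bonus"}))                       # ban
print(run_pipeline({"text": "hello"}))                              # allow
```

&lt;p&gt;Voice transcription and Vision analysis slot into the same chain as conditional layers near the end.&lt;/p&gt;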

&lt;p&gt;Results&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Voice spam: from 0% detection → ~95% detection&lt;/li&gt;
&lt;li&gt;Image spam: from 0% detection → ~90% detection&lt;/li&gt;
&lt;li&gt;Masked text: from ~60% → ~95% detection&lt;/li&gt;
&lt;li&gt;Overall accuracy: maintained at 99.7%&lt;/li&gt;
&lt;li&gt;False positive rate: still near zero&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What's Next&lt;/p&gt;

&lt;p&gt;Video message analysis is on the roadmap. Spammers will try video next — we'll be ready.&lt;/p&gt;

&lt;p&gt;→ personym-ai.com/moderator-ai&lt;br&gt;
→ Try free for 7 days&lt;/p&gt;

</description>
      <category>telegram</category>
      <category>ai</category>
      <category>computervision</category>
      <category>whisper</category>
    </item>
    <item>
      <title>How We Increased Telegram Channel Comments by 340% in 2 Weeks (Real Examples)</title>
      <dc:creator>PersonymAi</dc:creator>
      <pubDate>Thu, 26 Mar 2026 08:34:40 +0000</pubDate>
      <link>https://dev.to/personymai/how-we-increased-telegram-channel-comments-by-340-in-2-weeks-real-examples-389l</link>
      <guid>https://dev.to/personymai/how-we-increased-telegram-channel-comments-by-340-in-2-weeks-real-examples-389l</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;Every Telegram admin knows the feeling. You spend hours creating a quality post. You publish it. And then... silence. Zero comments. Maybe one emoji reaction.&lt;/p&gt;

&lt;p&gt;Your subscribers see the silence too. And they leave.&lt;/p&gt;

&lt;p&gt;This is the cold-start problem — and it kills more Telegram channels than bad content ever will.&lt;/p&gt;
&lt;h3&gt;
  
  
  The Death Spiral
&lt;/h3&gt;

&lt;p&gt;Here's what actually happens:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;New post → 0 comments&lt;/li&gt;
&lt;li&gt;Subscribers see empty comment section → "dead channel"&lt;/li&gt;
&lt;li&gt;They don't engage → less reason for others to engage&lt;/li&gt;
&lt;li&gt;Growth stalls → subscribers slowly leave&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;We've seen channels with 10,000+ subscribers averaging 0-2 comments per post. The content was great. The engagement was dead.&lt;/p&gt;
&lt;h3&gt;
  
  
  What We Tested
&lt;/h3&gt;

&lt;p&gt;We connected PersonymAI to 5 channels in different niches (crypto, news, tech, motivation, entertainment) and tracked results for 2 weeks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Setup took 10 minutes per channel:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Connected the bot&lt;/li&gt;
&lt;li&gt;Selected persona types that match the niche&lt;/li&gt;
&lt;li&gt;Set comment frequency and timing&lt;/li&gt;
&lt;li&gt;Done&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  The Results (2 weeks)
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;Before&lt;/th&gt;
&lt;th&gt;After&lt;/th&gt;
&lt;th&gt;Change&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Avg. comments per post&lt;/td&gt;
&lt;td&gt;1.3&lt;/td&gt;
&lt;td&gt;5.7&lt;/td&gt;
&lt;td&gt;+340%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Unique real commenters per week&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;17&lt;/td&gt;
&lt;td&gt;+325%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Avg. time to first comment&lt;/td&gt;
&lt;td&gt;4+ hours&lt;/td&gt;
&lt;td&gt;2 minutes&lt;/td&gt;
&lt;td&gt;-99%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Subscriber retention rate&lt;/td&gt;
&lt;td&gt;82%&lt;/td&gt;
&lt;td&gt;94%&lt;/td&gt;
&lt;td&gt;+12%&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
&lt;h3&gt;
  
  
  Why It Works
&lt;/h3&gt;

&lt;p&gt;The key insight: &lt;strong&gt;real people comment more when they see others commenting.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Nobody wants to be the first commenter on a silent post. But when there's already a discussion happening — arguments, jokes, reactions — people naturally want to join in.&lt;/p&gt;

&lt;p&gt;That's exactly what our AI personas do. They don't post "Great content! 👍". They have actual opinions:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Crypto channel example:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Persona "Skeptic": "BTC at 95K and people are still calling for 150K? Show me the on-chain data."&lt;/li&gt;
&lt;li&gt;Persona "Degen": "95K is nothing, we're going to 200K easy 🚀🚀🚀"&lt;/li&gt;
&lt;li&gt;Persona "Analyst": "Support at 92.5K looks solid. If we hold above 94K on the daily close, 100K is realistic."&lt;/li&gt;
&lt;li&gt;Real subscriber joins: "Analyst is right, the RSI is showing..."&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;That last line is the magic.&lt;/strong&gt; A real person saw a discussion and wanted to participate. That would never happen under a silent post.&lt;/p&gt;
&lt;h3&gt;
  
  
  What Makes Personas Feel Real
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Each persona has a consistent personality that evolves over time&lt;/li&gt;
&lt;li&gt;They argue with each other (not just agree)&lt;/li&gt;
&lt;li&gt;35%+ of comments are short (1-5 words) like real chat&lt;/li&gt;
&lt;li&gt;They use stickers, GIFs, and reactions&lt;/li&gt;
&lt;li&gt;They reply in threads, not just flat comments&lt;/li&gt;
&lt;li&gt;They reference the actual post content, not generic reactions&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  The Unexpected Effect
&lt;/h3&gt;

&lt;p&gt;After about a week, something interesting happened across all 5 channels: &lt;strong&gt;organic comments started growing on their own.&lt;/strong&gt; Even on posts where AI personas hadn't commented yet, real subscribers were more active than before.&lt;/p&gt;

&lt;p&gt;Why? Because the channel no longer &lt;em&gt;felt&lt;/em&gt; dead. Subscribers had been retrained to expect discussions. The culture changed from "lurking" to "participating."&lt;/p&gt;
&lt;h3&gt;
  
  
  Niche-Specific Results
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Niche&lt;/th&gt;
&lt;th&gt;Best persona types&lt;/th&gt;
&lt;th&gt;Avg. organic comment growth&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Crypto/Trading&lt;/td&gt;
&lt;td&gt;Analyst + Degen + Skeptic&lt;/td&gt;
&lt;td&gt;+420%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;News&lt;/td&gt;
&lt;td&gt;Hot-take + Moderate + Cynic&lt;/td&gt;
&lt;td&gt;+280%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Tech&lt;/td&gt;
&lt;td&gt;Builder + Critic + Enthusiast&lt;/td&gt;
&lt;td&gt;+310%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Motivation&lt;/td&gt;
&lt;td&gt;Supporter + Realist + Joker&lt;/td&gt;
&lt;td&gt;+190%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Entertainment&lt;/td&gt;
&lt;td&gt;Memer + Troll + Fan&lt;/td&gt;
&lt;td&gt;+380%&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Crypto performed best because the audience is already opinionated — they just needed a spark.&lt;/p&gt;
&lt;h3&gt;
  
  
  For Admins Who Also Deal With Spam
&lt;/h3&gt;

&lt;p&gt;Half of these channels also had a spam problem. While testing the comment system, we also ran ModerAI anti-spam on the groups.&lt;/p&gt;

&lt;p&gt;Results:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;99.7% spam caught&lt;/li&gt;
&lt;li&gt;0 false bans in 2 weeks&lt;/li&gt;
&lt;li&gt;Admins spent zero time on moderation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;ModerAI has a 7-day free trial — connect it and forget about spam.&lt;/p&gt;
&lt;h3&gt;
  
  
  How To Try It
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;AI Comments:&lt;/strong&gt; Check plans at &lt;a href="https://personym-ai.com" rel="noopener noreferrer"&gt;personym-ai.com&lt;/a&gt; — connect your channel and see first comments within minutes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ModerAI Anti-Spam:&lt;/strong&gt; &lt;a href="https://personym-ai.com/moderator-ai" rel="noopener noreferrer"&gt;personym-ai.com/moderator-ai&lt;/a&gt; — 7-day free trial, no credit card.&lt;/p&gt;

&lt;p&gt;If you're an admin who cares about your community — you know the difference between a dead channel and a living one. We help you make that switch.&lt;/p&gt;

&lt;p&gt;→ &lt;a href="https://personym-ai.com" rel="noopener noreferrer"&gt;personym-ai.com&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>telegram</category>
      <category>marketing</category>
      <category>startup</category>
      <category>ai</category>
    </item>
    <item>
      <title>From Stealth to Launch: How We Built AI Tools for Telegram (and Why We're Going Public Today)</title>
      <dc:creator>PersonymAi</dc:creator>
      <pubDate>Thu, 26 Mar 2026 07:48:34 +0000</pubDate>
      <link>https://dev.to/personymai/from-stealth-to-launch-how-we-built-ai-tools-for-telegram-and-why-were-going-public-today-39h2</link>
      <guid>https://dev.to/personymai/from-stealth-to-launch-how-we-built-ai-tools-for-telegram-and-why-were-going-public-today-39h2</guid>
      <description>&lt;p&gt;For over a year, our company didn't have a public website, a Twitter account, or a Product Hunt page. We were building AI tools for Telegram under NDA with international clients.&lt;/p&gt;

&lt;p&gt;Today we're launching publicly. Here's what we learned.&lt;/p&gt;

&lt;p&gt;The Technical Challenge&lt;/p&gt;

&lt;p&gt;We built two products:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;AI Comment System — generates natural Telegram discussions using 1,000+ persistent AI personas.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The hard part wasn't generating text. It was making it feel human. We solved this with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Multi-pass quality pipeline (generate → self-check → enforce style)&lt;/li&gt;
&lt;li&gt;Opinion Drift — personas change views gradually over time&lt;/li&gt;
&lt;li&gt;65-85% threaded replies (real conversations, not flat comments)&lt;/li&gt;
&lt;li&gt;35%+ short comments (1-5 words, like real chat)&lt;/li&gt;
&lt;li&gt;Typing emulation and post-reading behavior&lt;/li&gt;
&lt;/ul&gt;

&lt;ol start="2"&gt;
&lt;li&gt;ModerAI Anti-Spam — context-aware spam detection for Telegram groups.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Traditional bots use keyword matching. We use an 8-layer pipeline:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Whitelist check (free, instant)&lt;/li&gt;
&lt;li&gt;Global ban check (free, instant)&lt;/li&gt;
&lt;li&gt;Reputation scoring (3+ bans = auto-ban)&lt;/li&gt;
&lt;li&gt;Trust system (skips 90-95% of legit messages)&lt;/li&gt;
&lt;li&gt;Fuzzy text fingerprinting&lt;/li&gt;
&lt;li&gt;39+ rule-based patterns&lt;/li&gt;
&lt;li&gt;AI context analysis (only for edge cases)&lt;/li&gt;
&lt;li&gt;Final decision&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Result: 99.7% accuracy, near-zero false positives.&lt;/p&gt;

&lt;p&gt;What We Learned&lt;/p&gt;

&lt;p&gt;NDA work is great for building, terrible for marketing. We had a proven product and zero public presence. Starting from scratch on SEO, social, and community after a year of silence is harder than building the tech.&lt;/p&gt;

&lt;p&gt;Keyword-based spam detection is fundamentally broken. The same word means different things in different contexts. AI context analysis isn't just better — it's a different category.&lt;/p&gt;

&lt;p&gt;Persistence matters more than personality in AI personas. The biggest unlock wasn't making each persona unique — it was making them consistent over time.&lt;/p&gt;

&lt;p&gt;Launch Day&lt;/p&gt;

&lt;p&gt;We're live on Product Hunt today: &lt;a href="https://www.producthunt.com/posts/personymai/maker-invite?code=wMwcDO" rel="noopener noreferrer"&gt;https://www.producthunt.com/posts/personymai/maker-invite?code=wMwcDO&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you're interested in Telegram tooling, AI-powered content generation, or anti-spam systems, I'm happy to go deeper on any of these topics.&lt;/p&gt;

&lt;p&gt;→ personym-ai.com&lt;/p&gt;

</description>
      <category>ai</category>
      <category>telegram</category>
      <category>startup</category>
      <category>producthunt</category>
    </item>
    <item>
      <title>Lessons From Processing Millions of Telegram Messages: What We Learned About Spam</title>
      <dc:creator>PersonymAi</dc:creator>
      <pubDate>Wed, 25 Mar 2026 10:18:09 +0000</pubDate>
      <link>https://dev.to/personymai/lessons-from-processing-millions-of-telegram-messages-what-we-learned-about-spam-5g86</link>
      <guid>https://dev.to/personymai/lessons-from-processing-millions-of-telegram-messages-what-we-learned-about-spam-5g86</guid>
      <description>&lt;p&gt;We've spent years building an AI anti-spam system for Telegram. After processing millions of messages across hundreds of communities, here are the patterns we discovered.&lt;/p&gt;

&lt;h2&gt;
  
  
  Spam Has Evolved
&lt;/h2&gt;

&lt;p&gt;Forget the obvious stuff — links, ALL CAPS, "CLICK HERE FOR FREE MONEY."&lt;/p&gt;

&lt;p&gt;Modern Telegram spammers are sophisticated:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Edit Trick&lt;/strong&gt;&lt;br&gt;
Post a normal message. Wait an hour. Edit it into a scam link. Most bots never re-check edited messages. We learned this the hard way and built edit monitoring into our core pipeline.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Trust Builder&lt;/strong&gt;&lt;br&gt;
Join a group. Post 5-10 normal messages over a few days. Build credibility. Then drop the spam. Keyword filters can't catch this because the spam message itself might look innocent — it's the pattern that's suspicious.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Avatar Bait&lt;/strong&gt;&lt;br&gt;
Create accounts with provocative profile photos. Join groups. Post nothing — the avatar itself is the spam (drives clicks to the profile with links in bio). This requires pre-message analysis that most bots don't do.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Multi-Account Wave&lt;/strong&gt;&lt;br&gt;
Hit a group with 20 different accounts in 5 minutes. Even if the admin bans them, the damage is done — members saw the spam. Speed of response matters more than accuracy here.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Keyword Filtering Gets Wrong
&lt;/h2&gt;

&lt;p&gt;We analyzed false positive rates across traditional moderation bots. The results were painful:&lt;/p&gt;

&lt;p&gt;The word "investment" triggers bans in 73% of keyword-based bots. But in crypto and trading groups, it's used in normal conversation hundreds of times per day.&lt;/p&gt;

&lt;p&gt;"Free" is flagged by 61% of bots. But "free trial", "free tier", and "free update" are perfectly legitimate.&lt;/p&gt;

&lt;p&gt;The fundamental problem: &lt;strong&gt;context determines whether a message is spam, not individual words.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Global Network Effect
&lt;/h2&gt;

&lt;p&gt;Our biggest insight came from connecting multiple chats into a shared ban network.&lt;/p&gt;

&lt;p&gt;When Chat A bans a spammer, Chats B through Z know about it instantly. The spammer can't just move to the next group.&lt;/p&gt;
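&lt;p&gt;Conceptually, the shared ban network is a set of banned identities replicated to every connected chat. A minimal in-memory sketch (a real deployment would need a shared store and push updates; that design is our assumption for the example, not a description of the production system):&lt;/p&gt;

```python
class BanNetwork:
    """Toy shared ban list: one ban report makes the user known
    to every chat that consults the network."""

    def __init__(self):
        self.banned_user_ids = set()

    def report_ban(self, user_id: int) -> None:
        # Chat A bans -> the whole network learns instantly.
        self.banned_user_ids.add(user_id)

    def is_banned(self, user_id: int) -> bool:
        # Chat B checks on join or on first message.
        return user_id in self.banned_user_ids

network = BanNetwork()
network.report_ban(42)        # spammer banned in Chat A
print(network.is_banned(42))  # True: Chat B blocks them on sight
print(network.is_banned(7))   # False: unknown users pass through
```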

&lt;p&gt;After connecting 100+ chats:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;New spam accounts were blocked on first appearance in 89% of cases&lt;/li&gt;
&lt;li&gt;The average time to neutralize a spam wave dropped from 15 minutes to under 30 seconds&lt;/li&gt;
&lt;li&gt;False positive rate decreased as more data flowed through the network&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The network gets smarter with every chat added: each new chat both contributes bans and benefits from every other chat's data, so the protection compounds as the network grows rather than scaling linearly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Trust Is Better Than Rules
&lt;/h2&gt;

&lt;p&gt;Early versions of our system were too aggressive. We caught spam, but we also annoyed legitimate users.&lt;/p&gt;

&lt;p&gt;The breakthrough was shifting from "block suspicious behavior" to "build and track trust."&lt;/p&gt;

&lt;p&gt;Every user in our system has a trust score based on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Message count and quality&lt;/li&gt;
&lt;li&gt;Behavior consistency over time&lt;/li&gt;
&lt;li&gt;Reputation across the network&lt;/li&gt;
&lt;li&gt;Account age and profile completeness&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;High-trust users are never bothered. New users get gradually more freedom. Spammers never build enough trust to bypass the system.&lt;/p&gt;
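&lt;p&gt;As a rough sketch, a trust score over the four signals above could look like this. The weights and thresholds here are illustrative assumptions, not our real values:&lt;/p&gt;

```python
# Hypothetical weighted trust score over the four signals listed above.
def trust_score(message_quality, consistency, network_reputation, account_age):
    """All inputs normalized to 0..1; returns a score in 0..100."""
    weights = {"quality": 0.35, "consistency": 0.25,
               "reputation": 0.25, "age": 0.15}
    score = (weights["quality"] * message_quality +
             weights["consistency"] * consistency +
             weights["reputation"] * network_reputation +
             weights["age"] * account_age)
    return round(score * 100)

def moderation_strictness(score):
    """High-trust users get leniency; new accounts get watched closely."""
    if score >= 80:
        return "lenient"
    if score >= 40:
        return "standard"
    return "strict"

assert moderation_strictness(trust_score(0.9, 0.9, 0.8, 1.0)) == "lenient"
```

&lt;p&gt;The key design choice is that strictness is a function of trust, not a fixed rule set: the same message can pass from a long-standing member and trip a check from a day-old account.&lt;/p&gt;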

&lt;p&gt;This reduced false positives to near zero while maintaining 99.7% detection accuracy.&lt;/p&gt;

&lt;h2&gt;
  
  
  Fingerprinting Beyond Accounts
&lt;/h2&gt;

&lt;p&gt;Banning an account is easy. Banning a person is hard.&lt;/p&gt;

&lt;p&gt;Spammers create new accounts constantly. But their behavior patterns are remarkably consistent:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Message timing intervals&lt;/li&gt;
&lt;li&gt;Text structure and formatting habits&lt;/li&gt;
&lt;li&gt;Target selection patterns&lt;/li&gt;
&lt;li&gt;Time-of-day activity profiles&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Our fingerprint system identifies these patterns even across completely new accounts. It doesn't matter if the username and phone number are different — the behavior signature matches.&lt;/p&gt;
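&lt;p&gt;One way to sketch that matching: represent each account as a behavioral feature vector and compare vectors with cosine similarity. The features, values, and 0.9 threshold below are illustrative assumptions:&lt;/p&gt;

```python
import math

# Compare behavioral fingerprints with cosine similarity.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Features: mean seconds between messages, mean message length,
# links per message, peak activity hour (normalized to 0..1).
old_account = [4.2, 180.0, 0.9, 0.75]
new_account = [4.0, 175.0, 0.9, 0.75]   # fresh username, same habits

if cosine(old_account, new_account) > 0.9:
    print("behavior signature matches a known spammer")
```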

&lt;h2&gt;
  
  
  Numbers After Years of Production
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;99.7%&lt;/strong&gt; spam detection accuracy&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;~0%&lt;/strong&gt; false positive rate&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sub-second&lt;/strong&gt; average decision time&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Millions&lt;/strong&gt; of messages analyzed&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Hundreds&lt;/strong&gt; of active communities protected&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;p&gt;Spam evolves constantly. We're working on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Voice message spam detection&lt;/li&gt;
&lt;li&gt;Image and media content analysis&lt;/li&gt;
&lt;li&gt;Predictive blocking (identifying potential spammers before they act)&lt;/li&gt;
&lt;li&gt;Cross-platform intelligence sharing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The arms race never ends, but with AI that understands context and a network that shares intelligence, the defenders finally have the advantage.&lt;/p&gt;




&lt;p&gt;ModerAI is part of PersonymAI. If you manage Telegram communities and want to test it: &lt;a href="https://personym-ai.com" rel="noopener noreferrer"&gt;personym-ai.com&lt;/a&gt; — 7 days free.&lt;/p&gt;

&lt;p&gt;Questions about our architecture or approach? Happy to discuss below.&lt;/p&gt;

</description>
      <category>security</category>
      <category>ai</category>
      <category>telegram</category>
      <category>python</category>
    </item>
    <item>
      <title>Building AI Personas That Sound Human: Our Approach to Telegram Engagement</title>
      <dc:creator>PersonymAi</dc:creator>
      <pubDate>Wed, 25 Mar 2026 10:15:34 +0000</pubDate>
      <link>https://dev.to/personymai/building-ai-personas-that-sound-human-our-approach-to-telegram-engagement-157k</link>
      <guid>https://dev.to/personymai/building-ai-personas-that-sound-human-our-approach-to-telegram-engagement-157k</guid>
      <description>&lt;p&gt;Dead comment sections kill Telegram channels. You post great content — zero reactions. New subscribers see silence and leave.&lt;/p&gt;

&lt;p&gt;We solved this by building AI personas that engage like real people. Here's how.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem With Generic AI Comments
&lt;/h2&gt;

&lt;p&gt;Everyone has seen them: "Great post! Thanks for sharing! Very informative!"&lt;/p&gt;

&lt;p&gt;These comments are worse than no comments at all. They scream "bot" and destroy trust.&lt;/p&gt;

&lt;p&gt;Real Telegram chats look nothing like this. Real people write "lol", argue with each other, use slang, drop stickers, and type one-word reactions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Our Persona Architecture
&lt;/h2&gt;

&lt;p&gt;Every AI account in PersonymAI has a persistent identity:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Personality traits:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Writing style — formal, casual, slang-heavy, emoji addict, minimalist&lt;/li&gt;
&lt;li&gt;Opinion pattern — bullish, bearish, contrarian, neutral&lt;/li&gt;
&lt;li&gt;Aggression — scale of 0 to 100 (polite analyst to aggressive degen)&lt;/li&gt;
&lt;li&gt;Language — strict Ukrainian, Russian, surzhyk, or mixed&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Behavioral rules:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Each account has a unique typing speed&lt;/li&gt;
&lt;li&gt;Some accounts comment early, others are late reactors&lt;/li&gt;
&lt;li&gt;Some reply more than they initiate&lt;/li&gt;
&lt;li&gt;Sticker usage varies per account (around 15% of comments include stickers overall)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Two accounts never produce the same output for the same input. Ever.&lt;/p&gt;
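&lt;p&gt;The shape of a persona profile can be sketched as a plain dataclass. The field names and sample values are assumptions for illustration, not the actual PersonymAI schema:&lt;/p&gt;

```python
from dataclasses import dataclass

# Illustrative persona profile combining the traits and rules above.
@dataclass(frozen=True)
class Persona:
    writing_style: str       # "formal", "casual", "slang", "emoji", "minimal"
    opinion: str             # "bullish", "bearish", "contrarian", "neutral"
    aggression: int          # 0 (polite analyst) to 100 (aggressive degen)
    language: str            # "uk", "ru", "surzhyk", "mixed"
    typing_speed_cps: float  # unique chars-per-second for this account
    sticker_rate: float      # share of comments that include a sticker

analyst = Persona("formal", "neutral", 10, "uk", 6.5, 0.05)
degen = Persona("slang", "bullish", 85, "surzhyk", 9.0, 0.15)
assert analyst != degen  # no two accounts share an identity
```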

&lt;h2&gt;
  
  
  Opinion Drift
&lt;/h2&gt;

&lt;p&gt;Static personas feel fake within a week. Real people change their minds.&lt;/p&gt;

&lt;p&gt;We implemented Opinion Drift — accounts gradually shift their positions over time. A bullish account won't suddenly turn bearish overnight. Instead, sentiment shifts slowly based on market conditions and community reactions.&lt;/p&gt;
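&lt;p&gt;Mechanically, drift like this can be sketched as an exponential moving average: each step, sentiment moves a small fraction toward observed conditions instead of flipping. The 0.05 drift rate is an illustrative assumption:&lt;/p&gt;

```python
# Opinion Drift sketch: sentiment takes small steps toward observations.
def drift(current, observed, rate=0.05):
    """Sentiment on a -1 (bearish) to +1 (bullish) scale."""
    return current + rate * (observed - current)

sentiment = 0.8                  # strongly bullish persona
for _ in range(30):              # a month of bearish market signals
    sentiment = drift(sentiment, -0.5)

# The account has cooled off noticeably but has not flipped overnight:
assert 0.8 > sentiment > -0.5
```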

&lt;p&gt;This creates realistic long-term behavior that's indistinguishable from organic users.&lt;/p&gt;

&lt;h2&gt;
  
  
  The 3-Pass Quality Pipeline
&lt;/h2&gt;

&lt;p&gt;Raw AI output is never good enough. Our pipeline:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pass 1: Generation&lt;/strong&gt;&lt;br&gt;
Our proprietary AI generates comments using the post content, channel niche, and persona profile. A 1400+ line prompt system handles persona injection and niche-specific terminology.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pass 2: Self-Check&lt;/strong&gt;&lt;br&gt;
Two groups of 5 validation rules:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Group A: topic relevance, persona consistency, language correctness&lt;/li&gt;
&lt;li&gt;Group B: length check, style verification, repetition detection&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Pass 3: Short Enforce&lt;/strong&gt;&lt;br&gt;
Real chats have lots of ultra-short messages. We enforce that 35%+ of comments are 1-5 words. Making AI write "lol" instead of a paragraph is surprisingly hard.&lt;/p&gt;
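&lt;p&gt;A simplified sketch of that enforcement step. The reaction list and replacement strategy are illustrative, not the production logic:&lt;/p&gt;

```python
import math
import random

# Short Enforce sketch: if fewer than 35% of a batch's comments are
# 1-5 words long, the longest ones are swapped for short reactions.
SHORT_REACTIONS = ["lol", "based", "this", "nah", "same tbh"]

def is_short(comment):
    return len(comment.split()) in range(1, 6)  # 1-5 words

def enforce_short(comments, min_ratio=0.35):
    comments = list(comments)
    need = math.ceil(min_ratio * len(comments))
    short_count = sum(is_short(c) for c in comments)
    # Walk comments from longest to shortest, replacing until quota is met.
    by_length = sorted(range(len(comments)),
                       key=lambda i: len(comments[i].split()), reverse=True)
    for i in by_length:
        if short_count >= need:
            break
        comments[i] = random.choice(SHORT_REACTIONS)
        short_count += 1
    return comments

batch = enforce_short([
    "a long analytical take on the market today",
    "another long paragraph style comment here",
    "lol",
])
```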

&lt;h2&gt;
  
  
  The Cleanup Layer
&lt;/h2&gt;

&lt;p&gt;Even after 3 passes, bot patterns leak through:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Unnecessary dashes and formal punctuation → removed&lt;/li&gt;
&lt;li&gt;English slang in Russian comments → caught and fixed&lt;/li&gt;
&lt;li&gt;ALL CAPS overuse → toned down&lt;/li&gt;
&lt;li&gt;"I've been in crypto since 2021" → classic AI pattern, blocked&lt;/li&gt;
&lt;li&gt;Analytical tone from a "degen" persona → detected and rewritten&lt;/li&gt;
&lt;/ul&gt;
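&lt;p&gt;A cleanup pass built from patterns like these can be sketched with a few regexes. The specific rules below are illustrative, not our production rule set:&lt;/p&gt;

```python
import re

# Illustrative cleanup pass over generated comments.
AI_TELLS = [
    # Formal "word dash word" constructions (space-padded dashes only,
    # so hyphenated words survive).
    (re.compile(r"\s+[\u2013\u2014-]\s+"), " "),
    # Classic AI backstory filler, removed outright.
    (re.compile(r"I've been in crypto since \d{4}\.?", re.I), ""),
]

def cleanup(comment):
    for pattern, repl in AI_TELLS:
        comment = pattern.sub(repl, comment)
    # Tone down ALL CAPS words of 4+ letters (tickers like BTC survive).
    comment = re.sub(r"\b[A-Z]{4,}\b", lambda m: m.group().lower(), comment)
    return comment.strip()

print(cleanup("HONESTLY this pump is wild"))  # honestly this pump is wild
```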

&lt;h2&gt;
  
  
  Threading: The Secret Sauce
&lt;/h2&gt;

&lt;p&gt;65-85% of our comments are threaded replies. Accounts don't just comment — they argue with each other, agree, joke, and create sub-discussions.&lt;/p&gt;

&lt;p&gt;A flat wall of independent comments looks artificial. A thread where someone says "you're wrong" and gets three different responses looks real.&lt;/p&gt;
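&lt;p&gt;The reply-versus-top-level decision can be sketched like this. The 0.75 default reflects the 65-85% reply share above; the function and parameter names are assumptions:&lt;/p&gt;

```python
import random

# Sketch: choose between a threaded reply and a top-level comment.
def pick_action(existing_comments, reply_ratio=0.75, rng=random):
    """Return ("reply", target_comment) or ("top_level", None)."""
    if existing_comments and rng.random() > (1 - reply_ratio):
        # Bias toward recent comments so threads keep their momentum.
        target = rng.choice(existing_comments[-5:])
        return ("reply", target)
    return ("top_level", None)

# With no comments yet, the only option is starting the discussion:
assert pick_action([]) == ("top_level", None)
```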

&lt;h2&gt;
  
  
  Real-Time Context
&lt;/h2&gt;

&lt;p&gt;For crypto channels, our system integrates live market data — prices, volumes, 24h changes. Comments reference actual numbers and current events.&lt;/p&gt;

&lt;p&gt;"BTC just broke 67k" hits different than "the market is showing positive movement."&lt;/p&gt;

&lt;h2&gt;
  
  
  Results
&lt;/h2&gt;

&lt;p&gt;Channels using our system report:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;300% increase in organic engagement within the first month&lt;/li&gt;
&lt;li&gt;40% higher subscriber retention&lt;/li&gt;
&lt;li&gt;Natural-looking discussions from day one&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The bridge between an empty channel and a thriving community is shorter than you think.&lt;/p&gt;




&lt;p&gt;We're always looking for feedback from the dev community. What would you want to see in an AI engagement system?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://personym-ai.com" rel="noopener noreferrer"&gt;PersonymAI&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>python</category>
      <category>telegram</category>
      <category>machinelearning</category>
    </item>
  </channel>
</rss>
