<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Sambodhi Singh</title>
    <description>The latest articles on DEV Community by Sambodhi Singh (@tentari).</description>
    <link>https://dev.to/tentari</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3660810%2F5bd4b4bb-44bc-4c6d-9656-ff584d270d7c.png</url>
      <title>DEV Community: Sambodhi Singh</title>
      <link>https://dev.to/tentari</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/tentari"/>
    <language>en</language>
    <item>
      <title>Why CAPTCHAs today are so bad (and what we should be building instead)</title>
      <dc:creator>Sambodhi Singh</dc:creator>
      <pubDate>Sun, 14 Dec 2025 06:31:54 +0000</pubDate>
      <link>https://dev.to/tentari/why-captchas-today-are-so-bad-and-what-we-should-be-building-instead-3dm2</link>
      <guid>https://dev.to/tentari/why-captchas-today-are-so-bad-and-what-we-should-be-building-instead-3dm2</guid>
      <description>&lt;p&gt;Modern CAPTCHAs are meant to stop bots, but in reality they mostly punish humans. Clicking traffic lights, rotating images, or solving puzzles breaks UX, accessibility, and flow — while advanced bots often pass anyway.&lt;/p&gt;

&lt;p&gt;The core problem isn’t implementation. It’s the assumption that users are either “human” or “bot.” Real behavior is probabilistic. Timing, cadence, input entropy, device consistency, and trajectories over time all exist in shades of gray, not absolutes.&lt;/p&gt;

&lt;p&gt;Most CAPTCHA systems hide this uncertainty. But every security decision already depends on configuration: thresholds, confidence levels, and tolerance for risk. Two companies can run the same detection logic and behave completely differently — and that’s not a bug, it’s policy.&lt;/p&gt;
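&lt;p&gt;To make the “same detection, different policy” point concrete, here’s a rough sketch. The signal names, weights, and thresholds are invented for the example, not any real system’s model:&lt;/p&gt;

```python
# A rough illustration only: these signal names, weights, and thresholds
# are invented for the example, not any real system's detection model.
def risk_score(signals):
    """Blend behavioral signals into a probability-like score in [0, 1]."""
    weights = {
        "timing_irregularity": 0.3,   # keystroke/click timing vs. human baselines
        "low_input_entropy": 0.3,     # suspiciously uniform cadence or movement
        "device_inconsistency": 0.2,  # fingerprint drift between requests
        "trajectory_anomaly": 0.2,    # mouse paths too straight, too fast
    }
    total = sum(weights[name] * signals.get(name, 0.0) for name in weights)
    return min(total, 1.0)

def decide(score, policy):
    """Apply a site-specific policy to a shared detection score."""
    if score >= policy["block"]:
        return "block"
    if score >= policy["challenge"]:
        return "challenge"
    return "allow"

# Identical detection logic, different risk tolerance: that difference is policy.
strict_policy = {"challenge": 0.3, "block": 0.6}
lenient_policy = {"challenge": 0.6, "block": 0.9}

signals = {"timing_irregularity": 0.8, "low_input_entropy": 0.5}
score = risk_score(signals)           # 0.39
print(decide(score, strict_policy))   # challenge
print(decide(score, lenient_policy))  # allow
```

&lt;p&gt;Same score, two outcomes; the disagreement lives entirely in the configuration, exactly as it would between two companies running identical detection.&lt;/p&gt;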

&lt;p&gt;I’ve been working on an experimental project called REZO, an invisible behavioral security system that doesn’t pretend to be perfect. Instead of blocking users aggressively, it applies progressive enforcement based on configurable risk tolerance. Detection admits uncertainty, UX degrades gradually, and behavior improves over time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5bke4qx9rvhf243q3872.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5bke4qx9rvhf243q3872.png" alt="REZO" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I’m currently exploring white-label use cases and real-world feedback.&lt;/p&gt;

&lt;p&gt;If this idea interests you or you want to discuss behavioral security:&lt;br&gt;
Discord: pixelhollow&lt;/p&gt;

</description>
      <category>ux</category>
      <category>security</category>
      <category>a11y</category>
      <category>discuss</category>
    </item>
    <item>
      <title>CAPTCHAs punish humans while advanced bots pass. The problem isn’t puzzles — it’s pretending behavior is binary. REZO uses invisible, configurable behavioral signals and progressive enforcement instead. Discord: pixelhollow</title>
      <dc:creator>Sambodhi Singh</dc:creator>
      <pubDate>Sun, 14 Dec 2025 06:16:53 +0000</pubDate>
      <link>https://dev.to/tentari/captchas-punish-humans-while-advanced-bots-pass-the-problem-isnt-puzzles-its-pretending-172p</link>
      <guid>https://dev.to/tentari/captchas-punish-humans-while-advanced-bots-pass-the-problem-isnt-puzzles-its-pretending-172p</guid>
      <description></description>
    </item>
  </channel>
</rss>
