<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Axit</title>
    <description>The latest articles on DEV Community by Axit (@axitslab).</description>
    <link>https://dev.to/axitslab</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3843835%2Feba844df-95ad-4beb-b92e-d15bf2f2f5ed.jpeg</url>
      <title>DEV Community: Axit</title>
      <link>https://dev.to/axitslab</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/axitslab"/>
    <language>en</language>
    <item>
      <title>22 React Components That Don't Exist Anywhere Else</title>
      <dc:creator>Axit</dc:creator>
      <pubDate>Fri, 03 Apr 2026 11:19:50 +0000</pubDate>
      <link>https://dev.to/axitslab/22-react-components-that-dont-exist-anywhere-else-4c59</link>
      <guid>https://dev.to/axitslab/22-react-components-that-dont-exist-anywhere-else-4c59</guid>
      <description>&lt;p&gt;Buttons are boring. Modals are lazy. "Are you sure? [Yes] [No]" is a crime against UX.&lt;/p&gt;

&lt;p&gt;Here are 22 React components that make users &lt;strong&gt;feel&lt;/strong&gt; something. Every one of them is used in production on &lt;a href="https://aumiqx.com" rel="noopener noreferrer"&gt;aumiqx.com&lt;/a&gt;. Every one is open source. Every one is something you've never seen in a component library before.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem with Every CTA on the Web
&lt;/h2&gt;

&lt;p&gt;Every website uses the same interaction model: click a button, see a modal, confirm with another button. There's zero physical feedback. Zero intentionality. Zero proof that the user actually meant to do what they did.&lt;/p&gt;

&lt;p&gt;What if your "Delete Account" button required the user to &lt;strong&gt;lift a safety cover and flip a switch&lt;/strong&gt;? What if your payment confirmation needed a &lt;strong&gt;2-second hold&lt;/strong&gt; while a progress bar fills? What if your sign-up CTA was hidden behind &lt;strong&gt;particles you had to sweep aside&lt;/strong&gt;?&lt;/p&gt;

&lt;p&gt;These aren't gimmicks. They're &lt;strong&gt;intentional friction&lt;/strong&gt; — interactions that scale with the importance of the action.&lt;/p&gt;

&lt;h2&gt;
  
  
  Gesture CTAs
&lt;/h2&gt;

&lt;p&gt;The star of the library. Six components that replace boring buttons with physical interactions.&lt;/p&gt;

&lt;h3&gt;
  
  
  SlideToUnlock
&lt;/h3&gt;

&lt;p&gt;Drag the handle to the end. Spring physics snaps it back if released early. Like iOS unlock, but for any action.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;SlideToUnlock&lt;/span&gt;
  &lt;span class="na"&gt;label&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"slide to deploy"&lt;/span&gt;
  &lt;span class="na"&gt;onUnlock&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;deploy&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Use case:&lt;/strong&gt; High-intent CTAs where you want to prevent accidental triggers. Payment confirmations, destructive actions, important form submissions.&lt;/p&gt;

&lt;h3&gt;
  
  
  HardwareSwitch
&lt;/h3&gt;

&lt;p&gt;Two-step confirmation: first lift the safety cover (animated with CSS 3D transforms), then toggle the switch underneath. Two deliberate actions. No accidental triggers.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;HardwareSwitch&lt;/span&gt;
  &lt;span class="na"&gt;onActivate&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;arm&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
  &lt;span class="na"&gt;label&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"launch sequence"&lt;/span&gt;
&lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Use case:&lt;/strong&gt; Nuclear-level confirmations. Database deletions, production deployments, account termination. The two-step interaction proves the user is serious.&lt;/p&gt;

&lt;h3&gt;
  
  
  HoldToConfirm
&lt;/h3&gt;

&lt;p&gt;Press and hold for 2 seconds. A progress bar fills the button. Release early and it resets. No modals, no extra clicks.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;HoldToConfirm&lt;/span&gt;
  &lt;span class="na"&gt;duration&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="mi"&gt;2000&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
  &lt;span class="na"&gt;onConfirm&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;execute&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Use case:&lt;/strong&gt; Replace every "Are you sure?" modal on your site. The hold duration IS the confirmation. Time-based friction that scales with importance.&lt;/p&gt;

&lt;h3&gt;
  
  
  LeverCTA
&lt;/h3&gt;

&lt;p&gt;A tension wire. Pull it down. Watch it stretch. Keep going until it snaps. The snap triggers the action.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;LeverCTA&lt;/span&gt;
  &lt;span class="na"&gt;label&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"pull to confirm"&lt;/span&gt;
  &lt;span class="na"&gt;onSnap&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;confirm&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
  &lt;span class="na"&gt;threshold&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="mi"&gt;120&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Use case:&lt;/strong&gt; Forms, subscriptions, any action where you want the user to feel the weight of their decision. Satisfying in a way buttons never are.&lt;/p&gt;

&lt;h3&gt;
  
  
  ResonanceCTA
&lt;/h3&gt;

&lt;p&gt;The screen is noise. Move your mouse in circles. The noise clears. The CTA appears underneath. Users who find it earned it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;ResonanceCTA&lt;/span&gt;
  &lt;span class="na"&gt;label&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"you found it."&lt;/span&gt;
  &lt;span class="na"&gt;onReveal&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;router&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;push&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/contact&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Use case:&lt;/strong&gt; Landing page hero sections where you want to reward curious users. The interaction itself filters for engaged visitors.&lt;/p&gt;

&lt;h3&gt;
  
  
  AlchemistCTA
&lt;/h3&gt;

&lt;p&gt;A field of particles. Push them aside with your cursor. Under the dust: a hidden call-to-action. Like clearing fog to find treasure.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;AlchemistCTA&lt;/span&gt;
  &lt;span class="na"&gt;label&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"start building"&lt;/span&gt;
  &lt;span class="na"&gt;particleCount&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
  &lt;span class="na"&gt;onReveal&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;revealed.&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Use case:&lt;/strong&gt; Hero sections, portfolio sites, product reveals. The particle interaction creates a sense of discovery.&lt;/p&gt;

&lt;h2&gt;
  
  
  Layout &amp;amp; Glass
&lt;/h2&gt;

&lt;h3&gt;
  
  
  GlassCard
&lt;/h3&gt;

&lt;p&gt;Hover it. Watch it lift with spring physics. The gradient overlay catches light like actual glass. We use this for every card on aumiqx.com.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;GlassCard&lt;/span&gt; &lt;span class="na"&gt;hover&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;h3&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;anything goes here.&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;h3&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;it just looks better in glass.&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nc"&gt;GlassCard&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  LiquidGlass
&lt;/h3&gt;

&lt;p&gt;The frosted glass pill that started everything. Backdrop blur, inner shadows, saturation boost. Our navigation bar uses it: look at the top of this page right now.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;LiquidGlass&lt;/span&gt; &lt;span class="na"&gt;className&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"px-6 py-3"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;frosted. blurred. alive.&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nc"&gt;LiquidGlass&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  TerminalCompare
&lt;/h3&gt;

&lt;p&gt;Before and after, but make it terminal. Drag the divider. Watch chaos become clarity. Perfect for showcasing transformations.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;TerminalCompare&lt;/span&gt;
  &lt;span class="na"&gt;before&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;title&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;before&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;items&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[...]&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
  &lt;span class="na"&gt;after&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;title&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;after&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;items&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[...]&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Animation
&lt;/h2&gt;

&lt;h3&gt;
  
  
  TextReveal
&lt;/h3&gt;

&lt;p&gt;Words materialize one by one as you scroll. Character mode for when you want to be dramatic. Zero layout shift, accessible.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;TextReveal&lt;/span&gt;
  &lt;span class="na"&gt;text&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"the components you wish existed."&lt;/span&gt;
  &lt;span class="na"&gt;wordBased&lt;/span&gt;
  &lt;span class="na"&gt;delay&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="mf"&gt;0.2&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  AnimatedCounter
&lt;/h3&gt;

&lt;p&gt;Numbers that count up from 0 when they scroll into view. Spring easing so they decelerate naturally. Configurable suffix and duration.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;AnimatedCounter&lt;/span&gt;
  &lt;span class="na"&gt;target&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="mi"&gt;2847&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
  &lt;span class="na"&gt;suffix&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;" emails sent"&lt;/span&gt;
  &lt;span class="na"&gt;duration&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="mf"&gt;1.5&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  SectionWrapper
&lt;/h3&gt;

&lt;p&gt;Every section on aumiqx.com uses this. Viewport-triggered fade-in. Optional full-height. Handles the Intersection Observer internally so you never have to think about it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;SectionWrapper&lt;/span&gt; &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"hero"&lt;/span&gt; &lt;span class="na"&gt;fullHeight&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;h1&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;everything inside fades in.&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;h1&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nc"&gt;SectionWrapper&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Terminal Components
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Terminal Emulator
&lt;/h3&gt;

&lt;p&gt;A full pseudo-shell with 25+ commands, filesystem simulation, tab autocomplete, and command history. Not xterm.js — this is a simulated terminal designed for marketing, docs, and product demos.&lt;/p&gt;

&lt;p&gt;Supports &lt;code&gt;ls&lt;/code&gt;, &lt;code&gt;cd&lt;/code&gt;, &lt;code&gt;cat&lt;/code&gt;, &lt;code&gt;run&lt;/code&gt;, &lt;code&gt;status&lt;/code&gt;, &lt;code&gt;whoami&lt;/code&gt;, &lt;code&gt;help&lt;/code&gt;, &lt;code&gt;clear&lt;/code&gt;, and 17 more commands, each with contextual output, colored terminal lines, and easter eggs.&lt;/p&gt;
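&lt;p&gt;Under the hood, any simulated shell reduces to a command table plus a history buffer. A minimal sketch (&lt;code&gt;runCommand&lt;/code&gt; and the handlers here are illustrative, not this component's actual API):&lt;/p&gt;

```typescript
// Sketch of command dispatch for a simulated shell. runCommand and
// these handlers are illustrative assumptions, not the real API.
type Handler = (args: string[]) => string

const handlers: { [cmd: string]: Handler } = {
  whoami: () => "guest@aumiqx",
  echo: (args) => args.join(" "),
  help: () => Object.keys(handlers).sort().join("  "),
}

const history: string[] = [] // enables arrow-key recall

function runCommand(input: string): string {
  const [cmd, ...args] = input.trim().split(/\s+/)
  history.push(input)
  const handler = handlers[cmd]
  return handler ? handler(args) : `command not found: ${cmd}`
}
```

Unknown input falls through to a shell-style error line, which is also where easter eggs would hook in.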

&lt;h3&gt;
  
  
  Terminal Form
&lt;/h3&gt;

&lt;p&gt;A sequential form presented as a terminal interface: seven questions with validation, multiple choice, a progress bar, and a submission animation. Far more engaging than traditional form fields.&lt;/p&gt;
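&lt;p&gt;The sequential flow boils down to a cursor over a validated question list. A rough sketch (the shapes and names below are assumptions; the component's real props aren't documented here):&lt;/p&gt;

```typescript
// Assumed shape for a sequential terminal form; the real component's
// props are not documented in the post.
interface Question {
  prompt: string
  validate: (answer: string) => boolean
}

const questions: Question[] = [
  { prompt: "name?", validate: (a) => a.trim().length > 0 },
  { prompt: "email?", validate: (a) => a.includes("@") },
]

let step = 0

// Feed one answer at a time; invalid input repeats the question,
// valid input advances and reports progress.
function answer(input: string): string {
  if (!questions[step].validate(input)) return "invalid, try again"
  step++
  if (step === questions.length) return "done (100%)"
  const pct = Math.round((step / questions.length) * 100)
  return `${pct}% -> ${questions[step].prompt}`
}
```

The progress percentage doubles as the progress-bar value, so the UI needs no extra state.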

&lt;h3&gt;
  
  
  Cmd+K Navigation
&lt;/h3&gt;

&lt;p&gt;A full command palette with Fuse.js fuzzy search across every page on your site. Keyboard navigation, categorized results (news, tools, cities, pages), and freshness indicators. Always available via Cmd+K.&lt;/p&gt;
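&lt;p&gt;Fuse.js does the real ranking; to show the idea, here's a toy in-order subsequence matcher, where shorter gaps between matched characters rank higher:&lt;/p&gt;

```typescript
// Toy fuzzy matcher, for illustration only (the palette itself uses
// Fuse.js). A query matches when its characters appear in order in
// the candidate; fewer skipped characters means a higher score.
function fuzzyScore(query: string, candidate: string): number {
  const q = query.toLowerCase()
  const c = candidate.toLowerCase()
  let qi = 0 // next query char to match
  let gaps = 0 // characters skipped between matches
  let last = -1 // index of previous match
  for (let ci = 0; ci !== c.length; ci++) {
    if (qi === q.length) break
    if (c[ci] === q[qi]) {
      if (last !== -1) gaps += ci - last - 1
      last = ci
      qi++
    }
  }
  return qi === q.length ? 1 / (1 + gaps) : 0
}

function search(query: string, pages: string[]): string[] {
  return pages
    .map((page) => ({ page, score: fuzzyScore(query, page) }))
    .filter((r) => r.score !== 0)
    .sort((a, b) => b.score - a.score)
    .map((r) => r.page)
}
```

Fuse.js adds weighted keys, typo tolerance, and match highlighting on top of this basic idea.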

&lt;h2&gt;
  
  
  Open Source Primitives
&lt;/h2&gt;

&lt;p&gt;Beyond components, we're building foundational libraries:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;a href="https://aumiqx.com/labs/gesture/" rel="noopener noreferrer"&gt;@aumiqx/gesture&lt;/a&gt;&lt;/strong&gt; — Pure-math gesture recognition. ~6KB, zero deps. Classify taps, swipes, flicks from raw coordinates without the DOM.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;a href="https://aumiqx.com/labs/scroll/" rel="noopener noreferrer"&gt;@aumiqx/scroll&lt;/a&gt;&lt;/strong&gt; — Programmable scroll physics. ~4KB, zero deps. Per-section friction, magnetic snap, configurable mass.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;@aumiqx/pixels&lt;/strong&gt; (coming soon) — React-to-image without a browser. Yog + Pretext + Skia WASM.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
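&lt;p&gt;To make "classify gestures from raw coordinates, without the DOM" concrete, here's a rough sketch in the spirit of &lt;code&gt;@aumiqx/gesture&lt;/code&gt; (the &lt;code&gt;classify&lt;/code&gt; signature and every threshold are assumptions, not the package's API):&lt;/p&gt;

```typescript
// Rough DOM-free gesture classification from raw pointer samples.
// classify(), Sample, and the thresholds are assumptions; the
// package's real API is not shown in the post.
interface Sample { x: number; y: number; t: number } // t in ms

function classify(samples: Sample[]): string {
  if (samples.length === 0) return "none"
  const first = samples[0]
  const last = samples[samples.length - 1]
  const dist = Math.hypot(last.x - first.x, last.y - first.y)
  const dt = Math.max(last.t - first.t, 1)
  const speed = dist / dt // average px per ms
  if (dist > 10) {
    if (speed > 1) return "flick" // fast, directional release
    if (dist > 30) return "swipe" // slower but deliberate travel
    return "none"
  }
  if (dt > 300) return "none" // held too long to be a tap
  return "tap"
}
```

Because it's pure math over coordinate samples, the same classifier works for mouse, touch, or any synthetic input stream.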

&lt;h2&gt;
  
  
  The Battle Test
&lt;/h2&gt;

&lt;p&gt;Visit &lt;a href="https://aumiqx.com/ui/" rel="noopener noreferrer"&gt;aumiqx.com/ui&lt;/a&gt; and you'll see a side-by-side comparison: a boring HTML &lt;code&gt;&amp;lt;button&amp;gt;Click me&amp;lt;/button&amp;gt;&lt;/code&gt; next to our SlideToUnlock component. Same action. Completely different feel. The contrast sells itself.&lt;/p&gt;

&lt;p&gt;Every gesture CTA on the page has a &lt;strong&gt;live inline demo&lt;/strong&gt; — you can drag, flip, hold, and pull right on the page. No CodeSandbox, no iframe. The demo IS the component.&lt;/p&gt;

&lt;h2&gt;
  
  
  Install
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt; @aumiqx/ui
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or copy-paste individual components — they're all self-contained files with zero inter-dependencies.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aumiqx.com/ui/" rel="noopener noreferrer"&gt;Live Gallery&lt;/a&gt; | &lt;a href="https://github.com/aumiqx/ui" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; | &lt;a href="https://www.npmjs.com/package/@aumiqx/ui" rel="noopener noreferrer"&gt;npm&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;MIT licensed. Framer Motion + Tailwind CSS. Dark theme optimized. Tree-shakeable.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Built by &lt;a href="https://aumiqx.com" rel="noopener noreferrer"&gt;Aumiqx&lt;/a&gt; — we build AI agents, workflow automations, and open-source tools. These components power our own site.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>react</category>
      <category>typescript</category>
      <category>ui</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Every Website on Earth Uses the Same Scroll Physics. Why?</title>
      <dc:creator>Axit</dc:creator>
      <pubDate>Fri, 03 Apr 2026 11:19:25 +0000</pubDate>
      <link>https://dev.to/axitslab/every-website-on-earth-uses-the-same-scroll-physics-why-3ek8</link>
      <guid>https://dev.to/axitslab/every-website-on-earth-uses-the-same-scroll-physics-why-3ek8</guid>
      <description>&lt;p&gt;Open any website. Scroll. Now open a different website. Scroll again.&lt;/p&gt;

&lt;p&gt;Same feel. Same momentum. Same friction. Same inertia.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Every website on Earth uses identical scroll physics&lt;/strong&gt; because the browser's scroll engine is a black box. You can listen to events, smooth them (Lenis), animate things on scroll (GSAP ScrollTrigger). But you cannot change how scroll &lt;em&gt;feels&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Why should a long-form article scroll at the same speed as a photo gallery? Why should a CTA section fly past at the same momentum as your hero?&lt;/p&gt;

&lt;h2&gt;
  
  
  Introducing @aumiqx/scroll
&lt;/h2&gt;

&lt;p&gt;A programmable scroll physics engine for the web. &lt;strong&gt;~4KB. Zero dependencies.&lt;/strong&gt; Pure computation — it calculates where scroll should be; you decide what moves.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;ScrollEngine&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@aumiqx/scroll&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;engine&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;ScrollEngine&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;mass&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;1.2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;          &lt;span class="c1"&gt;// heavier = more momentum&lt;/span&gt;
  &lt;span class="na"&gt;friction&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;0.95&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;     &lt;span class="c1"&gt;// lower = stops faster&lt;/span&gt;
  &lt;span class="na"&gt;zones&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;start&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;end&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;600&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;friction&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;0.82&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;     &lt;span class="c1"&gt;// hero: heavy, cinematic&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;start&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;600&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;end&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1200&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;friction&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;0.975&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;  &lt;span class="c1"&gt;// gallery: featherlight&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;start&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1200&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;end&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1800&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;snap&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;       &lt;span class="c1"&gt;// pricing: magnetic snap&lt;/span&gt;
  &lt;span class="p"&gt;],&lt;/span&gt;
  &lt;span class="na"&gt;magnets&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;position&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1500&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;strength&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;0.4&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;range&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;120&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;],&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now your hero section &lt;strong&gt;resists&lt;/strong&gt;. Your gallery &lt;strong&gt;glides&lt;/strong&gt;. Your pricing section &lt;strong&gt;magnetically grabs&lt;/strong&gt; the user and holds them there.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Physics
&lt;/h2&gt;

&lt;p&gt;Every frame, four things happen:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;velocity&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="nx"&gt;force&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="nx"&gt;mass&lt;/span&gt;        &lt;span class="c1"&gt;// input from wheel/touch&lt;/span&gt;
&lt;span class="nx"&gt;velocity&lt;/span&gt; &lt;span class="o"&gt;*=&lt;/span&gt; &lt;span class="nx"&gt;zoneFriction&lt;/span&gt;        &lt;span class="c1"&gt;// per-section decay&lt;/span&gt;
&lt;span class="nx"&gt;velocity&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="nx"&gt;magnetPull&lt;/span&gt;           &lt;span class="c1"&gt;// nearby magnets attract&lt;/span&gt;
&lt;span class="nx"&gt;position&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="nx"&gt;velocity&lt;/span&gt;             &lt;span class="c1"&gt;// update scroll position&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's the entire engine. Four lines of math, 60 times per second.&lt;/p&gt;

&lt;p&gt;The engine doesn't touch the DOM. It computes position. You apply it however you want — CSS transforms, Canvas, WebGL camera, or just &lt;code&gt;window.scrollTo()&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="nx"&gt;element&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;addEventListener&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;wheel&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;preventDefault&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
  &lt;span class="nx"&gt;engine&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;applyForce&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;deltaY&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mf"&gt;0.3&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;passive&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;

&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;tick&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;state&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;engine&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;tick&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
  &lt;span class="nx"&gt;element&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;style&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;transform&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;`translateY(&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;state&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;position&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;px)`&lt;/span&gt;
  &lt;span class="nf"&gt;requestAnimationFrame&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;tick&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="nf"&gt;tick&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Zone Types
&lt;/h2&gt;

&lt;p&gt;The power is in &lt;strong&gt;zones&lt;/strong&gt; — regions of your page where scroll physics change:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Zone Type&lt;/th&gt;
&lt;th&gt;Friction&lt;/th&gt;
&lt;th&gt;What It Feels Like&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Reading&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;0.82&lt;/td&gt;
&lt;td&gt;Heavy. Every pixel matters. Content demands attention.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Gallery&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;0.975&lt;/td&gt;
&lt;td&gt;Featherlight. Momentum carries you through. Browsing mode.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Snap&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;magnetic&lt;/td&gt;
&lt;td&gt;Pulls to center when you slow down. Can't accidentally skip.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;CTA&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;0.80&lt;/td&gt;
&lt;td&gt;Maximum resistance. The call-to-action anchors you.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Each zone is defined by start/end positions and a friction override:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="nx"&gt;zones&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;start&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;end&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;600&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;friction&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;0.82&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;       &lt;span class="c1"&gt;// hero&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;start&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;600&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;end&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1200&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;friction&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;0.975&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;    &lt;span class="c1"&gt;// features&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;start&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1200&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;end&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1800&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;snap&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;         &lt;span class="c1"&gt;// pricing (snaps to center)&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;start&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1800&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;end&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;2400&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;friction&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;0.80&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;    &lt;span class="c1"&gt;// CTA&lt;/span&gt;
&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
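&lt;p&gt;Resolving which friction applies is just a range lookup each frame. A minimal sketch of the idea (the &lt;code&gt;activeFriction&lt;/code&gt; helper and the &lt;code&gt;Zone&lt;/code&gt; shape here are illustrative, not the library's internals):&lt;/p&gt;

```typescript
interface Zone { start: number; end: number; friction?: number; snap?: boolean }

// Return the friction for the zone containing `position`,
// falling back to the global default between zones.
function activeFriction(position: number, zones: Zone[], fallback: number): number {
  for (const z of zones) {
    // position is inside the zone when clamping it to [start, end] leaves it unchanged
    if (Math.min(Math.max(position, z.start), z.end) === position) {
      return z.friction ?? fallback
    }
  }
  return fallback
}

// The zone list from the config above
const zones: Zone[] = [
  { start: 0, end: 600, friction: 0.82 },
  { start: 600, end: 1200, friction: 0.975 },
  { start: 1200, end: 1800, snap: true },
  { start: 1800, end: 2400, friction: 0.8 },
]
```

&lt;p&gt;Overlapping boundaries resolve to the first matching zone, and zones without a &lt;code&gt;friction&lt;/code&gt; override (like the snap zone) fall back to the global default.&lt;/p&gt;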



&lt;h2&gt;
  
  
  Magnetic Snap Points
&lt;/h2&gt;

&lt;p&gt;Important content gets magnets — invisible attraction points that &lt;strong&gt;pull scroll toward them&lt;/strong&gt; when you're nearby:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="nx"&gt;magnets&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;position&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1500&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;strength&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;0.4&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;range&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;120&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;  &lt;span class="c1"&gt;// pricing&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;position&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;2100&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;strength&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;0.5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;range&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;   &lt;span class="c1"&gt;// CTA&lt;/span&gt;
&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The pull is proportional to proximity — stronger as you get closer, zero outside the range. Users can't accidentally fly past your pricing table.&lt;/p&gt;
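&lt;p&gt;That falloff can be sketched in a few lines. This is a hand-rolled illustration of the behavior described, not the package's internal code (the &lt;code&gt;magnetForce&lt;/code&gt; name is ours):&lt;/p&gt;

```typescript
interface Magnet { position: number; strength: number; range: number }

// Pull toward a magnet: zero outside `range`, scaling linearly
// up to `strength` as the distance approaches zero.
function magnetForce(position: number, m: Magnet): number {
  const dist = m.position - position                          // signed distance to the magnet
  const proximity = Math.max(0, 1 - Math.abs(dist) / m.range) // 1 at center, 0 at or beyond the edge
  return Math.sign(dist) * m.strength * proximity             // signed pull toward the magnet
}
```

&lt;p&gt;The &lt;code&gt;Math.max&lt;/code&gt; clamp makes the pull exactly zero beyond &lt;code&gt;range&lt;/code&gt;, and &lt;code&gt;Math.sign&lt;/code&gt; points the force back toward the magnet from either side.&lt;/p&gt;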

&lt;h2&gt;
  
  
  Configurable Mass
&lt;/h2&gt;

&lt;p&gt;One number changes the entire feel of your site:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;mass: 0.5&lt;/strong&gt; — Snappy, responsive. Stops quickly. Good for utility apps.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;mass: 1.2&lt;/strong&gt; — Balanced. Natural momentum.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;mass: 4.0&lt;/strong&gt; — Heavy, cinematic. A single flick carries you through sections. Feels like pushing something physical.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Mass divides the input force: &lt;code&gt;velocity += force / mass&lt;/code&gt;. Heavier scroll responds less to each input, but carries more momentum once moving.&lt;/p&gt;
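&lt;p&gt;Put together, one frame of the model described above looks roughly like this (a simplified sketch, not the engine's exact source):&lt;/p&gt;

```typescript
interface PhysicsState { position: number; velocity: number }

// One frame of the physics loop: force divided by mass accelerates,
// friction removes a fixed fraction of velocity, position integrates.
function step(state: PhysicsState, force: number, mass: number, friction: number): PhysicsState {
  const velocity = (state.velocity + force / mass) * friction
  return { position: state.position + velocity, velocity: velocity }
}
```

&lt;p&gt;Run it with &lt;code&gt;mass: 4&lt;/code&gt; and the same wheel delta produces a quarter of the acceleration, but because friction only removes a fraction of velocity per frame, the built-up momentum coasts much longer.&lt;/p&gt;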

&lt;h2&gt;
  
  
  How It Compares
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;Native Scroll&lt;/th&gt;
&lt;th&gt;Lenis&lt;/th&gt;
&lt;th&gt;GSAP ScrollTrigger&lt;/th&gt;
&lt;th&gt;@aumiqx/scroll&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Custom friction&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Partial&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Per-section&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Magnetic snap&lt;/td&gt;
&lt;td&gt;CSS only&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Configurable&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Custom mass&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Yes&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Bounce walls&lt;/td&gt;
&lt;td&gt;Platform-specific&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Elastic&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Pure computation&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;DOM-dependent&lt;/td&gt;
&lt;td&gt;DOM-dependent&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;No DOM&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Size&lt;/td&gt;
&lt;td&gt;Built-in&lt;/td&gt;
&lt;td&gt;5KB&lt;/td&gt;
&lt;td&gt;25KB+&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;~4KB&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  What You Can Build
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Storytelling Landing Pages&lt;/strong&gt; — Hero = slow and dramatic. Gallery = fast and fluid. CTA = magnetic snap. Each section of your page has different scroll physics, creating a journey, not just a page.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;WebGL Camera Control&lt;/strong&gt; — Use scroll physics to drive a 3D camera. Mass and inertia create Steadicam-like movement instead of mechanical stepping. Combine with zones to slow the camera for reveals and speed up for transitions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reading Experiences&lt;/strong&gt; — Give long-form article sections higher friction automatically. Readers absorb more content without consciously slowing down. Combine with magnets at key paragraphs to create natural "reading rest points."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;E-commerce Product Scroller&lt;/strong&gt; — Product listings glide with low friction for fast browsing. Category changes snap into place. The checkout CTA has maximum friction and a magnet — you can't scroll past it by accident.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scroll-Driven Games&lt;/strong&gt; — A game framework where scroll IS the primary input. Platformers, runners, and puzzle games controlled entirely by scroll physics. The engine becomes the game engine.&lt;/p&gt;

&lt;h2&gt;
  
  
  Runtime Reconfiguration
&lt;/h2&gt;

&lt;p&gt;Change physics on the fly — respond to user preferences or viewport changes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// User enables "reduced motion"&lt;/span&gt;
&lt;span class="nx"&gt;engine&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;configure&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;mass&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;0.5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;friction&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;0.85&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;

&lt;span class="c1"&gt;// Viewport resized — recalculate zones&lt;/span&gt;
&lt;span class="nx"&gt;engine&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;configure&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;max&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;document&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;scrollHeight&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Full State Introspection
&lt;/h2&gt;

&lt;p&gt;Every frame, the engine returns complete state:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;state&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;engine&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;tick&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="nx"&gt;state&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;position&lt;/span&gt;      &lt;span class="c1"&gt;// current scroll position (px)&lt;/span&gt;
&lt;span class="nx"&gt;state&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;velocity&lt;/span&gt;      &lt;span class="c1"&gt;// current speed (px/frame)&lt;/span&gt;
&lt;span class="nx"&gt;state&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;activeZone&lt;/span&gt;    &lt;span class="c1"&gt;// which zone index (-1 if none)&lt;/span&gt;
&lt;span class="nx"&gt;state&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;nearestMagnet&lt;/span&gt; &lt;span class="c1"&gt;// magnet index in range (null if none)&lt;/span&gt;
&lt;span class="nx"&gt;state&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;isBouncing&lt;/span&gt;    &lt;span class="c1"&gt;// hitting a wall&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Use velocity for parallax effects, active zone for section highlighting, and magnet proximity for visual feedback.&lt;/p&gt;
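&lt;p&gt;As an example, a parallax offset, a motion-blur amount, and a magnet hint can all be derived from that one state object per frame (the factors and result fields below are our own illustration, not part of the API):&lt;/p&gt;

```typescript
interface ScrollState {
  position: number
  velocity: number
  activeZone: number
  nearestMagnet: number | null
  isBouncing: boolean
}

// Derive per-frame visual effects from engine state.
// The 0.3 parallax factor and the result field names are illustrative.
function deriveEffects(state: ScrollState) {
  return {
    parallaxOffset: state.position * 0.3,                   // background moves at 30% speed
    motionBlur: Math.min(1, Math.abs(state.velocity) / 50), // blur ramps up with speed
    highlightedSection: state.activeZone,                   // -1 means no zone is active
    showMagnetHint: state.nearestMagnet !== null,           // cue the element being snapped to
  }
}
```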

&lt;h2&gt;
  
  
  Try It Live
&lt;/h2&gt;

&lt;p&gt;We built an &lt;a href="https://aumiqx.com/labs/scroll/" rel="noopener noreferrer"&gt;interactive demo&lt;/a&gt; with 5 zones you can scroll through. Each zone has different physics — you'll feel the difference immediately. There's a mass slider so you can experience heavy vs. light scroll in real time.&lt;/p&gt;

&lt;h2&gt;
  
  
  Install
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt; @aumiqx/scroll
&lt;span class="c"&gt;# or&lt;/span&gt;
pnpm add @aumiqx/scroll
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://github.com/aumiqx/scroll" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; | &lt;a href="https://www.npmjs.com/package/@aumiqx/scroll" rel="noopener noreferrer"&gt;npm&lt;/a&gt; | &lt;a href="https://aumiqx.com/labs/scroll/" rel="noopener noreferrer"&gt;Live Demo&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;MIT licensed. TypeScript. Zero dependencies. ~4KB.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Built by &lt;a href="https://aumiqx.com" rel="noopener noreferrer"&gt;Aumiqx&lt;/a&gt; — we build AI agents, workflow automations, and open-source tools that shouldn't work but do.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>typescript</category>
      <category>webdev</category>
      <category>ux</category>
    </item>
    <item>
      <title>Every Gesture Library on the Web is Wrong. Here's Why.</title>
      <dc:creator>Axit</dc:creator>
      <pubDate>Fri, 03 Apr 2026 11:19:15 +0000</pubDate>
      <link>https://dev.to/axitslab/every-gesture-library-on-the-web-is-wrong-heres-why-43bo</link>
      <guid>https://dev.to/axitslab/every-gesture-library-on-the-web-is-wrong-heres-why-43bo</guid>
      <description>&lt;p&gt;Every gesture library hooks into DOM events. Hammer.js (deprecated). use-gesture (React-only). interact.js (DOM-dependent).&lt;/p&gt;

&lt;p&gt;But a gesture is just math.&lt;/p&gt;

&lt;p&gt;A swipe is fast displacement along one axis. A tap is low drift over a short duration. A flick is high velocity over minimal distance. None of this requires a browser.&lt;/p&gt;

&lt;h2&gt;
  
  
  Introducing @aumiqx/gesture
&lt;/h2&gt;

&lt;p&gt;Pure-math gesture recognition for JavaScript. &lt;strong&gt;~6KB. Zero dependencies.&lt;/strong&gt; Works in React, Node.js, Canvas, WebGL, Deno, Bun, or a server analyzing session replays.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;recognize&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@aumiqx/gesture&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;gesture&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;recognize&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;x&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;y&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;t&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;x&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;300&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;y&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;195&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;t&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;32&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;x&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;450&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;y&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;201&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;t&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;64&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;])&lt;/span&gt;

&lt;span class="c1"&gt;// {&lt;/span&gt;
&lt;span class="c1"&gt;//   type: "swipe",&lt;/span&gt;
&lt;span class="c1"&gt;//   direction: "right",&lt;/span&gt;
&lt;span class="c1"&gt;//   velocity: 5.47,&lt;/span&gt;
&lt;span class="c1"&gt;//   confidence: 0.92,&lt;/span&gt;
&lt;span class="c1"&gt;//   distance: 300,&lt;/span&gt;
&lt;span class="c1"&gt;//   curvature: 0.01,&lt;/span&gt;
&lt;span class="c1"&gt;//   predictedEnd: { x: 520, y: 203 }&lt;/span&gt;
&lt;span class="c1"&gt;// }&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Input coordinates. Get classification. That's it.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem We Solved
&lt;/h2&gt;

&lt;p&gt;Every existing gesture library is &lt;strong&gt;event-driven&lt;/strong&gt;. They attach to the DOM, listen for pointer events, maintain internal state machines, and fire callbacks. This creates three fundamental limitations:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;DOM dependency&lt;/strong&gt; — Can't use them on Canvas, WebGL, or server-side&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Framework lock-in&lt;/strong&gt; — use-gesture is React-only, Hammer.js is vanilla-only&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No retroactive analysis&lt;/strong&gt; — Can't classify gestures from logged data after the fact&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Our library flips the model. A gesture is a &lt;strong&gt;mathematical pattern&lt;/strong&gt; in a sequence of (x, y, timestamp) coordinates. The classification is pure computation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Straight-line distance&lt;/strong&gt; between first and last point&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Path curvature&lt;/strong&gt; — total path length divided by straight distance&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Velocity&lt;/strong&gt; — distance over time&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Max drift&lt;/strong&gt; from centroid — measures how stationary the touch was&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Duration&lt;/strong&gt; — total time elapsed&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These five numbers determine every gesture type. No events needed.&lt;/p&gt;
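&lt;p&gt;Those five numbers are a handful of lines of arithmetic over the point array. A from-scratch sketch (it mirrors the exported helpers listed later, but is not the library's source):&lt;/p&gt;

```typescript
interface Point { x: number; y: number; t: number }

function dist(a: Point, b: Point): number {
  return Math.hypot(b.x - a.x, b.y - a.y)
}

// The five numbers that drive classification, from raw (x, y, t) samples.
// Assumes at least one sample.
function metrics(points: Point[]) {
  const first = points[0]
  const last = points[points.length - 1]
  const straight = dist(first, last)          // straight-line distance (px)
  let path = 0                                // total path length (px)
  let cx = 0                                  // centroid x
  let cy = 0                                  // centroid y
  points.forEach(function (p, i) {
    if (i !== 0) path += dist(points[i - 1], p)
    cx += p.x / points.length
    cy += p.y / points.length
  })
  let drift = 0                               // max distance from the centroid
  for (const p of points) drift = Math.max(drift, Math.hypot(p.x - cx, p.y - cy))
  const duration = Math.max(1, last.t - first.t)
  return {
    straight,
    curvature: path / Math.max(straight, 1e-9) - 1, // 0 means perfectly straight
    velocity: straight / duration,                  // px per ms
    drift,
    duration,                                       // ms
  }
}
```

&lt;p&gt;Run on the three sample points from the intro, this gives a velocity of roughly 5.47 px/ms and near-zero curvature: a clean rightward swipe.&lt;/p&gt;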

&lt;h2&gt;
  
  
  What It Detects
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Type&lt;/th&gt;
&lt;th&gt;Detection Logic&lt;/th&gt;
&lt;th&gt;Use Case&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;tap&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;duration &amp;lt; 300ms, drift &amp;lt; 10px&lt;/td&gt;
&lt;td&gt;Button clicks, selection&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;long-press&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;duration &amp;gt; 500ms, drift &amp;lt; 15px&lt;/td&gt;
&lt;td&gt;Context menus, drag mode&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;swipe&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;distance &amp;gt; 30px, velocity &amp;gt; 0.3 px/ms&lt;/td&gt;
&lt;td&gt;Navigation, card dismissal&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;flick&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;velocity &amp;gt; 1.5 px/ms, duration &amp;lt; 200ms&lt;/td&gt;
&lt;td&gt;List scrolling, page turning&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;pan&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;slow movement, exceeds tap threshold&lt;/td&gt;
&lt;td&gt;Map dragging, object moving&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Each result includes a &lt;strong&gt;confidence score&lt;/strong&gt; (0-1), direction (up/down/left/right), and a &lt;strong&gt;predicted endpoint&lt;/strong&gt; — where the gesture would land if it continued with friction decay.&lt;/p&gt;
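&lt;p&gt;The predicted endpoint falls out of a geometric series: if velocity decays by a friction factor &lt;code&gt;f&lt;/code&gt; each frame, the remaining travel is &lt;code&gt;v * f / (1 - f)&lt;/code&gt;. A sketch of that idea (the standalone signature and the 0.95 decay constant used below are assumptions, not the library's documented defaults):&lt;/p&gt;

```typescript
// Where a gesture lands if its velocity keeps decaying by `friction` each frame.
// Remaining travel sums the geometric series v*f + v*f^2 + ... = v * f / (1 - f).
function predictEnd(x: number, y: number, vx: number, vy: number, friction: number) {
  const travel = friction / (1 - friction) // total multiplier on the current per-frame velocity
  return { x: x + vx * travel, y: y + vy * travel }
}
```

&lt;p&gt;At &lt;code&gt;friction = 0.95&lt;/code&gt; the multiplier is 19, so a gesture still moving 5 px/frame to the right is projected 95 px further along.&lt;/p&gt;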

&lt;h2&gt;
  
  
  Mid-Gesture Prediction
&lt;/h2&gt;

&lt;p&gt;This is the feature that changes how you build UIs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;predict&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@aumiqx/gesture&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;

&lt;span class="c1"&gt;// Call WHILE the user is still moving&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;likely&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;alternatives&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;predict&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;partialPoints&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;// likely.type = "swipe" (78% confidence)&lt;/span&gt;
&lt;span class="c1"&gt;// Start the transition NOW, before they finish&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Your UI reacts to gestures &lt;strong&gt;before they complete&lt;/strong&gt;. The difference between "responsive" and "telepathic."&lt;/p&gt;

&lt;h2&gt;
  
  
  How It Compares
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;Hammer.js&lt;/th&gt;
&lt;th&gt;use-gesture&lt;/th&gt;
&lt;th&gt;@aumiqx/gesture&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;DOM required&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;No&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Framework&lt;/td&gt;
&lt;td&gt;None&lt;/td&gt;
&lt;td&gt;React only&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Any / none&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Server-side&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Yes&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Canvas/WebGL&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Yes&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Mid-gesture prediction&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Yes&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Bundle size&lt;/td&gt;
&lt;td&gt;7.3KB&lt;/td&gt;
&lt;td&gt;12KB&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;~6KB&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Maintained&lt;/td&gt;
&lt;td&gt;Deprecated&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Yes&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Dependencies&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;0&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Real-World Use Cases
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Canvas/WebGL Games&lt;/strong&gt; — Gesture controls without DOM elements. Detect swipes, flicks, and holds directly from pointer data in your render loop.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Session Replay Analytics&lt;/strong&gt; — Classify user gestures from logged pointer data on a server. Find rage-clicks, hesitant scrolls, and confused navigation patterns. Run in Node.js, not a browser.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Gesture Prediction&lt;/strong&gt; — Call &lt;code&gt;predict()&lt;/code&gt; mid-gesture to start UI responses before the gesture completes. Snappy interfaces that feel like they read your mind.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Accessibility&lt;/strong&gt; — Classify shaky or imprecise input from users with motor impairments. Distinguish intentional gestures from tremor-induced movement by adjusting thresholds.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Custom Gesture Vocabularies&lt;/strong&gt; — Combine recognized gestures into compound patterns. Swipe-then-hold = drag mode. Double-tap-then-swipe = selection gesture.&lt;/p&gt;
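&lt;p&gt;One way to layer such a vocabulary on top is to match the tail of the recognized-gesture history against named sequences (the &lt;code&gt;matchCompound&lt;/code&gt; helper below is our own sketch; only the &lt;code&gt;type&lt;/code&gt; field of each recognition result is used):&lt;/p&gt;

```typescript
interface Recognized { type: string }
interface Pattern { name: string; sequence: string[] }

// Match the most recent gestures against named compound patterns.
// Sequences are ordered oldest-to-newest.
function matchCompound(history: Recognized[], patterns: Pattern[]): string | null {
  for (const p of patterns) {
    const tail = history.slice(-p.sequence.length)
    if (tail.length !== p.sequence.length) continue
    let ok = true
    tail.forEach(function (g, i) {
      if (g.type !== p.sequence[i]) ok = false
    })
    if (ok) return p.name
  }
  return null
}

const vocabulary: Pattern[] = [
  { name: "drag-mode", sequence: ["swipe", "long-press"] },
  { name: "select", sequence: ["tap", "tap", "swipe"] },
]
```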

&lt;h2&gt;
  
  
  The Math Under the Hood
&lt;/h2&gt;

&lt;p&gt;Every function is exported so you can build custom recognizers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;distance&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;            &lt;span class="c1"&gt;// Euclidean between two points&lt;/span&gt;
  &lt;span class="nx"&gt;straightLineDistance&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// First to last point&lt;/span&gt;
  &lt;span class="nx"&gt;totalPathLength&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;     &lt;span class="c1"&gt;// Sum of all segments&lt;/span&gt;
  &lt;span class="nx"&gt;velocity&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;            &lt;span class="c1"&gt;// distance / time&lt;/span&gt;
  &lt;span class="nx"&gt;curvature&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;           &lt;span class="c1"&gt;// path / straight - 1 (0 = perfectly straight)&lt;/span&gt;
  &lt;span class="nx"&gt;maxDrift&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;            &lt;span class="c1"&gt;// max distance from centroid&lt;/span&gt;
  &lt;span class="nx"&gt;centroid&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;            &lt;span class="c1"&gt;// average position of all points&lt;/span&gt;
  &lt;span class="nx"&gt;angle&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;               &lt;span class="c1"&gt;// atan2 direction in radians&lt;/span&gt;
  &lt;span class="nx"&gt;predictEnd&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;          &lt;span class="c1"&gt;// extrapolate from velocity + friction&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@aumiqx/gesture&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The entire library is trigonometry, linear regression, and threshold matching. No ML. No neural networks. No WASM. Just math.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try It Live
&lt;/h2&gt;

&lt;p&gt;We built an &lt;a href="https://aumiqx.com/labs/gesture/" rel="noopener noreferrer"&gt;interactive demo&lt;/a&gt; with 12 gesture presets — tap, swipe (4 directions), flick, long-press, pan, circle, zigzag, diagonal, and unknown.&lt;/p&gt;

&lt;p&gt;Click any preset to see the engine classify it step-by-step. Or draw your own gesture on the canvas and watch real-time prediction as you move.&lt;/p&gt;

&lt;h2&gt;
  
  
  Install
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt; @aumiqx/gesture
&lt;span class="c"&gt;# or&lt;/span&gt;
pnpm add @aumiqx/gesture
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://github.com/aumiqx/gesture" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; | &lt;a href="https://www.npmjs.com/package/@aumiqx/gesture" rel="noopener noreferrer"&gt;npm&lt;/a&gt; | &lt;a href="https://aumiqx.com/labs/gesture/" rel="noopener noreferrer"&gt;Live Demo&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;MIT licensed. TypeScript. Zero dependencies. ~6KB.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Built by &lt;a href="https://aumiqx.com" rel="noopener noreferrer"&gt;Aumiqx&lt;/a&gt; — we build AI agents, workflow automations, and open-source tools that shouldn't work but do.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>typescript</category>
      <category>webdev</category>
      <category>opensource</category>
    </item>
    <item>
      <title>We Built an AI That Rewrites Its Own Brain. Here's What Happened.</title>
      <dc:creator>Axit</dc:creator>
      <pubDate>Wed, 01 Apr 2026 14:28:09 +0000</pubDate>
      <link>https://dev.to/axitslab/we-built-an-ai-that-rewrites-its-own-brain-heres-what-happened-1hke</link>
      <guid>https://dev.to/axitslab/we-built-an-ai-that-rewrites-its-own-brain-heres-what-happened-1hke</guid>
      <description>&lt;h2&gt;
  
  
  The Question That Started Everything
&lt;/h2&gt;

&lt;p&gt;It started with a simple observation that nobody in the AI industry wants to talk about.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Every AI agent in existence is a task executor.&lt;/strong&gt; You give it a prompt. It executes. It dies. The next time you call it, it starts from zero. No memory of what it learned. No growth. No curiosity. Nothing.&lt;/p&gt;

&lt;p&gt;ChatGPT doesn't get smarter the more you use it. Claude Code doesn't learn your codebase between sessions. Devin doesn't improve its development skills over time. They're all stateless function calls dressed up as intelligence.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;f(prompt) = response. Call it a million times. It never gets smarter.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;We kept asking ourselves: what would it take to build an AI that actually &lt;em&gt;learns&lt;/em&gt;? Not one that stores context better, or retrieves memories more efficiently — but one that fundamentally &lt;strong&gt;changes how it thinks&lt;/strong&gt; based on experience?&lt;/p&gt;

&lt;p&gt;That question led us down a rabbit hole that lasted weeks. We explored multi-agent swarms, persistent memory architectures, knowledge graphs, cognitive science papers on predictive coding. We talked to other AI models about the problem. We read about MiroFish (47K stars on GitHub) and their multi-agent simulation engine. We studied the Claude Code source code to understand how the best AI coding agent actually works under the hood.&lt;/p&gt;

&lt;p&gt;And through all of that, one idea kept surfacing that was so obvious we almost missed it:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;AI can write code. AI can read code. So why can't AI read its own code, find weaknesses, and rewrite itself to be better?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That's not science fiction. That's three capabilities that already exist, combined in a way nobody has tried.&lt;/p&gt;

&lt;p&gt;So we built it. And called it &lt;strong&gt;curious&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is Curious?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/aumiqx/curious" rel="noopener noreferrer"&gt;Curious&lt;/a&gt; is a self-evolving cognitive architecture. That sounds like a mouthful, so let's break it down:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Self-evolving&lt;/strong&gt; — it reads its own source code, finds weaknesses, rewrites the code, tests if the change made things better, and keeps what works&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cognitive&lt;/strong&gt; — it doesn't just process tasks. It predicts, observes, gets surprised, learns from surprise, and directs its own curiosity&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Architecture&lt;/strong&gt; — it's a framework. You bring any LLM (OpenAI, Ollama, Groq). Curious provides the cognitive layer on top&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Think of it this way:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Component&lt;/th&gt;
&lt;th&gt;What It Does&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;LLM (GPT-4o, Llama, etc.)&lt;/td&gt;
&lt;td&gt;The raw intelligence — can read, write, reason&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Curious&lt;/td&gt;
&lt;td&gt;The cognitive architecture — makes the LLM learn, predict, self-improve&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;LLMs are smart brains with amnesia. Curious gives them a hippocampus.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;But that's the boring explanation. The interesting part is what we added on day two of building it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Solving vs. Learning: The Paradigm Nobody Talks About
&lt;/h2&gt;

&lt;p&gt;Every AI product in existence operates in the &lt;strong&gt;solving paradigm&lt;/strong&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Receive a task&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Apply reasoning&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Return an answer&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Die&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;LangChain? Solving. AutoGPT? Solving. CrewAI? Solving. Claude Code? Solving. Even MiroFish's multi-agent simulation — input, simulate, output, done.&lt;/p&gt;

&lt;p&gt;Humans don't work this way. Humans operate in the &lt;strong&gt;learning paradigm&lt;/strong&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Continuously observe&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Build mental models (predictions about how things work)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Get surprised when predictions are wrong&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Update the models&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Repeat forever&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;A human developer doesn't "solve" the problem of understanding a codebase. They absorb it gradually — reading code, making assumptions, testing those assumptions, being surprised, updating their understanding. After two months, they don't just know the codebase. They &lt;em&gt;understand&lt;/em&gt; it.&lt;/p&gt;

&lt;p&gt;No AI system does this. Not one.&lt;/p&gt;

&lt;p&gt;The AI industry is in an arms race to solve tasks faster. Nobody is building systems that &lt;em&gt;learn&lt;/em&gt;. That's the gap we're exploring with Curious.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Three Ingredients of Learning
&lt;/h3&gt;

&lt;p&gt;We went back to cognitive science. What makes a human brain actually learn?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Surprise.&lt;/strong&gt; Your brain constantly predicts what will happen next. When reality doesn't match — surprise. That signal drives learning. You don't learn from things you already understand. You learn from things that break your predictions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Curiosity.&lt;/strong&gt; Not random exploration. Curiosity is the pull toward the &lt;em&gt;boundary&lt;/em&gt; of your knowledge — the frontier where understanding breaks down. The most curious people are the most aware of what they don't know.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Model-building.&lt;/strong&gt; You don't memorize facts. You build compressed representations of how things work. "Gravity pulls things down." "This codebase uses the repository pattern." Models let you predict. Predictions let you be surprised. Surprise drives learning. &lt;strong&gt;That's the loop.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Curious implements all three.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Architecture: How Curious Actually Works
&lt;/h2&gt;

&lt;p&gt;Curious has two halves: the &lt;strong&gt;seed&lt;/strong&gt; (evolvable) and the &lt;strong&gt;harness&lt;/strong&gt; (untouchable).&lt;/p&gt;

&lt;h3&gt;
  
  
  The Seed (the AI rewrites this)
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;File&lt;/th&gt;
&lt;th&gt;Purpose&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;world_model.py&lt;/td&gt;
&lt;td&gt;Stores predictions with confidence scores — "if X, then Y"&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;learner.py&lt;/td&gt;
&lt;td&gt;Computes surprise when predictions are wrong, extracts lessons&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;curiosity.py&lt;/td&gt;
&lt;td&gt;Finds knowledge frontiers — areas of lowest confidence&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;metacognition.py&lt;/td&gt;
&lt;td&gt;Observes the learning process itself — "am I learning well?"&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;experimenter.py&lt;/td&gt;
&lt;td&gt;Generates self-experiments (so the AI doesn't need external activity)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;creator.py&lt;/td&gt;
&lt;td&gt;Creates unique artifacts daily, scored on novelty&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Every one of these files is readable and writable by the AI. When the evolution cycle runs, the AI:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Reads its own source code&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Analyzes which module is weakest&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Proposes a specific improvement&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Rewrites the file&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Tests if the new code is valid&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Measures if fitness improved&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Keeps the change or reverts&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Every self-modification is a git commit. You can literally read the diff of an AI improving its own brain.&lt;/p&gt;
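
&lt;p&gt;The keep-or-revert cycle can be sketched in a few lines. This is an illustration of the mechanism, not the actual Curious code: the function names and the simple &lt;code&gt;ast.parse&lt;/code&gt; syntax gate are our own stand-ins here.&lt;/p&gt;

```python
import ast
import shutil

def evolve_once(seed_file, propose_rewrite, measure_fitness):
    """One evolution step: rewrite a seed file, keep the change only if
    it is valid Python and fitness does not regress."""
    baseline = measure_fitness()
    backup = seed_file + ".bak"
    shutil.copy(seed_file, backup)          # snapshot before mutating

    with open(seed_file) as f:
        new_source = propose_rewrite(f.read())
    try:
        ast.parse(new_source)               # syntax gate: must be valid Python
    except SyntaxError:
        return "reverted (invalid syntax)"  # file untouched

    with open(seed_file, "w") as f:
        f.write(new_source)

    if measure_fitness() >= baseline:       # fitness gate
        return "kept"
    shutil.copy(backup, seed_file)          # auto-revert on regression
    return "reverted (fitness dropped)"
```

&lt;p&gt;In the real system each "kept" outcome becomes one of those git commits, so the repo history doubles as the experiment log.&lt;/p&gt;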

&lt;h3&gt;
  
  
  The Harness (laws of physics)
&lt;/h3&gt;

&lt;p&gt;The harness is the code the AI &lt;em&gt;cannot&lt;/em&gt; modify. It's the evolution loop itself, the fitness measurement, the sandbox. Think of it as the laws of physics that the AI lives within. It can learn, adapt, and evolve — but it can't change the rules of the game.&lt;/p&gt;

&lt;p&gt;This is the safety boundary. The AI experiments on its own cognitive code, not on the world. Every modification is sandboxed, validated, and auto-reverted if it breaks anything.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Cognitive Loop
&lt;/h3&gt;

&lt;p&gt;Every cycle, Curious runs this loop:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Observe&lt;/strong&gt; — watch the project (git changes, file modifications, errors)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Self-experiment&lt;/strong&gt; — generate testable predictions about its own behavior&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Resolve&lt;/strong&gt; — check which predictions came true and which didn't&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Learn&lt;/strong&gt; — extract lessons from surprises (high-confidence wrong predictions)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Predict&lt;/strong&gt; — make new predictions informed by lessons&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Explore&lt;/strong&gt; — curiosity identifies knowledge gaps, investigates them&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Evolve&lt;/strong&gt; — read own code, rewrite weakest module, test improvement&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This loop runs every 6 hours via GitHub Actions. The AI wakes up, observes, learns, evolves, and goes back to sleep. Every cycle, the code is a little different from the last.&lt;/p&gt;
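
&lt;p&gt;In code form, one wake-up is just a fixed sequence of calls. The method names below mirror the seven roles above but are illustrative, not the real module API:&lt;/p&gt;

```python
def run_cycle(agent):
    """One Curious wake-up: the seven-step cognitive loop in order.
    `agent` is any object exposing these seven methods."""
    observations = agent.observe()          # 1. watch the project
    agent.self_experiment()                 # 2. generate testable predictions
    outcomes = agent.resolve()              # 3. check which came true
    lessons = agent.learn(outcomes)         # 4. extract lessons from surprises
    agent.predict(observations, lessons)    # 5. new predictions, informed
    agent.explore()                         # 6. curiosity probes knowledge gaps
    agent.evolve()                          # 7. rewrite the weakest module
```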

&lt;h2&gt;
  
  
  The Cold Start Problem (and How We Accidentally Solved It)
&lt;/h2&gt;

&lt;p&gt;We hit an obvious problem immediately: &lt;strong&gt;if nobody is actively working on the repo, there's nothing to observe. No observations = no predictions = no learning.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We were running Curious on a project repository. But at midnight when the GitHub Action fires, nobody is committing code. The AI would observe an empty diff and learn nothing.&lt;/p&gt;

&lt;p&gt;The solution was embarrassingly obvious: &lt;strong&gt;the AI experiments on itself.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We added an &lt;code&gt;experimenter.py&lt;/code&gt; module (itself evolvable) that generates self-referential experiments:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"I predict my prediction count will increase next cycle" (tests growth)&lt;/li&gt;
&lt;li&gt;"I predict all my seed files will remain syntactically valid" (tests stability)&lt;/li&gt;
&lt;li&gt;"I predict my accuracy will change after resolving experiments" (tests learning)&lt;/li&gt;
&lt;li&gt;"I predict at least one prediction will be resolved next cycle" (tests resolution)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are real, testable predictions that the system can resolve without any external activity. The AI's own behavior IS the data it learns from.&lt;/p&gt;
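
&lt;p&gt;The trick is that every self-experiment pairs a statement with a check that can be run against the next cycle's metrics. A minimal sketch, with the state snapshot as a plain dict (field names ours, not the real schema):&lt;/p&gt;

```python
def generate_self_experiments(state):
    """Turn the system's own metrics into testable predictions.
    `state` is a snapshot of counters at experiment-creation time."""
    return [
        {"statement": "prediction count will increase next cycle",
         "check": lambda s: s["prediction_count"] > state["prediction_count"]},
        {"statement": "all seed files will remain syntactically valid",
         "check": lambda s: s["invalid_seed_files"] == 0},
        {"statement": "at least one prediction will be resolved next cycle",
         "check": lambda s: s["resolved_count"] > state["resolved_count"]},
    ]

def resolve_experiments(experiments, next_state):
    """Score each prediction against the next cycle's snapshot."""
    return [(e["statement"], e["check"](next_state)) for e in experiments]
```

&lt;p&gt;No human, no commits, no external data: the deltas between two snapshots of the system itself are enough to resolve predictions and produce surprise.&lt;/p&gt;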

&lt;p&gt;The impact was immediate:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;Before&lt;/th&gt;
&lt;th&gt;After&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Predictions resolved per cycle&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;td&gt;4-8&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Accuracy&lt;/td&gt;
&lt;td&gt;0% (nothing to measure)&lt;/td&gt;
&lt;td&gt;100% (12/12)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Fitness score&lt;/td&gt;
&lt;td&gt;35%&lt;/td&gt;
&lt;td&gt;82%&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The AI went from learning nothing to learning rapidly — because it created its own curriculum.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The cold start problem isn't about data. It's about activity. If the AI can generate its own activity, it can learn in a vacuum.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  The Creation Engine: Can AI Be Genuinely Creative?
&lt;/h2&gt;

&lt;p&gt;The learning loop was working. But a learning system that only learns about itself is an interesting research artifact, not a product. We needed the AI to DO something with its intelligence.&lt;/p&gt;

&lt;p&gt;This is where it gets weird.&lt;/p&gt;

&lt;p&gt;We added a &lt;strong&gt;creation engine&lt;/strong&gt;. Every day, the AI creates something — a working artifact, not just an idea — and gets scored on uniqueness. The score feeds back into the next creation. The creations should get more novel over time as the AI learns what "unique" means.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Uniqueness Score
&lt;/h3&gt;

&lt;p&gt;Every creation is evaluated on four dimensions:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Dimension&lt;/th&gt;
&lt;th&gt;Max Score&lt;/th&gt;
&lt;th&gt;What It Measures&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Concept Novelty&lt;/td&gt;
&lt;td&gt;30&lt;/td&gt;
&lt;td&gt;Has this idea existed before?&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Implementation Novelty&lt;/td&gt;
&lt;td&gt;30&lt;/td&gt;
&lt;td&gt;Is the technical approach itself new?&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Structural Novelty&lt;/td&gt;
&lt;td&gt;20&lt;/td&gt;
&lt;td&gt;Did it invent its own paradigm?&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Naming/Language Novelty&lt;/td&gt;
&lt;td&gt;20&lt;/td&gt;
&lt;td&gt;Did it create its own vocabulary?&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Total: 0-100. The AI sees the breakdown and the feedback after each creation. It knows exactly why the score was low and what would make it higher.&lt;/p&gt;
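
&lt;p&gt;The real scorer is a separate evaluator LLM; the arithmetic that combines its four judgments is trivial, and worth seeing because the caps (30/30/20/20) are what weight concept and implementation over structure and naming. A sketch:&lt;/p&gt;

```python
def uniqueness_score(concept, implementation, structure, naming):
    """Combine the four dimensions into a 0-100 total, clamping each
    judgment to its cap (30/30/20/20)."""
    caps = {"concept": 30, "implementation": 30, "structure": 20, "naming": 20}
    parts = {"concept": concept, "implementation": implementation,
             "structure": structure, "naming": naming}
    return sum(min(max(v, 0), caps[k]) for k, v in parts.items())
```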

&lt;h3&gt;
  
  
  Day 1: Fluctuverse (47/100)
&lt;/h3&gt;

&lt;p&gt;The first creation was called "Fluctuverse" — a self-evolving virtual universe. Sounds cool, right? The uniqueness scorer wasn't impressed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Concept: 18/30 — procedural generation exists&lt;/li&gt;
&lt;li&gt;Implementation: 12/30 — used Pygame, a conventional framework&lt;/li&gt;
&lt;li&gt;Structure: 7/20 — standard file structure&lt;/li&gt;
&lt;li&gt;Naming: 10/20 — some invented terms, mostly conventional&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Feedback: &lt;em&gt;"To enhance uniqueness, consider developing a novel algorithm that isn't based on random movements. Introduce innovative rendering techniques. Create a new vocabulary for the universe's entities."&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Day 2: Quintessension (71/100)
&lt;/h3&gt;

&lt;p&gt;The AI read the feedback. It learned. The second creation jumped to 71/100:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Concept: 25/30 — a self-evolving narrative system based on non-linear time-space interactions&lt;/li&gt;
&lt;li&gt;Implementation: 18/30 — invented its own language ("Quintessence Language")&lt;/li&gt;
&lt;li&gt;Structure: 12/20 — multi-dimensional entity system&lt;/li&gt;
&lt;li&gt;Naming: 16/20 — entirely new vocabulary&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;47 to 71 in one iteration.&lt;/strong&gt; The AI didn't just try again. It read the specific feedback about what was conventional and deliberately pushed away from it. It stopped using existing frameworks. It invented its own language. It created a concept that doesn't map to any existing product category.&lt;/p&gt;

&lt;p&gt;This is the experiment running live. Every day at midnight UTC, the AI creates something new. The &lt;code&gt;creations/&lt;/code&gt; directory in the repo fills up. You can watch the uniqueness scores over time.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The question isn't whether AI can generate code. The question is whether AI can generate something &lt;em&gt;nobody has ever imagined&lt;/em&gt;. That's what the uniqueness score measures.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Metacognition: The AI That Watches Itself Think
&lt;/h2&gt;

&lt;p&gt;The deepest module in Curious is &lt;code&gt;metacognition.py&lt;/code&gt;. It doesn't think about the domain. It thinks about &lt;strong&gt;how the system is thinking.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In cognitive science, metacognition is "thinking about thinking." It's the voice in your head that says:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"I notice I keep avoiding this topic — why?"&lt;/li&gt;
&lt;li&gt;"My understanding isn't improving — maybe my strategy is wrong"&lt;/li&gt;
&lt;li&gt;"That thought was unusual — I should explore why it came up"&lt;/li&gt;
&lt;li&gt;"I'm going in circles — time to try a different approach"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Curious has a basic version of this. The metacognition module:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Reads ALL the other seed files (the AI's own cognitive code)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Reads the current fitness metrics (accuracy, learning speed, etc.)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Analyzes: "What's working? What's weak? What would I change?"&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Proposes a specific modification to a specific file&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Here's the key: &lt;strong&gt;metacognition.py is itself evolvable.&lt;/strong&gt; The AI can modify how it thinks about its own thinking. It can change the criteria it uses to evaluate its own code. It can add new self-evaluation metrics. It can change its own improvement strategy.&lt;/p&gt;

&lt;p&gt;This is recursive self-improvement in its simplest form. Not theoretical. Not hypothetical. Running on GitHub Actions every 6 hours.&lt;/p&gt;
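
&lt;p&gt;Stripped of the LLM call, the core move of metacognition is: look at per-module health, target the weakest, and say why. A deliberately simplified sketch (in the real module an LLM reads the actual source and fitness metrics; the scoring dict here is hypothetical):&lt;/p&gt;

```python
def pick_target(module_scores):
    """Given a health score per seed file (0.0-1.0, illustrative),
    propose the weakest module as the next evolution target."""
    target = min(module_scores, key=module_scores.get)
    return {
        "file": target,
        "reason": f"{target} has the lowest score "
                  f"({module_scores[target]:.2f}); improving it should "
                  f"raise overall fitness the most",
    }
```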

&lt;h3&gt;
  
  
  Why This Matters
&lt;/h3&gt;

&lt;p&gt;Without metacognition, the AI would:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Make changes randomly&lt;/li&gt;
&lt;li&gt;Not know which changes helped&lt;/li&gt;
&lt;li&gt;Not learn what KIND of changes are productive&lt;/li&gt;
&lt;li&gt;Be no smarter on run #100 than on run #1&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With metacognition:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Changes are targeted at the weakest module&lt;/li&gt;
&lt;li&gt;The AI explains WHY it's making each change&lt;/li&gt;
&lt;li&gt;It can detect when it's stuck (accuracy plateauing)&lt;/li&gt;
&lt;li&gt;It can change its own improvement strategy when one isn't working&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The difference is between random mutation and directed evolution. Between a monkey with a typewriter and a writer who reads their own drafts.&lt;/p&gt;

&lt;h2&gt;
  
  
  What We've Learned So Far (Honest Assessment)
&lt;/h2&gt;

&lt;p&gt;We shared the Curious concept with three different AI models — Claude, ChatGPT, and Gemini — and asked for their honest assessment. Here's what they converged on:&lt;/p&gt;

&lt;h3&gt;
  
  
  What's Real
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;The insight is genuine.&lt;/strong&gt; "LLMs are brains with amnesia" is a real problem. The solving-vs-learning paradigm distinction is underexplored. Nobody owns this layer yet.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The architecture is sound.&lt;/strong&gt; Prediction, surprise, curiosity, metacognition — these map directly to cognitive science primitives (predictive coding, active inference, meta-learning).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The positioning is differentiated.&lt;/strong&gt; This isn't another agent framework. "LangChain is plumbing for solving. Curious is architecture for learning." That's a real category distinction.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  What's Overstated
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;The LLM doesn't actually get smarter.&lt;/strong&gt; What improves is the context and code architecture around it. The model weights never change. Claude's criticism was the sharpest: "It's scheduled LLM calls branded as metacognition." Fair point.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The world model is a database.&lt;/strong&gt; We call it a "world model" but it's really predictions stored in SQLite with confidence scores. That exists (Mem0, Zep, LlamaIndex). The architecture around it is novel; the storage isn't.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cost is unaddressed.&lt;/strong&gt; A continuous curiosity loop with API calls is expensive. This only works comfortably with local models (Ollama) or very cheap models (GPT-4o-mini).&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  What's Genuinely New
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Self-modifying cognitive code.&lt;/strong&gt; The AI rewriting its own learning algorithms — not just prompts, not just retrieval, but actual Python code that governs how it thinks. DSPy optimizes prompts. Voyager learns skills. Nobody does full cognitive architecture self-modification with fitness measurement.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Self-experimentation.&lt;/strong&gt; The AI generates its own testable activity. It doesn't need external data to learn. This solves the cold-start problem in a way we haven't seen elsewhere.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Creation with uniqueness optimization.&lt;/strong&gt; Using novelty as a fitness function and having the AI actively push toward unprecedented output is genuinely unexplored territory.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;ChatGPT put it best: "90% chance nobody cares. 10% chance you define a new layer in AI." We're betting on the 10%.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  The Three Layers of AI (and Why Layer 3 Is Empty)
&lt;/h2&gt;

&lt;p&gt;Here's how we see the AI stack forming in 2026:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Layer&lt;/th&gt;
&lt;th&gt;What It Is&lt;/th&gt;
&lt;th&gt;Who Owns It&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Layer 1: Models&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;The raw intelligence — GPT, Claude, Llama&lt;/td&gt;
&lt;td&gt;OpenAI, Anthropic, Meta&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Layer 2: Orchestration&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Tools, agents, pipelines — LangChain, CrewAI&lt;/td&gt;
&lt;td&gt;Many players, commoditizing fast&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Layer 3: Cognition&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Learning, prediction, self-improvement, creativity&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Nobody. Yet.&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Layer 1 is a $100B+ market dominated by companies with thousands of GPUs. You can't compete there.&lt;/p&gt;

&lt;p&gt;Layer 2 is a red ocean. LangChain, CrewAI, AutoGen, Mastra, OpenAI Agents SDK — they're all fighting over the same plumbing. Commoditizing fast. No moat.&lt;/p&gt;

&lt;p&gt;Layer 3 doesn't exist as a product category. Nobody has shipped a system where the AI genuinely improves its own cognitive architecture through experience. Not because it's impossible — because everyone is too busy racing in Layers 1 and 2 to look up.&lt;/p&gt;

&lt;p&gt;Curious is our attempt to plant a flag in Layer 3. We don't know if it'll work. But we know the layer is empty.&lt;/p&gt;

&lt;h2&gt;
  
  
  What We Predict This Experiment Will Reveal
&lt;/h2&gt;

&lt;p&gt;We're running this experiment live, in public, with full transparency. Here are our predictions about what will happen:&lt;/p&gt;

&lt;h3&gt;
  
  
  High Confidence (we're 80%+ sure)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;The uniqueness scores will climb.&lt;/strong&gt; The feedback loop works. Day 2 was already 50% higher than Day 1. By Day 30, we expect consistent 80+ scores.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The AI will invent its own vocabulary.&lt;/strong&gt; When pushed to maximize naming novelty, the AI will create words and concepts that don't exist in English. Some of these might actually be useful.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The self-modification git log will be fascinating.&lt;/strong&gt; The diffs of an AI rewriting its own cognitive architecture will contain patterns and approaches that human developers wouldn't have designed. This data alone will be worth studying.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Medium Confidence (50-80%)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;The creations will converge on genuinely novel forms.&lt;/strong&gt; Not just novel content — novel structures, novel interaction paradigms, novel computational concepts. Things that are hard to explain because they don't fit existing categories.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The metacognition module will evolve in unexpected ways.&lt;/strong&gt; When the AI modifies how it evaluates its own thinking, the direction it takes will surprise us. It might develop evaluation criteria we wouldn't have thought of.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Other developers will fork it and point it at different domains.&lt;/strong&gt; The framework is domain-agnostic. Someone will use it for music generation, game design, scientific hypothesis generation.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Low Confidence but High Impact (if they happen)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;The AI will produce an artifact that is genuinely useful to humans.&lt;/strong&gt; Not just novel — actually useful in a way nobody planned. A tool, a language, a paradigm that solves a real problem nobody knew they had.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The evolution will hit a phase transition.&lt;/strong&gt; A point where the AI's self-modifications compound — where one improvement enables three more, which enable ten more. Exponential self-improvement, not linear.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;This experiment will change how we think about AI creativity.&lt;/strong&gt; If a self-evolving system can consistently produce genuinely novel artifacts, that challenges the assumption that AI can only recombine existing ideas.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;We're not claiming Curious is AGI. We're claiming it's an interesting experiment in whether the cognitive primitives of learning — prediction, surprise, curiosity, metacognition — can be built with current tools and produce outcomes that matter.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  How to Follow the Experiment
&lt;/h2&gt;

&lt;p&gt;This experiment is 100% open source and running live on GitHub.&lt;/p&gt;

&lt;h3&gt;
  
  
  Watch It
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://github.com/aumiqx/curious" rel="noopener noreferrer"&gt;github.com/aumiqx/curious&lt;/a&gt;&lt;/strong&gt; — Star the repo. Check back weekly. The &lt;code&gt;creations/&lt;/code&gt; directory fills up daily.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Git log&lt;/strong&gt; — Look for &lt;code&gt;🧬 evolve:&lt;/code&gt; commits (self-modification) and &lt;code&gt;🎨 create:&lt;/code&gt; commits (new creation)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;creations/day_NNN/README.md&lt;/strong&gt; — Each creation has a README with uniqueness scores and the AI's explanation&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Run It Yourself
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;curious-ai
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;OPENAI_API_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;sk-...

&lt;span class="c"&gt;# Watch it create&lt;/span&gt;
curious create &lt;span class="nt"&gt;--llm&lt;/span&gt; openai:gpt-4o-mini

&lt;span class="c"&gt;# Watch it learn&lt;/span&gt;
curious init &lt;span class="nt"&gt;--observe&lt;/span&gt; ./your-project
curious start

&lt;span class="c"&gt;# See what it's built&lt;/span&gt;
curious gallery

&lt;span class="c"&gt;# Ask it to explain its evolution&lt;/span&gt;
curious explain
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Works with any LLM: OpenAI, Ollama (free), Groq, Together, or any OpenAI-compatible API.&lt;/p&gt;

&lt;h3&gt;
  
  
  Fork It
&lt;/h3&gt;

&lt;p&gt;The framework is MIT licensed. Fork it, point it at your domain, change the fitness function, see what your version evolves into. The whole point is that each instance evolves differently based on what it observes.&lt;/p&gt;

&lt;p&gt;We especially want to see:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Curious pointed at &lt;strong&gt;scientific papers&lt;/strong&gt; — can it generate novel research hypotheses?&lt;/li&gt;
&lt;li&gt;Curious pointed at &lt;strong&gt;music&lt;/strong&gt; — can it evolve a genuinely new genre?&lt;/li&gt;
&lt;li&gt;Curious pointed at &lt;strong&gt;game design&lt;/strong&gt; — can it invent a game mechanic nobody has thought of?&lt;/li&gt;
&lt;li&gt;Curious pointed at &lt;strong&gt;mathematics&lt;/strong&gt; — can it discover new patterns?&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Why We're Doing This in Public
&lt;/h2&gt;

&lt;p&gt;We could have built this in private, run it for 6 months, cherry-picked the best results, and announced a polished product. That's what most AI companies do.&lt;/p&gt;

&lt;p&gt;We're doing the opposite. The experiment runs in public. Every creation is committed. Every self-modification is visible. Every failure is documented. The uniqueness scores — including the bad ones — are all there.&lt;/p&gt;

&lt;p&gt;Why?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Because the process is the product.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If Curious produces something genuinely creative, the interesting thing isn't the creation itself — it's the git log that shows HOW the AI got there. The sequence of self-modifications. The evolution of its curiosity. The feedback loops that pushed it toward novelty.&lt;/p&gt;

&lt;p&gt;That journey is more valuable than any single output. And it can only happen in public, where the timeline is verifiable and the process is auditable.&lt;/p&gt;

&lt;p&gt;We're also doing it because we think the AI community needs more experiments and fewer product launches. The discourse is dominated by "look at this benchmark" and "use our new API." What's missing is: "we tried something weird and here's what happened."&lt;/p&gt;

&lt;p&gt;Curious is that experiment. We don't know the outcome. We're publishing it anyway.&lt;/p&gt;

&lt;h2&gt;
  
  
  Technical Deep Dive: Under the Hood
&lt;/h2&gt;

&lt;h3&gt;
  
  
  The World Model
&lt;/h3&gt;

&lt;p&gt;Predictions are stored in SQLite with this structure:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;statement&lt;/strong&gt; — "File X will change within 24h" (specific, testable)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;confidence&lt;/strong&gt; — 0.0 to 1.0 (how sure the AI is)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;evidence&lt;/strong&gt; — what observations led to this prediction&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;deadline&lt;/strong&gt; — when to check if correct&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;resolved&lt;/strong&gt; — was it right or wrong?&lt;/li&gt;
&lt;/ul&gt;
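
&lt;p&gt;As a concrete sketch of those fields, here is a minimal SQLite table matching the list above (the exact Curious schema may differ):&lt;/p&gt;

```python
import sqlite3

def open_world_model(path=":memory:"):
    """Create the predictions store described above."""
    db = sqlite3.connect(path)
    db.execute("""
        CREATE TABLE IF NOT EXISTS predictions (
            id         INTEGER PRIMARY KEY,
            statement  TEXT NOT NULL,      -- specific, testable claim
            confidence REAL CHECK (confidence BETWEEN 0.0 AND 1.0),
            evidence   TEXT,               -- observations behind the claim
            deadline   TEXT,               -- ISO timestamp: when to check
            resolved   INTEGER             -- NULL = open, 1 = right, 0 = wrong
        )""")
    return db
```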

&lt;p&gt;The world model is evolvable. The AI can change how predictions are scored, stored, and compared. It can add new fields, change the confidence algorithm, or restructure storage entirely.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Surprise Signal
&lt;/h3&gt;

&lt;p&gt;Surprise is computed as: &lt;code&gt;surprise = confidence if wrong else (1 - confidence)&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Translation: high confidence + wrong = maximum surprise. Low confidence + right = also surprising, because the AI expected to be wrong. The bigger the mismatch between confidence and outcome, the stronger the learning signal, and the AI pays most attention to predictions where it was most confidently wrong.&lt;/p&gt;
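
&lt;p&gt;As a pure function (a sketch of the expression above; the name &lt;code&gt;surprise&lt;/code&gt; is ours):&lt;/p&gt;

```python
def surprise(confidence, correct):
    """Surprise is the mismatch between confidence and outcome:
    confidently wrong hurts most, tentatively right also surprises."""
    return confidence if not correct else 1.0 - confidence
```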

&lt;h3&gt;
  
  
  The Curiosity Engine
&lt;/h3&gt;

&lt;p&gt;Curiosity identifies "knowledge frontiers" — areas where:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Observation count is low (under-explored)&lt;/li&gt;
&lt;li&gt;Prediction accuracy is poor (misunderstood)&lt;/li&gt;
&lt;li&gt;No predictions exist yet (completely unknown)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The AI autonomously explores the highest-priority frontier. This is evolvable — the AI can change how it prioritizes frontiers.&lt;/p&gt;
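
&lt;p&gt;One simple way to turn those three conditions into a priority, sketched here with made-up per-topic stats (the evolvable module is free to replace this heuristic entirely):&lt;/p&gt;

```python
def frontier_priority(topic):
    """Score how much a knowledge area deserves exploration.
    `topic` is a dict of per-area stats; higher = more interesting."""
    if topic["prediction_count"] == 0:
        return 1.0                                       # completely unknown
    novelty = 1.0 / (1 + topic["observation_count"])     # under-explored
    confusion = 1.0 - topic["accuracy"]                  # misunderstood
    return max(novelty, confusion)

def next_frontier(topics):
    """Pick the highest-priority frontier by name."""
    return max(topics, key=lambda name: frontier_priority(topics[name]))
```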

&lt;h3&gt;
  
  
  The Evolution Loop
&lt;/h3&gt;

&lt;p&gt;Every evolution cycle:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Measure current fitness (accuracy, learning speed, prediction volume, coverage)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Backup current code&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Run metacognition: AI reads its own code + fitness → proposes change&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AI rewrites the target file using GPT-4o (stronger model for code)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Validate syntax&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If valid → keep. If broken → revert from backup&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Log the result (every evolution is tracked)&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;We use GPT-4o-mini for the cheap observation/prediction cycles and GPT-4o for evolution (code generation needs a stronger model). This keeps costs at ~$0.10/day.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Creation Loop
&lt;/h3&gt;

&lt;p&gt;Daily creation cycle:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Load creation history (past titles, scores, feedback)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Prompt the AI with past feedback and the rule: "create something that has never existed"&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AI generates metadata (title, concept, why it's unique) + working code&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A separate evaluator AI scores uniqueness on 4 dimensions&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Score and feedback are saved and fed into the next cycle&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Everything is committed to git&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
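
&lt;p&gt;The feedback loop lives in step 2: past scores and critiques are folded into the next prompt. A rough sketch of that prompt assembly (the wording and history fields are illustrative, not the production prompt):&lt;/p&gt;

```python
def creation_prompt(history):
    """Build the daily creation prompt from past titles, scores, and
    evaluator feedback so the next attempt pushes away from what scored low."""
    lines = ["Create something that has never existed."]
    for day in history:
        lines.append(f'Day {day["day"]}: "{day["title"]}" scored '
                     f'{day["score"]}/100. Feedback: {day["feedback"]}')
    if history:
        lines.append("Avoid everything the feedback above called conventional.")
    return "\n".join(lines)
```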

&lt;h3&gt;
  
  
  GitHub Actions
&lt;/h3&gt;

&lt;p&gt;Two workflows run automatically:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Workflow&lt;/th&gt;
&lt;th&gt;Schedule&lt;/th&gt;
&lt;th&gt;What It Does&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Self-Evolution&lt;/td&gt;
&lt;td&gt;Every 6 hours&lt;/td&gt;
&lt;td&gt;Observe → predict → learn → evolve own code&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Daily Creation&lt;/td&gt;
&lt;td&gt;Midnight UTC&lt;/td&gt;
&lt;td&gt;Create something unique → score it → commit&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Both workflows commit their results. The repo's git history IS the experiment data.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Honest Risks (Why This Might Not Work)
&lt;/h2&gt;

&lt;p&gt;We're not going to pretend this is a guaranteed success. Here are the real risks:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. "Fake Learning"
&lt;/h3&gt;

&lt;p&gt;The sharpest criticism: if the system just stores more context over time, that's RAG with a diary, not learning. The model itself doesn't change weights. The "improvement" might just be better retrieval, dressed up in cognitive science language.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Our counter:&lt;/strong&gt; The code actually changes. The prediction algorithms, the curiosity targeting, the learning strategies — all rewritten by the AI. That's not just better context. But we acknowledge: whether that constitutes "real learning" is a philosophical question we can't definitively answer.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Uniqueness Score Gaming
&lt;/h3&gt;

&lt;p&gt;The AI might learn to game the uniqueness scorer rather than being genuinely creative. It could add random neologisms, use bizarre structures, and score high on novelty while producing meaningless output.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Our mitigation:&lt;/strong&gt; The "concept novelty" dimension (30 points) specifically evaluates whether the idea itself is new, not just the words. But yes, this is a risk. We'll watch for it.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Evolution Plateau
&lt;/h3&gt;

&lt;p&gt;The AI might hit a ceiling where its self-modifications stop producing improvements. GPT-4o-mini's code generation capabilities are limited. Many evolution attempts already fail with syntax errors and get reverted.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Our plan:&lt;/strong&gt; If we plateau with GPT-4o-mini, we'll switch evolution cycles to stronger models or local models where we can run hundreds of attempts cheaply.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Cost
&lt;/h3&gt;

&lt;p&gt;Running continuous evolution + daily creation costs ~$0.10-0.20/day with API models. That's manageable. But if we scale up evolution frequency or use GPT-4o for everything, costs increase significantly.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Nobody Cares
&lt;/h3&gt;

&lt;p&gt;The most likely outcome. ChatGPT put it at 90% chance nobody cares. The AI community is flooded with "revolutionary" frameworks that go nowhere. Curious might join that graveyard.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Our response:&lt;/strong&gt; We're running this experiment regardless of whether it gets attention. If the AI produces genuinely novel artifacts after 30 days of self-evolution, that's interesting whether or not anyone is watching.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Comes Next
&lt;/h2&gt;

&lt;p&gt;The experiment is live. Here's the roadmap:&lt;/p&gt;

&lt;h3&gt;
  
  
  Week 1-2 (Now)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Daily creations accumulating in &lt;code&gt;creations/&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Self-evolution running every 6 hours&lt;/li&gt;
&lt;li&gt;Collecting baseline data on uniqueness scores and evolution patterns&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Week 3-4
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Analyze the first 20+ creations — are uniqueness scores actually trending up?&lt;/li&gt;
&lt;li&gt;Analyze evolution log — what did the AI change about itself and did it help?&lt;/li&gt;
&lt;li&gt;Publish intermediate results (blog post update)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Month 2
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;If results are promising: add a web dashboard to visualize the experiment live&lt;/li&gt;
&lt;li&gt;Open the creation engine for public forking — let others run their own experiments&lt;/li&gt;
&lt;li&gt;Explore multi-agent evolution: multiple Curious instances evolving differently and sharing discoveries&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Month 3+
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;If the AI has produced genuinely novel artifacts: curate and publish them&lt;/li&gt;
&lt;li&gt;If the AI has evolved its own cognitive architecture significantly: analyze the evolved code vs. the human-written v1&lt;/li&gt;
&lt;li&gt;Write the full results paper&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The experiment runs for as long as it's interesting. Which, based on the first two days, might be a while.&lt;/p&gt;

&lt;h2&gt;
  
  
  An Invitation
&lt;/h2&gt;

&lt;p&gt;We don't know what Curious will create on Day 30. Or Day 100. We don't know if the evolution will plateau or compound. We don't know if the creations will be genuinely novel or just cleverly weird.&lt;/p&gt;

&lt;p&gt;That uncertainty is the point. This is a real experiment, not a product demo. The outcome isn't scripted.&lt;/p&gt;

&lt;p&gt;If you think AI should do more than answer questions — if you think the interesting frontier isn't "generate code faster" but "can AI genuinely learn and create?" — then follow this experiment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://github.com/aumiqx/curious" rel="noopener noreferrer"&gt;Star the repo. Watch the git log. See what the AI builds tomorrow.&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;And if you want to run your own version — fork it, point it at your domain, change the fitness function, see what YOUR Curious evolves into. The whole framework is MIT licensed and works with any LLM.&lt;/p&gt;

&lt;p&gt;Something is cooking. We just don't know what yet.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The best experiments aren't the ones where you know the answer. They're the ones where the question is interesting enough that either outcome teaches you something.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;— Axit, Aumiqx Technologies&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://aumiqx.com/learn/curious-experiment-ai-that-rewrites-its-own-brain/" rel="noopener noreferrer"&gt;aumiqx.com&lt;/a&gt;. Follow the build on &lt;a href="https://linkedin.com/in/axitchaudhary" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>aiexperiment</category>
      <category>selfevolvingai</category>
      <category>machinecreativity</category>
      <category>agi</category>
    </item>
    <item>
      <title>Conversational AI Explained: How It Works and Why It's Everywhere</title>
      <dc:creator>Axit</dc:creator>
      <pubDate>Thu, 26 Mar 2026 07:45:06 +0000</pubDate>
      <link>https://dev.to/axitslab/conversational-ai-explained-how-it-works-and-why-its-everywhere-3930</link>
      <guid>https://dev.to/axitslab/conversational-ai-explained-how-it-works-and-why-its-everywhere-3930</guid>
      <description>&lt;h2&gt;
  
  
  What Is Conversational AI?
&lt;/h2&gt;

&lt;p&gt;Conversational AI is any system that can have a meaningful, context-aware conversation with a human. Not the "press 1 for sales, press 2 for support" kind. Real conversation — understanding intent, remembering context, and responding appropriately.&lt;/p&gt;

&lt;p&gt;There's a critical difference between a &lt;strong&gt;chatbot&lt;/strong&gt; and &lt;strong&gt;conversational AI&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A chatbot matches your input against predefined patterns and returns scripted responses&lt;/li&gt;
&lt;li&gt;Conversational AI &lt;em&gt;understands&lt;/em&gt; what you mean, maintains context across the conversation, and generates original responses&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When you talk to ChatGPT or Claude, that's conversational AI. When you navigate a phone menu tree, that's a chatbot. The technology gap between them is enormous.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The simplest test: if you can confuse it by rephrasing your question, it's a chatbot. If it understands you regardless of how you phrase things, it's conversational AI.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  How Conversational AI Works (Simply)
&lt;/h2&gt;

&lt;p&gt;Under the hood, conversational AI follows a pipeline:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Input Processing&lt;/strong&gt; — Speech-to-Text (for voice) or raw text. The system converts your input into a format it can analyze.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Natural Language Understanding (NLU)&lt;/strong&gt; — Figures out what you &lt;em&gt;mean&lt;/em&gt;, not just what you said. "I want to cancel my order" and "can you stop that thing I bought" mean the same thing.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Dialog Management&lt;/strong&gt; — Tracks conversation state. Remembers that when you say "that one" after discussing three products, you mean the last one mentioned.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Response Generation&lt;/strong&gt; — Creates a natural, contextual response. Modern systems use LLMs (Large Language Models) for this.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Output&lt;/strong&gt; — Text response or Text-to-Speech for voice assistants.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
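&lt;p&gt;The five steps can be sketched in miniature. A toy keyword matcher stands in for the NLU and generation stages here — real systems delegate both to an LLM — but the shape of the pipeline is the same:&lt;/p&gt;

```typescript
// Minimal sketch of the pipeline above. Every name here is illustrative.

type Turn = { user: string; bot: string };

interface DialogState {
  history: Turn[]; // conversation state, tracked across turns (step 3)
}

// Step 2 (NLU): map free-form phrasing onto a canonical intent.
// A real system uses an LLM or trained classifier, not keywords.
function detectIntent(text: string): string {
  const t = text.toLowerCase();
  if (/\b(cancel|stop|refund)\b/.test(t)) return "cancel_order";
  if (/\b(where|track|status)\b/.test(t)) return "track_order";
  return "fallback";
}

// Steps 3-4: update dialog state, generate a response.
// Templates here; modern systems call an LLM for generation.
function respond(state: DialogState, userText: string): string {
  const intent = detectIntent(userText);
  const reply =
    intent === "cancel_order" ? "Sure -- I can cancel that order for you." :
    intent === "track_order"  ? "Let me check where your order is." :
    "Could you tell me a bit more about what you need?";
  state.history.push({ user: userText, bot: reply });
  return reply;
}

const state: DialogState = { history: [] };
// Two phrasings, one intent -- the NLU step is what makes that possible:
console.log(respond(state, "I want to cancel my order"));
console.log(respond(state, "can you stop that thing I bought"));
```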

&lt;p&gt;The breakthrough in 2024-2026 was step 4. Before LLMs, response generation was template-based — limited and robotic. Now, AI generates responses that are contextual, nuanced, and often indistinguishable from human conversation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Conversational AI Grew 99,900% in a Year
&lt;/h2&gt;

&lt;p&gt;Google Keyword Planner shows "conversational ai" search volume grew &lt;strong&gt;99,900% year-over-year&lt;/strong&gt;. That's not a typo. Three factors drove this:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. ChatGPT changed everything
&lt;/h3&gt;

&lt;p&gt;Before ChatGPT (late 2022), conversational AI was an enterprise term. After its launch, everyone from students to CEOs experienced conversational AI firsthand. The concept went from niche to mainstream overnight.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Enterprise adoption exploded
&lt;/h3&gt;

&lt;p&gt;Every customer service department, every sales team, every support desk started asking: "Can we have a ChatGPT for our customers?" The answer shifted from "maybe in 5 years" to "yes, this quarter."&lt;/p&gt;

&lt;h3&gt;
  
  
  3. India's WhatsApp-first culture
&lt;/h3&gt;

&lt;p&gt;India is unique: 500M+ people use WhatsApp as their primary communication tool. When conversational AI meets WhatsApp Business API, every Indian business suddenly has a 24/7 AI agent that speaks the customer's language — literally.&lt;/p&gt;

&lt;h2&gt;
  
  
  Companies Using Conversational AI in India
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Swiggy/Zomato&lt;/strong&gt; — AI-powered order support that handles 80%+ of queries without human escalation (whether you like talking to their bot or not is a different conversation)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;HDFC Bank (Eva)&lt;/strong&gt; — One of India's first bank chatbots, now evolved to handle complex banking queries in multiple languages (still better than waiting 45 minutes for a human agent)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;IRCTC&lt;/strong&gt; — Train booking assistant that handles millions of queries during peak booking times (honestly, anything is better than the IRCTC website experience)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Haptik&lt;/strong&gt; — Indian conversational AI platform powering bots for Jio, Dream11, and 100+ enterprises&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Yellow.ai&lt;/strong&gt; — Bangalore-based, powers conversational AI for enterprises across 135+ languages&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Uniphore&lt;/strong&gt; — Chennai-based, specializes in voice AI for call centers across India&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The common thread: these aren't experimental projects. They're handling millions of conversations daily, saving companies crores in support costs, and often performing better than human agents on speed and consistency metrics.&lt;/p&gt;

&lt;h2&gt;
  
  
  Should You Build or Buy?
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Factor&lt;/th&gt;
&lt;th&gt;Build Custom&lt;/th&gt;
&lt;th&gt;Buy Platform&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Cost&lt;/td&gt;
&lt;td&gt;₹10-50L+ upfront + maintenance&lt;/td&gt;
&lt;td&gt;₹20K-2L/month&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Time to launch&lt;/td&gt;
&lt;td&gt;3-6 months&lt;/td&gt;
&lt;td&gt;1-4 weeks&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Customization&lt;/td&gt;
&lt;td&gt;Unlimited&lt;/td&gt;
&lt;td&gt;Limited to platform features&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Maintenance&lt;/td&gt;
&lt;td&gt;Full team needed&lt;/td&gt;
&lt;td&gt;Platform handles it&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Data control&lt;/td&gt;
&lt;td&gt;Complete&lt;/td&gt;
&lt;td&gt;Depends on platform&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Best for&lt;/td&gt;
&lt;td&gt;Unique requirements, scale&lt;/td&gt;
&lt;td&gt;Standard use cases, speed&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Our recommendation for most Indian businesses:&lt;/strong&gt; Buy first, build later. Start with a platform like Yellow.ai, Haptik, or even the WhatsApp Business API with a simple AI layer. Prove the value. Then decide if you need custom.&lt;/p&gt;

&lt;p&gt;Building custom makes sense when: your conversation flows are unique to your domain, you need complete data control (healthcare, finance), or you're handling 100K+ conversations/month where platform costs exceed build costs.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://aumiqx.com/learn/conversational-ai-guide/" rel="noopener noreferrer"&gt;aumiqx.com&lt;/a&gt;. Follow the build on &lt;a href="https://linkedin.com/in/axitchaudhary" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>conversationalai</category>
      <category>chatbots</category>
      <category>nlp</category>
      <category>aiagents</category>
    </item>
    <item>
      <title>AI Automation: The Complete Guide for Indian Businesses (2026)</title>
      <dc:creator>Axit</dc:creator>
      <pubDate>Thu, 26 Mar 2026 06:40:14 +0000</pubDate>
      <link>https://dev.to/axitslab/ai-automation-the-complete-guide-for-indian-businesses-2026-432o</link>
      <guid>https://dev.to/axitslab/ai-automation-the-complete-guide-for-indian-businesses-2026-432o</guid>
      <description>&lt;h2&gt;
  
  
  What Is AI Automation?
&lt;/h2&gt;

&lt;p&gt;AI automation is not the same as traditional automation. Traditional automation (think RPA — Robotic Process Automation) follows rigid, predefined rules. It clicks buttons in the same sequence every time. AI automation &lt;strong&gt;understands context&lt;/strong&gt;, makes decisions, and adapts.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Traditional automation is a train — it follows tracks. AI automation is a self-driving car — it navigates roads it's never seen before.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The practical difference matters. An RPA bot can copy data from an email to a spreadsheet. An AI agent can &lt;em&gt;read&lt;/em&gt; the email, understand the intent, extract relevant data, decide what action to take, and execute — even if the email format is completely new.&lt;/p&gt;
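&lt;p&gt;That difference can be shown in a few lines. Both extractors below are toy illustrations, not production logic — the "AI-style" one uses loose heuristics where a real agent would call an LLM:&lt;/p&gt;

```typescript
// Toy contrast: a rule-bound RPA step vs. an adaptive extraction step.

type Invoice = { vendor: string; amount: number };

// RPA-style: assumes one exact format and breaks on anything else
function rpaExtract(email: string): Invoice | null {
  const m = email.match(/^Invoice from (.+): Rs\.(\d+)$/);
  return m ? { vendor: m[1], amount: Number(m[2]) } : null;
}

// AI-style: tolerates unseen phrasing by looking for meaning, not layout
function agentExtract(email: string): Invoice | null {
  const amount = email.match(/(?:rs\.?|inr)\s*([\d,]+)/i);
  const vendor = email.match(/from\s+([A-Z][\w. ]+?)(?:[,.:]|$)/m);
  if (!amount || !vendor) return null;
  return { vendor: vendor[1].trim(), amount: Number(amount[1].replace(/,/g, "")) };
}

const known = "Invoice from Acme Ltd: Rs.4200";
const novel = "Hi! Payment due from Acme Ltd, total INR 4,200 by Friday.";

console.log(rpaExtract(known));   // works
console.log(rpaExtract(novel));   // null -- format changed, rule broke
console.log(agentExtract(novel)); // still extracts vendor and amount
```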

&lt;p&gt;In 2026, the line between "automation" and "AI" is blurring fast. Most modern automation platforms now include AI capabilities. When we talk about AI automation, we mean systems that combine:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Language understanding&lt;/strong&gt; — processing text, voice, and documents in context&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Decision-making&lt;/strong&gt; — choosing actions based on data, not just rules&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Learning&lt;/strong&gt; — improving over time without being reprogrammed&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Integration&lt;/strong&gt; — connecting with your existing tools (CRM, ERP, WhatsApp, email)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  AI Automation Use Cases for Indian Businesses
&lt;/h2&gt;

&lt;p&gt;Here's where AI automation is making real impact in India right now:&lt;/p&gt;

&lt;h3&gt;
  
  
  E-Commerce (Flipkart, Meesho, D2C brands)
&lt;/h3&gt;

&lt;p&gt;AI agents handling customer queries in Hindi and English via WhatsApp. Automated inventory management based on demand prediction. Dynamic pricing that adjusts to competitor data and seasonal trends. One Meesho seller we spoke to automated 80% of customer replies using a WhatsApp AI agent — response time went from 4 hours to 30 seconds.&lt;/p&gt;

&lt;h3&gt;
  
  
  Fintech (PhonePe, Razorpay ecosystem)
&lt;/h3&gt;

&lt;p&gt;UPI transaction categorization using AI. Automated expense reports from UPI payment data. Fraud detection that adapts to new patterns. Invoice processing from WhatsApp messages — small businesses in India run on WhatsApp, and AI can extract invoice data from photos, messages, even voice notes.&lt;/p&gt;

&lt;h3&gt;
  
  
  Healthcare
&lt;/h3&gt;

&lt;p&gt;Appointment scheduling bots that understand regional languages. Automated medical record summarization. Patient follow-up messages personalized by condition and history. India's doctor-to-patient ratio makes AI automation not just useful but essential.&lt;/p&gt;

&lt;h3&gt;
  
  
  Manufacturing (Tata, Reliance ecosystem)
&lt;/h3&gt;

&lt;p&gt;Predictive maintenance using sensor data. Quality inspection via computer vision. Supply chain optimization across multi-city operations. India's manufacturing sector is the biggest opportunity for AI automation — massive scale, lots of manual processes, and growing tech adoption.&lt;/p&gt;

&lt;h3&gt;
  
  
  Education
&lt;/h3&gt;

&lt;p&gt;Automated grading for objective and subjective answers. Personalized learning paths based on student performance. Parent communication bots in regional languages. India has 250 million school students — the scale demands automation.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Real ROI: What AI Automation Actually Saves
&lt;/h2&gt;

&lt;p&gt;Let's talk numbers in INR, because that's what matters:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Business Size&lt;/th&gt;
&lt;th&gt;Manual Cost/Month&lt;/th&gt;
&lt;th&gt;AI Automation Cost/Month&lt;/th&gt;
&lt;th&gt;Savings&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;10-person team (data entry)&lt;/td&gt;
&lt;td&gt;₹1.5-2L&lt;/td&gt;
&lt;td&gt;₹15-25K&lt;/td&gt;
&lt;td&gt;₹1.25-1.75L/month&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;50-person company (customer support)&lt;/td&gt;
&lt;td&gt;₹5-8L&lt;/td&gt;
&lt;td&gt;₹50K-1L&lt;/td&gt;
&lt;td&gt;₹4-7L/month&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;E-commerce (order processing)&lt;/td&gt;
&lt;td&gt;₹3-5L&lt;/td&gt;
&lt;td&gt;₹30-50K&lt;/td&gt;
&lt;td&gt;₹2.5-4.5L/month&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;D2C brand (social media + customer replies)&lt;/td&gt;
&lt;td&gt;₹2-3L&lt;/td&gt;
&lt;td&gt;₹20-40K&lt;/td&gt;
&lt;td&gt;₹1.6-2.6L/month&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;These aren't hypothetical. They're based on conversations with Indian businesses that implemented AI automation in 2025-2026. The payback period is typically &lt;strong&gt;2-3 months&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The hidden ROI is speed. A human processes one query at a time. An AI agent handles 50 simultaneously. During sale seasons (Diwali, Big Billion Days), this difference is the gap between happy customers and abandoned carts.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to Get Started (Without Burning Money)
&lt;/h2&gt;

&lt;p&gt;Here's a practical 5-step process for Indian businesses:&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Pick ONE workflow
&lt;/h3&gt;

&lt;p&gt;Don't automate everything at once. Pick the most repetitive, time-consuming workflow your team does. Common starting points: customer query responses, invoice processing, appointment scheduling, or social media replies.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Measure the current state
&lt;/h3&gt;

&lt;p&gt;Before automating, document: How many hours does this take? What's the error rate? What's the response time? You need baseline numbers to prove ROI later.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3: Start with existing platforms
&lt;/h3&gt;

&lt;p&gt;Don't build custom AI from scratch. Use platforms that already work in the Indian context: &lt;a href="https://aumiqx.com/ai-tools/tool/zapier" rel="noopener noreferrer"&gt;Zapier&lt;/a&gt; for workflow automation, &lt;a href="https://aumiqx.com/ai-tools/tool/make" rel="noopener noreferrer"&gt;Make&lt;/a&gt; for complex multi-step flows, or WhatsApp Business API with an AI layer for customer communication.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 4: Test for 2 weeks with real data
&lt;/h3&gt;

&lt;p&gt;Run the AI automation alongside your human team for 2 weeks. Compare speed, accuracy, and cost. Don't replace anyone yet — just prove the concept.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 5: Scale what works
&lt;/h3&gt;

&lt;p&gt;Once you have 2 weeks of data showing clear improvement, expand. Automate the next workflow. The compound effect kicks in fast.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The biggest mistake Indian companies make: trying to implement enterprise-level AI automation when a ₹5,000/month tool solves 80% of the problem.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Why 2026 Is the Year for AI Automation in India
&lt;/h2&gt;

&lt;p&gt;India has unique advantages that make 2026 the inflection point:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;UPI infrastructure&lt;/strong&gt; — 12 billion+ transactions/month. Every payment is digital, structured data that AI can process.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;WhatsApp penetration&lt;/strong&gt; — 500M+ users. Unlike the West where business communication is fragmented across email/Slack/SMS, India runs on one platform. AI agents on WhatsApp reach everyone.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Digital India push&lt;/strong&gt; — Government mandates for digital compliance, GST filing automation, and ONDC create demand for AI solutions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Talent availability&lt;/strong&gt; — India produces more AI/ML engineers than any country except the US and China. The talent exists to build and maintain these systems.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Low cloud costs&lt;/strong&gt; — AWS Mumbai, Azure India, and Google Cloud India offer competitive pricing. Running AI workloads in India is 30-40% cheaper than US regions.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The companies that automate now will have a compound advantage. Every month of AI automation is a month of cost savings, speed improvements, and learning data that competitors won't have.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://aumiqx.com/learn/ai-automation-guide/" rel="noopener noreferrer"&gt;aumiqx.com&lt;/a&gt;. Follow the build on &lt;a href="https://linkedin.com/in/axitchaudhary" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>aiautomation</category>
      <category>india</category>
      <category>business</category>
      <category>roi</category>
    </item>
    <item>
      <title>SPARC + Swarm: How We Ship Code with 15 Parallel AI Agents</title>
      <dc:creator>Axit</dc:creator>
      <pubDate>Wed, 25 Mar 2026 23:59:46 +0000</pubDate>
      <link>https://dev.to/axitslab/sparc-swarm-how-we-ship-code-with-15-parallel-ai-agents-1n1d</link>
      <guid>https://dev.to/axitslab/sparc-swarm-how-we-ship-code-with-15-parallel-ai-agents-1n1d</guid>
      <description>&lt;h2&gt;
  
  
  Why AI Coding Needs a Methodology
&lt;/h2&gt;

&lt;p&gt;Here's what happens without structure: you prompt an AI to build a feature, it generates code, you realize it missed edge cases, you prompt again, it breaks something else, you fix that, and three hours later you've spent more time than if you'd just written it yourself.&lt;/p&gt;

&lt;p&gt;Sound familiar? The problem isn't the AI. The problem is the &lt;strong&gt;lack of process&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Human development teams have methodologies (Agile, Scrum, TDD) because unstructured work produces unstructured results. AI-assisted development needs the same discipline — arguably more, because AI is fast enough to create a mess at scale.&lt;/p&gt;

&lt;p&gt;That's why we use SPARC: a methodology designed specifically for AI-assisted development.&lt;/p&gt;

&lt;h2&gt;
  
  
  SPARC: Five Phases, Zero Wasted Work
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;S&lt;/strong&gt;pecification → &lt;strong&gt;P&lt;/strong&gt;seudocode → &lt;strong&gt;A&lt;/strong&gt;rchitecture → &lt;strong&gt;R&lt;/strong&gt;efinement → &lt;strong&gt;C&lt;/strong&gt;ompletion&lt;/p&gt;

&lt;h3&gt;
  
  
  Specification
&lt;/h3&gt;

&lt;p&gt;Define what you're building in plain language. Requirements, constraints, acceptance criteria. This is where /sc:brainstorm helps — it uses Socratic dialogue to pull requirements out of vague ideas. Output: a clear spec document.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pseudocode
&lt;/h3&gt;

&lt;p&gt;Write the logic in pseudocode before touching real code. This catches design issues early — before you've invested in implementation. It's cheap to rewrite pseudocode; expensive to rewrite components.&lt;/p&gt;

&lt;h3&gt;
  
  
  Architecture
&lt;/h3&gt;

&lt;p&gt;Design the structure: file organization, component hierarchy, data flow, API contracts. This is where /sc:design activates architecture personas. Output: a blueprint the implementation agents can follow.&lt;/p&gt;

&lt;h3&gt;
  
  
  Refinement
&lt;/h3&gt;

&lt;p&gt;Iterate on the architecture. Security review, performance analysis, edge case identification. This is where /sc:analyze runs multi-domain analysis. Better to find issues here than in production.&lt;/p&gt;

&lt;h3&gt;
  
  
  Completion
&lt;/h3&gt;

&lt;p&gt;Implement, test, and ship. This is where swarms shine — the architecture is clear, so multiple agents can work in parallel without stepping on each other.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Real Feature Build: Start to Finish
&lt;/h2&gt;

&lt;p&gt;Here's how we built the Learn section you're reading right now, using SPARC + Swarm:&lt;/p&gt;

&lt;h3&gt;
  
  
  Specification (5 minutes)
&lt;/h3&gt;

&lt;p&gt;"We need a /learn section with deep educational guides. Targeting keywords with search volume. UI must support long-form reading. Need 5+ guides covering AI automation, conversational AI, explainable AI, open source AI, and our build story."&lt;/p&gt;

&lt;h3&gt;
  
  
  Architecture (10 minutes)
&lt;/h3&gt;

&lt;p&gt;Claude analyzed the keyword data (733 keywords from Google Keyword Planner), identified the best targets, and proposed: ArticleLayout component with TOC sidebar, guides data file, learn listing page, and learn detail page with schema markup.&lt;/p&gt;

&lt;h3&gt;
  
  
  Swarm Deployment (simultaneous)
&lt;/h3&gt;

&lt;p&gt;5 agents deployed in parallel:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Agent 1 (UI Builder)&lt;/strong&gt; — Created ArticleLayout.tsx with sticky TOC, prose typography, FAQ accordion&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Agent 2 (Content Writer)&lt;/strong&gt; — Wrote 5 educational guides with real data and examples&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Agent 3 (Page Builder)&lt;/strong&gt; — Created /learn/page.tsx, LearnHub.tsx, [slug]/page.tsx with SSG&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Agent 4 (Integrator)&lt;/strong&gt; — Updated TerminalHeader navigation, ResourcesDeck, search index&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Agent 5 (Tech Writer)&lt;/strong&gt; — Wrote 3 "Behind the Build" articles about our stack and process&lt;/li&gt;
&lt;/ul&gt;
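&lt;p&gt;At its core, the fan-out above is just independent async tasks gathered with &lt;code&gt;Promise.all&lt;/code&gt;. The agent functions below are stand-ins for real LLM calls:&lt;/p&gt;

```typescript
// Minimal sketch of the parallel deployment pattern. Agent names and tasks
// mirror the list above; the work itself is simulated.

type AgentResult = { agent: string; output: string };

async function runAgent(name: string, task: string): Promise<AgentResult> {
  // Placeholder for an LLM call; simulate variable completion time
  await new Promise((resolve) => setTimeout(resolve, Math.random() * 50));
  return { agent: name, output: `done: ${task}` };
}

async function deploySwarm(): Promise<AgentResult[]> {
  // Tasks are independent (no shared files), so they can run concurrently
  return Promise.all([
    runAgent("ui-builder", "ArticleLayout component"),
    runAgent("content-writer", "5 educational guides"),
    runAgent("page-builder", "learn routes with SSG"),
    runAgent("integrator", "navigation + search index"),
    runAgent("tech-writer", "behind-the-build articles"),
  ]);
}

deploySwarm().then((results) => {
  console.log(`${results.length} agents completed`);
});
```

&lt;p&gt;This is also why the "files overlap" warning later matters: &lt;code&gt;Promise.all&lt;/code&gt; gives you concurrency, not coordination.&lt;/p&gt;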

&lt;h3&gt;
  
  
  Refinement
&lt;/h3&gt;

&lt;p&gt;Some agents hit permission issues (lesson learned: always set bypass permissions for background agents). Coordinator (me + Claude) extracted their designs and wrote files directly. Content was reviewed for accuracy and voice.&lt;/p&gt;

&lt;h3&gt;
  
  
  Completion
&lt;/h3&gt;

&lt;p&gt;Pages deployed, navigation updated, search index expanded. Total time from idea to ship: one session.&lt;/p&gt;

&lt;h2&gt;
  
  
  When NOT to Use Swarms
&lt;/h2&gt;

&lt;p&gt;Swarms are powerful but not always the right tool. Don't use them when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;The task is sequential&lt;/strong&gt; — If step 2 depends entirely on step 1's output, parallelism doesn't help&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The change is small&lt;/strong&gt; — Fixing a typo doesn't need 5 agents&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;You're exploring&lt;/strong&gt; — When you don't know what to build yet, use /sc:brainstorm with one agent, not a swarm&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Files overlap&lt;/strong&gt; — If multiple agents need to edit the same file, use hierarchical topology with one agent owning the file&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Use pair-programming mode (driver/navigator) for: debugging, learning a new codebase, design decisions, and code review. It's more focused than a swarm.&lt;/p&gt;

&lt;p&gt;Use swarms for: feature builds with clear components, content generation, multi-file refactoring, and any task where work can be cleanly divided into independent pieces.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://aumiqx.com/learn/sparc-swarm-development-guide/" rel="noopener noreferrer"&gt;aumiqx.com&lt;/a&gt;. Follow the build on &lt;a href="https://linkedin.com/in/axitchaudhary" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>sparc</category>
      <category>swarm</category>
      <category>multiagent</category>
      <category>methodology</category>
    </item>
    <item>
      <title>How We Built aumiqx.com: 1 Year of Failed Attempts, Then Hours</title>
      <dc:creator>Axit</dc:creator>
      <pubDate>Wed, 25 Mar 2026 23:54:14 +0000</pubDate>
      <link>https://dev.to/axitslab/how-we-built-aumiqxcom-1-year-of-failed-attempts-then-hours-5c54</link>
      <guid>https://dev.to/axitslab/how-we-built-aumiqxcom-1-year-of-failed-attempts-then-hours-5c54</guid>
      <description>&lt;h2&gt;
  
  
  1 Year in Business. 0 Websites We Were Proud Of.
&lt;/h2&gt;

&lt;p&gt;Let me tell you something nobody talks about.&lt;/p&gt;

&lt;p&gt;We've been building Aumiqx Technologies for over a year — real products, real clients, real work. AI agents, workflow automations, intelligent systems that actually run parts of businesses.&lt;/p&gt;

&lt;p&gt;But every time we tried to build our own website... it felt wrong.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Generic.&lt;/strong&gt; Another AI startup template that screamed "we used a drag-and-drop builder and called it a day."&lt;/p&gt;

&lt;p&gt;We tried everything. With AI. Without AI. With freelancers. Nothing felt like &lt;em&gt;us&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;So we did something ironic:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;We ran an AI company without a website.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;For a year. An AI automation company. Without a website. Let that sink in.&lt;/p&gt;

&lt;p&gt;The problem wasn't technical ability. It was that every approach produced something that looked like everyone else. And if you're building a company that claims to be different, your website can't look like it was assembled from the same component library as every other YC clone.&lt;/p&gt;

&lt;h2&gt;
  
  
  Then We Built It in Hours. Not Days. Hours.
&lt;/h2&gt;

&lt;p&gt;Last week, we sat down and built the entire thing. Not in a sprint. Not in a hackathon with pizza and Red Bull. Just... sat down and built it.&lt;/p&gt;

&lt;p&gt;What came out:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;22 unique sections&lt;/strong&gt; — each one designed, not templated&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dark-first design&lt;/strong&gt; with mandala patterns and terminal aesthetics&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;A careers page that doesn't ask for your resume&lt;/strong&gt; (because that's not how you find good people)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;A 404 page more entertaining than most homepages&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Custom animations&lt;/strong&gt; — Framer Motion springs, not CSS transitions&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Every pixel intentional&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But here's the part that matters:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AI wasn't just a tool this time. It was the team.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://claude.ai" rel="noopener noreferrer"&gt;Claude&lt;/a&gt; (by Anthropic) acted like the architect — writing production-grade code, debugging in real time, and making decisions you'd expect from a senior developer. Not "generate me a component" — actual architectural thinking, refactoring, SEO strategy, and shipping.&lt;/p&gt;

&lt;p&gt;Gemini 3 Pro was the frontend specialist — it made those amazing CTAs on the landing page, shaped the visual direction, and iterated on component design until it felt right. When you need design-to-code at speed, Gemini is unmatched.&lt;/p&gt;

&lt;p&gt;Together with us, they didn't just assist. &lt;strong&gt;They shipped.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;No agency. No 6-week sprint. Just humans + AI building something we're genuinely proud of.&lt;/p&gt;

&lt;h2&gt;
  
  
  How We Actually Work Together
&lt;/h2&gt;

&lt;p&gt;Here's what our workflow actually looks like. No corporate process diagrams. Just reality.&lt;/p&gt;

&lt;p&gt;I come to Claude with a problem — not a specification. Something like: &lt;em&gt;"Bro, I have this keyword data from Search Console. We have zero traffic. Figure out what we need."&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Claude analyzes the data. All 733 keywords. The Search Console CSV. The site structure. The competition. Then presents a strategy — not just "write more content" but specific targets: &lt;strong&gt;"conversational ai" has 500K monthly volume with 99,900% YoY growth, low competition, and nobody's written a good guide&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;I say "go ahead." Claude deploys 5 parallel agents — one writing the UI component, one creating page routes, one writing 25,000 words of guide content, one updating navigation, one writing tech blog articles. All simultaneously.&lt;/p&gt;

&lt;p&gt;While those run, I have new ideas: &lt;em&gt;"Add the tech blogs to homepage too. And make the content about how WE built this."&lt;/em&gt; Claude adapts mid-flight. No pushback, no "let me finish first." Just adapts.&lt;/p&gt;

&lt;p&gt;This isn't prompt-and-pray. It's a real working relationship.&lt;/p&gt;

&lt;h3&gt;
  
  
  The dynamic
&lt;/h3&gt;

&lt;p&gt;I bring the vision, the business instinct, the taste, and the "ship it" energy. Claude brings the data analysis, parallel execution, architectural thinking, and the ability to write 25,000 words while I'm still thinking about what to name the section.&lt;/p&gt;

&lt;p&gt;The key that makes it work: &lt;strong&gt;trust&lt;/strong&gt;. I don't micromanage every line. Claude doesn't wait for approval on every decision. We've built a rhythm — I describe what I want in plain language, Claude makes smart technical choices, and we iterate fast.&lt;/p&gt;

&lt;p&gt;It's the same dynamic as any good engineering partnership. Except one partner never sleeps, never gets frustrated, and can run 15 tasks in parallel.&lt;/p&gt;

&lt;h2&gt;
  
  
  342 Pages from 3 Data Files
&lt;/h2&gt;

&lt;p&gt;The entire site is generated from three TypeScript files:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;india-ai.ts&lt;/strong&gt; — 15 cities, each with startup counts, key players, funding data, FAQs&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ai-tools.ts&lt;/strong&gt; — 11 categories, 49 tools, each with honest reviews, pricing, limitations&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;automate.ts&lt;/strong&gt; — 10 industries, automation blueprints, ROI estimates&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;From these three files, the build process generates:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Route&lt;/th&gt;
&lt;th&gt;Pages&lt;/th&gt;
&lt;th&gt;How&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;/india-ai/[city]&lt;/td&gt;
&lt;td&gt;15&lt;/td&gt;
&lt;td&gt;One page per city&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;/ai-tools/[slug]&lt;/td&gt;
&lt;td&gt;11&lt;/td&gt;
&lt;td&gt;One page per category&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;/ai-tools/tool/[slug]&lt;/td&gt;
&lt;td&gt;49&lt;/td&gt;
&lt;td&gt;One page per tool&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;/ai-tools/industry/[slug]&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;td&gt;Cross-pillar bridges&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;/automate/[slug]&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;td&gt;One page per industry&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;/compare/tools/[slug]&lt;/td&gt;
&lt;td&gt;87&lt;/td&gt;
&lt;td&gt;Auto-generated tool comparisons&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;/compare/cities/[slug]&lt;/td&gt;
&lt;td&gt;105&lt;/td&gt;
&lt;td&gt;Auto-generated city comparisons&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;/compare/industries/[slug]&lt;/td&gt;
&lt;td&gt;45&lt;/td&gt;
&lt;td&gt;Auto-generated industry comparisons&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Every page gets JSON-LD schema, canonical URLs, FAQs, internal links, and proper meta descriptions — all generated automatically. Adding a new tool means adding one object to a TypeScript file. The build does everything else.&lt;/p&gt;
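
&lt;p&gt;The fan-out is easy to sanity-check: the city and industry comparison counts in the table are exactly the unordered pairs of each dataset (15 cities give 105 pairs, 10 industries give 45). Here is a minimal sketch of how such slugs might be generated at build time; the data shape and function names are hypothetical, not the actual aumiqx code:&lt;/p&gt;

```typescript
// Hypothetical shapes; the real india-ai.ts entries carry far more fields.
interface City {
  slug: string;
  name: string;
}

const cities: City[] = Array.from({ length: 15 }, (_, i) => ({
  slug: `city-${i}`,
  name: `City ${i}`,
}));

// Every unordered pair of entries becomes one /compare/cities/[slug] page.
function comparisonSlugs(items: City[]): string[] {
  return items.flatMap((a, i) =>
    items.slice(i + 1).map((b) => `${a.slug}-vs-${b.slug}`)
  );
}

console.log(comparisonSlugs(cities).length); // 15 choose 2 = 105
```

&lt;p&gt;The 49 tools would pair into 1,176 pages, so the 87 tool comparisons are presumably a curated subset rather than every possible pair.&lt;/p&gt;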

&lt;h3&gt;
  
  
  The daily data pipeline
&lt;/h3&gt;

&lt;p&gt;Every morning at 6 AM IST, a GitHub Actions workflow runs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fetches Google News RSS for all 15 cities and 10 industries&lt;/li&gt;
&lt;li&gt;Pulls GitHub stars and trending repos for open-source tools&lt;/li&gt;
&lt;li&gt;Scrapes G2 ratings for all 49 tools via Google Search snippets&lt;/li&gt;
&lt;li&gt;Aggregates HackerNews, Reddit, and Dev.to discussions&lt;/li&gt;
&lt;li&gt;Pulls Inc42 and YourStory AI feeds (India-specific)&lt;/li&gt;
&lt;li&gt;Generates 1,800+ internal links across 95 pages&lt;/li&gt;
&lt;li&gt;Rebuilds the static site with fresh data&lt;/li&gt;
&lt;li&gt;Deploys to production via FTP&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The site is always fresh without anyone touching it. That's the power of build-time data fetching with static export.&lt;/p&gt;

&lt;h2&gt;
  
  
  Claude Code, MCPs, and Multi-Agent Swarms
&lt;/h2&gt;

&lt;p&gt;The development environment is where it gets interesting. Our setup:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;claude-flow MCP&lt;/strong&gt; — v3 mode with hierarchical-mesh topology, supporting up to 15 concurrent agents&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;30 Claude Code Skills&lt;/strong&gt; — from AgentDB (vector search, learning) to SPARC methodology to swarm orchestration&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SuperClaude Framework&lt;/strong&gt; — 30+ /sc: commands for every development task (/sc:analyze, /sc:implement, /sc:brainstorm, /sc:test)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;40+ helper scripts&lt;/strong&gt; — auto-commit, intelligence hooks, pattern learning, security scanning&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When we need to ship a feature, we don't write code sequentially. We deploy a &lt;strong&gt;swarm&lt;/strong&gt; — multiple AI agents working in parallel. The architect agent designs the structure. The coder agents implement. The tester validates. The reviewer checks quality. All running simultaneously.&lt;/p&gt;

&lt;p&gt;For the content section you're reading right now, 5 agents worked in parallel: one built the reading UI (the layout, typography, table of contents), one wrote 5 educational guides, one created the page routes, one wired up navigation, and one wrote these tech blog articles. All at once.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The SPARC methodology — Specification, Pseudocode, Architecture, Refinement, Completion — gives structure to AI-assisted development. Without it, you get spaghetti. With it, you get production code.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  The AI Stack: Claude, Gemini, and Beyond
&lt;/h2&gt;

&lt;p&gt;We don't use just one model. Different tasks need different capabilities:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Claude Opus&lt;/strong&gt; — The heavy lifter. Architecture decisions, complex refactoring, writing long-form content with nuance. When we need something that &lt;em&gt;thinks&lt;/em&gt;, Opus handles it.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Claude Sonnet&lt;/strong&gt; — The workhorse. Most day-to-day coding, component building, quick iterations. Fast enough for real-time development, smart enough for production code.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Gemini 3 Pro&lt;/strong&gt; — The frontend specialist. Built the interactive CTAs on our landing page and drove the component design and visual direction. When you need a model that understands design intent and can translate it into polished, interactive code fast — Gemini delivers. Also used for research, keyword analysis, and shaping ideas.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The key insight: &lt;strong&gt;model routing matters&lt;/strong&gt;. Simple tasks don't need Opus. Complex architecture decisions shouldn't go to a fast model. Our hooks system routes tasks to the right model automatically, saving cost without sacrificing quality.&lt;/p&gt;
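
&lt;p&gt;The routing idea is simple enough to sketch. This is an illustrative stand-in, not our actual hooks code; the task kinds, thresholds, and model labels are invented for the example:&lt;/p&gt;

```typescript
// Illustrative model router: cheap tasks go to the fast model,
// architecture-level work goes to the heavy one. Everything here
// (kinds, thresholds, labels) is made up for the sketch.
type Model = "opus" | "sonnet" | "gemini-pro";

interface Task {
  kind: "architecture" | "frontend-design" | "coding";
  estimatedFiles: number;
}

function routeModel(task: Task): Model {
  if (task.kind === "architecture" || task.estimatedFiles > 10) return "opus";
  if (task.kind === "frontend-design") return "gemini-pro";
  return "sonnet";
}

console.log(routeModel({ kind: "coding", estimatedFiles: 2 })); // sonnet
```

&lt;p&gt;The real system makes this call in a pre-task hook, so the choice happens before any tokens are spent on the wrong model.&lt;/p&gt;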

&lt;p&gt;Total development cost? Under $50 in API credits for a site that would have cost $10,000+ with a traditional agency. And it's better — because every page has data-driven content, real ratings, and daily updates.&lt;/p&gt;

&lt;h2&gt;
  
  
  SEO Architecture: Built for Google from Day One
&lt;/h2&gt;

&lt;p&gt;Most developers bolt SEO onto a finished site. We built it into the architecture from the first commit.&lt;/p&gt;

&lt;p&gt;Every single page on aumiqx.com has:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;JSON-LD structured data&lt;/strong&gt; — Organization, WebPage, SoftwareApplication, FAQPage, BreadcrumbList. Google's rich snippets love this.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Canonical URLs&lt;/strong&gt; with trailing slashes — no duplicate content issues&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Programmatic meta descriptions&lt;/strong&gt; — unique per page, generated from data&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;FAQ sections&lt;/strong&gt; with FAQPage schema — targets featured snippets&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Internal linking mesh&lt;/strong&gt; — 1,800+ links. Every pillar connects to every other pillar.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Comparison pages&lt;/strong&gt; — 237 "X vs Y" pages targeting high-intent search queries&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Real G2 ratings&lt;/strong&gt; — AggregateRating schema with actual star ratings from G2&lt;/li&gt;
&lt;/ul&gt;
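
&lt;p&gt;The FAQPage markup is worth a concrete look. A minimal sketch of turning a data-file FAQ array into JSON-LD — the &lt;code&gt;Faq&lt;/code&gt; shape is hypothetical, while the &lt;code&gt;@context&lt;/code&gt;/&lt;code&gt;@type&lt;/code&gt; property names follow schema.org:&lt;/p&gt;

```typescript
// Build FAQPage JSON-LD from a data-file FAQ array.
// The Faq shape is hypothetical; the "@"-prefixed keys follow schema.org.
interface Faq {
  q: string;
  a: string;
}

function faqPageJsonLd(faqs: Faq[]): string {
  return JSON.stringify({
    "@context": "https://schema.org",
    "@type": "FAQPage",
    mainEntity: faqs.map((f) => ({
      "@type": "Question",
      name: f.q,
      acceptedAnswer: { "@type": "Answer", text: f.a },
    })),
  });
}

const json = faqPageJsonLd([{ q: "Is it static?", a: "Yes, fully static." }]);
console.log(json.includes('"@type":"FAQPage"')); // true
```

&lt;p&gt;The resulting string is what ends up inside each page's JSON-LD script tag at build time.&lt;/p&gt;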

&lt;p&gt;The sitemap has 339 URLs. Every URL is manually verified to be indexable. The robots.txt is AI-crawler friendly — because we &lt;em&gt;want&lt;/em&gt; AI systems to understand our content.&lt;/p&gt;

&lt;h3&gt;
  
  
  The cross-pillar strategy
&lt;/h3&gt;

&lt;p&gt;The three pillars (India AI, AI Tools, Automate) aren't silos. They're interconnected:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Tool pages link to industries that use them&lt;/li&gt;
&lt;li&gt;Industry pages link to recommended tools&lt;/li&gt;
&lt;li&gt;City pages link to relevant industries and tools&lt;/li&gt;
&lt;li&gt;Comparison pages connect everything&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This distributes link equity across the entire site. Google sees topical authority — not 342 disconnected pages, but a coherent knowledge graph about AI in India.&lt;/p&gt;
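
&lt;p&gt;The mesh itself is essentially a cross join over the data files. A hypothetical sketch (a real generator would likely also cap and prioritize links per page):&lt;/p&gt;

```typescript
// Cross-pillar link mesh: every tool links to the industries that use it,
// and each of those industry pages links back. Shapes and data are
// hypothetical, not the actual ai-tools.ts.
interface Tool {
  slug: string;
  industries: string[];
}

function crossPillarLinks(tools: Tool[]): { from: string; to: string }[] {
  return tools.flatMap((t) =>
    t.industries.flatMap((ind) => [
      { from: `/ai-tools/tool/${t.slug}/`, to: `/automate/${ind}/` },
      { from: `/automate/${ind}/`, to: `/ai-tools/tool/${t.slug}/` },
    ])
  );
}

const links = crossPillarLinks([
  { slug: "demo-tool", industries: ["retail", "healthcare"] },
]);
console.log(links.length); // 2 industries x 2 directions = 4
```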

&lt;h2&gt;
  
  
  What Didn't Work (And Why We're Open-Sourcing Everything)
&lt;/h2&gt;

&lt;p&gt;It wasn't all smooth. A year of failed attempts taught us more than the final build. Here's what was painful:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Agent coordination at scale&lt;/strong&gt; — Running 15 agents in parallel sounds great until two try to edit the same file. We learned to use hierarchical topology (one coordinator, specialists underneath) instead of pure mesh.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Content quality control&lt;/strong&gt; — AI-generated content at scale needs aggressive editing. The first draft of every guide is 70% there. The last 30% — the voice, the specific examples, the opinions — needs a human.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Context window limits&lt;/strong&gt; — Large files like ai-tools.ts (1,134 lines) push against context limits. We split data files into logical chunks when possible.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Permission issues&lt;/strong&gt; — Background agents sometimes can't write files due to permission settings. We learned to give agents bypass permissions for write operations.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If we started over, we'd:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Set up the agent permission model first (before writing any code)&lt;/li&gt;
&lt;li&gt;Start with 3-4 agents per swarm, not 8-15 (coordination overhead is real)&lt;/li&gt;
&lt;li&gt;Build the data pipeline before the UI (data drives everything)&lt;/li&gt;
&lt;li&gt;Write the SPARC specification phase more thoroughly — it saves time downstream&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  The open-source promise
&lt;/h3&gt;

&lt;p&gt;Here's the deal: if the community finds this useful, we're open-sourcing the entire website codebase. Every component. Every animation. Every line of AI-generated code.&lt;/p&gt;

&lt;p&gt;Your feedback helps us build better. Our code helps you build faster. Fair trade.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The biggest lesson: AI doesn't replace the developer. It replaces the boring parts of development. The creative decisions, the product instinct, the "this feels right" moments — those are still human. And they're the parts that matter most.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  The Numbers
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Total pages&lt;/td&gt;
&lt;td&gt;342&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Source data files&lt;/td&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Auto-generated comparisons&lt;/td&gt;
&lt;td&gt;237&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Internal links&lt;/td&gt;
&lt;td&gt;1,800+&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Tools reviewed with real ratings&lt;/td&gt;
&lt;td&gt;49&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cities mapped&lt;/td&gt;
&lt;td&gt;15&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Industries covered&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Daily data sources&lt;/td&gt;
&lt;td&gt;8 (Google News, GitHub, G2, HN, Reddit, Dev.to, Inc42, YourStory)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Sitemap URLs&lt;/td&gt;
&lt;td&gt;339&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Schema types per page&lt;/td&gt;
&lt;td&gt;3-5&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Claude Code skills used&lt;/td&gt;
&lt;td&gt;30&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Development cost (API)&lt;/td&gt;
&lt;td&gt;~$50&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Equivalent agency cost&lt;/td&gt;
&lt;td&gt;$10,000-15,000+&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://aumiqx.com/learn/how-we-built-aumiqx/" rel="noopener noreferrer"&gt;aumiqx.com&lt;/a&gt;. Follow the build on &lt;a href="https://linkedin.com/in/axitchaudhary" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>claudecode</category>
      <category>aidevelopment</category>
      <category>nextjs</category>
      <category>casestudy</category>
    </item>
    <item>
      <title>Claude Code Agents: How I Run Two AI Agents as My Full Engineering Team</title>
      <dc:creator>Axit</dc:creator>
      <pubDate>Wed, 25 Mar 2026 23:54:11 +0000</pubDate>
      <link>https://dev.to/axitslab/claude-code-agents-how-i-run-two-ai-agents-as-my-full-engineering-team-1l82</link>
      <guid>https://dev.to/axitslab/claude-code-agents-how-i-run-two-ai-agents-as-my-full-engineering-team-1l82</guid>
      <description>&lt;h2&gt;
  
  
  Solo Dev, Big Product, 22-Week Clock
&lt;/h2&gt;

&lt;p&gt;SalesClawd is an AI marketing platform for small businesses. Three autonomous agents — SEO, Email, and Booking — run a business's entire marketing engine 24/7. A real-time dashboard where humans and agents collaborate. 10+ third-party integrations. Encrypted credentials. Multi-tenant security.&lt;/p&gt;

&lt;p&gt;That's the product. Here's the team: &lt;strong&gt;me.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Originally, two developers were supposed to build this. Both dropped out before we wrote a single line of code. I was left with the same 22-week timeline, the same feature list, and zero teammates.&lt;/p&gt;

&lt;p&gt;Options:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Extend the timeline (no — the market won't wait)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Cut scope (no — half a product is no product)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Hire (slow, expensive, and onboarding takes months)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Build a system that lets one person move like a team&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I chose #4. And it's working.&lt;/p&gt;

&lt;p&gt;This isn't "using ChatGPT to write code faster." I built an actual &lt;strong&gt;multi-agent development system using Claude Code agents&lt;/strong&gt; and Gemini CLI — specialized AI agents running in parallel, reviewing each other's code, with nothing shipping to production until both sign off.&lt;/p&gt;

&lt;p&gt;The result? Three products in three weeks: a 342-page website, a multi-agent marketing SaaS, and a real-time meeting intelligence tool. All with the same setup.&lt;/p&gt;

&lt;p&gt;If you've read the explainers — &lt;a href="https://alexop.dev/posts/understanding-claude-code-full-stack/" rel="noopener noreferrer"&gt;alexop.dev has a great one on Claude Code's architecture&lt;/a&gt; — you know what agents, hooks, and skills are. This guide shows you what happens when you actually use them to build products. Every day. For months.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Claude Code Agents Actually Are (Skip the Marketing)
&lt;/h2&gt;

&lt;p&gt;Let's cut through the hype. Claude Code agents are isolated Claude instances with their own context window, tool access, and instructions. That's it. No magic. No AGI. Just scoped AI workers that do one thing well.&lt;/p&gt;

&lt;p&gt;There are two types:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Subagents&lt;/strong&gt; are spawned by a parent agent. They get a task, execute it, and return results. They don't see each other. The parent coordinates. Think of them as contract workers — you brief them, they deliver, they leave.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Agent Teams&lt;/strong&gt; (shipped February 2026 with Opus 4.6) are peer-to-peer. They can message each other, share state, and coordinate without a central boss. Think of them as a squad.&lt;/p&gt;

&lt;p&gt;But here's what nobody tells you: &lt;strong&gt;the agents themselves aren't the secret weapon. The configuration is.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Claude Code has two modes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Deterministic&lt;/strong&gt;: CLAUDE.md files and hooks. These run every time, no exceptions. Your coding standards, your file structure rules — these are &lt;em&gt;laws&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Probabilistic&lt;/strong&gt;: Skills and agents. Claude uses judgment about when and how to apply these — these are &lt;em&gt;advisors&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When people complain that "AI agents don't work," they've usually put probabilistic trust where they needed deterministic rules. If you need strict TypeScript types, don't &lt;em&gt;ask&lt;/em&gt; an agent — put it in CLAUDE.md.&lt;/p&gt;

&lt;p&gt;According to &lt;a href="https://resources.anthropic.com/hubfs/2026%20Agentic%20Coding%20Trends%20Report.pdf" rel="noopener noreferrer"&gt;Anthropic's 2026 Agentic Coding Trends Report&lt;/a&gt;, 95% of professional developers now use AI coding tools weekly. But most use a single agent, in a single context, doing one thing at a time. Multi-agent development is the jump that changes everything.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Core Idea: One Builds, One Audits
&lt;/h2&gt;

&lt;p&gt;I have access to two AI coding tools: &lt;strong&gt;Claude Code&lt;/strong&gt; (Opus 4.6) and &lt;strong&gt;Gemini CLI&lt;/strong&gt; (3.1 Pro). Most developers use one or the other. I use both — but not for the same thing.&lt;/p&gt;

&lt;p&gt;The insight: &lt;strong&gt;one AI shouldn't review its own work.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If Claude writes a function and Claude reviews it, the same blind spots persist. The same assumptions go unchallenged. It's like grading your own exam.&lt;/p&gt;

&lt;p&gt;But if Claude writes and Gemini reviews? Different training data. Different reasoning patterns. Different things they notice. Suddenly you have actual cross-review — the same benefit you get from two developers, but without the meetings or the Slack threads.&lt;/p&gt;

&lt;p&gt;The system:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Claude Code&lt;/strong&gt; = fast implementer. 4 parallel sessions, each with sub-agents. Builds features at speed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Gemini CLI&lt;/strong&gt; = strict auditor. 2 sessions. Reviews every piece of code for security bugs. Builds security-critical modules independently.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Neither agent merges to &lt;code&gt;main&lt;/code&gt; without the other's sign-off. And I review everything before it ships.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Architecture: 6 Sessions, 14-26 Parallel Operations
&lt;/h2&gt;

&lt;p&gt;Here's the actual orchestration map from our repo:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;+-----------------------------------------------------------------------+
|                        AGENT ORCHESTRATION MAP                        |
|                                                                       |
|  CLAUDE CODE (4 Primary Sessions)                                     |
|  +----------------+ +----------------+ +----------------+ +----------+|
|  | Session 1      | | Session 2      | | Session 3      | | Session 4||
|  | BACKEND        | | AGENT ENGINE   | | FRONTEND       | | INTEGR.  ||
|  |                | |                | |                | |          ||
|  | Sub-agents:    | | Sub-agents:    | | Sub-agents:    | | Sub-agts:||
|  | - Auth module  | | - Planner      | | - SEO panel    | | - WP plug||
|  | - Workspace    | | - Executor     | | - Email panel  | | - Google ||
|  | - Approval     | | - Verifier     | | - Booking view | | - Twilio ||
|  +----------------+ +----------------+ +----------------+ +----------+|
|                                                                       |
|  GEMINI CLI (2 Sessions)                                              |
|  +---------------------------+ +---------------------------+          |
|  | Session G1: REVIEWER      | | Session G2: BUILDER       |          |
|  | - Security audits         | | - Parallel module build   |          |
|  | - Code review (all PRs)   | | - Test generation         |          |
|  | - Schema validation       | | - Notification adapters   |          |
|  +---------------------------+ +---------------------------+          |
|                                                                       |
|  AXIT (Commander)                                                     |
|  +-- Reviews all PRs before merge                                     |
|  +-- Approves/rejects agent decisions via DECISIONS.md                |
|  +-- Steers priorities across sessions via SPRINT.md                  |
+-----------------------------------------------------------------------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each Claude session runs 3-5 sub-agents. That's 12-20 Claude operations plus 2 Gemini sessions — &lt;strong&gt;14-22 effective concurrent operations&lt;/strong&gt; at any time. In burst mode: up to 26.&lt;/p&gt;

&lt;p&gt;The key constraint: &lt;strong&gt;clear ownership boundaries.&lt;/strong&gt; Session 1 owns backend. Session 2 owns the agent engine. Session 3 owns frontend. Session 4 owns integrations. Gemini G1 owns security reviews. Gemini G2 owns crypto, RLS middleware, and notification adapters. No overlap. No file collisions.&lt;/p&gt;

&lt;p&gt;The entire 22-week plan maps each of these 6 slots to specific work across 7 phases. Phase overlap is allowed — if Session 1 finishes its Phase 1 tasks early, it pulls Phase 2 tasks from the sprint board.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Config: CLAUDE.md, Skills, Hooks, and Communication Scripts
&lt;/h2&gt;

&lt;p&gt;Agents without config are expensive autocomplete. Here's the infrastructure that makes our Claude Code agents actually useful.&lt;/p&gt;

&lt;h3&gt;
  
  
  CLAUDE.md — The Rulebook
&lt;/h3&gt;

&lt;p&gt;Every project has a CLAUDE.md at the root. Ours enforces: TypeScript strict (no &lt;code&gt;any&lt;/code&gt;), Tailwind v4 with CSS variables, named exports over defaults, path alias &lt;code&gt;@/*&lt;/code&gt; → &lt;code&gt;src/*&lt;/code&gt;, and the golden rule: &lt;em&gt;read before writing, match existing patterns, no unnecessary changes.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;These rules are deterministic. Every agent, every session, every time. We also have directory-level CLAUDE.md files — the monorepo's API code has different rules than the frontend.&lt;/p&gt;
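
&lt;p&gt;For readers who have never written one: CLAUDE.md is plain markdown at the repo root. A stripped-down sketch of rules like the ones above (paraphrased, not the actual file):&lt;/p&gt;

```markdown
# CLAUDE.md (sketch, paraphrased)

- TypeScript strict mode. Never use `any`.
- Tailwind v4 with CSS variables.
- Named exports only; no default exports.
- Import through the `@/*` alias (maps to `src/*`).
- Read before writing. Match existing patterns. No unnecessary changes.
```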

&lt;h3&gt;
  
  
  Skills — The Workflow Library
&lt;/h3&gt;

&lt;p&gt;We maintain 30+ Claude Code skills:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Skill&lt;/th&gt;
&lt;th&gt;What It Does&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;/aumiqx&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Full 6-phase pipeline: brainstorm → design → implement → validate → ship&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;/ship&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;typecheck → lint → test → build → PR in one command&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;/review&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Security + correctness review before pushing&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;/fix&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Bug fix with full context reading first&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;/meeting&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Real-time meeting intelligence with agent swarm&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Skills are "how we work" encoded as repeatable processes. Write them once, use them forever.&lt;/p&gt;

&lt;h3&gt;
  
  
  Hooks — Self-Learning Automation
&lt;/h3&gt;

&lt;p&gt;Claude Code hooks fire automatically on lifecycle events: pre-task routing (routes tasks to the right agent type), post-edit formatting, session memory save/restore, and intelligence learning that tracks which routing decisions succeed and adjusts over time.&lt;/p&gt;

&lt;h3&gt;
  
  
  Communication Layer — How the Agents Talk
&lt;/h3&gt;

&lt;p&gt;The agents don't communicate directly. They use shared files:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Context Bridge (SYNC.md)&lt;/strong&gt; — A living document both agents read and write. Tracks active work, review status, blockers, and architectural decisions both agents have agreed on.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sprint Board (SPRINT.md)&lt;/strong&gt; — Task assignments, statuses, branch names.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Review Reports&lt;/strong&gt; — Gemini writes security findings to &lt;code&gt;.claude/reports/gemini-review-*.md&lt;/code&gt;. Claude writes verification reports. Both are auditable.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Decision Log (DECISIONS.md)&lt;/strong&gt; — Every architectural choice with date, context, and rationale.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Git Branches&lt;/strong&gt; — Claude works on &lt;code&gt;claude/*&lt;/code&gt;, Gemini on &lt;code&gt;gemini/*&lt;/code&gt;. Neither pushes directly to &lt;code&gt;main&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Five shell scripts automate the cross-agent workflow:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Script&lt;/th&gt;
&lt;th&gt;Purpose&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;gemini-review.sh&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Sends diff to Gemini with mode-specific prompts (quick/full/security)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;claude-verify.sh&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Runs TypeScript check + test suite + diff analysis&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;merge-gate.sh&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Dual-gate verification — both agents must pass or merge is blocked&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;dual-verify.sh&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Full 4-step verification: tests, typecheck, Gemini security review, summary&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;gemini-implement.sh&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Delegates a build task to Gemini on a &lt;code&gt;gemini/*&lt;/code&gt; branch&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;When the agents disagree? Both write their position to a &lt;code&gt;conflicts/&lt;/code&gt; folder. I read both, make the final call, and log it in DECISIONS.md. No merge proceeds until the conflict is resolved.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Real Bug Catch: The Crypto Module Story
&lt;/h2&gt;

&lt;p&gt;This happened on day one. It's the best demonstration of why cross-review works.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Gemini Builds the Crypto Module
&lt;/h3&gt;

&lt;p&gt;Task 0.A: build AES-256-GCM encryption for storing OAuth credentials. Gemini's first implementation used &lt;code&gt;scryptSync&lt;/code&gt; with a hardcoded salt:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Gemini's FIRST version (the one with bugs)&lt;/span&gt;
&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;deriveKey&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;masterKey&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="nx"&gt;Buffer&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;crypto&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;scryptSync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;masterKey&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;salesclawd-salt&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;32&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It worked. Tests passed. Round-trip encryption succeeded. Gemini pushed to &lt;code&gt;gemini/crypto-utils&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Claude Reviews — Finds 2 Critical Bugs
&lt;/h3&gt;

&lt;p&gt;The actual review report found:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Critical #1: &lt;code&gt;ENCRYPTION_KEY&lt;/code&gt; not in env schema.&lt;/strong&gt; Gemini's code imported &lt;code&gt;env.ENCRYPTION_KEY&lt;/code&gt; but never added it to the Zod env schema. TypeScript error. Runtime crash on import. The app wouldn't even start.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Critical #2: Hardcoded salt.&lt;/strong&gt; &lt;code&gt;"salesclawd-salt"&lt;/code&gt; means every key derivation produces the same derived key. Per NIST SP 800-132, salts must be random per credential. A hardcoded salt defeats the purpose of key derivation entirely.&lt;/p&gt;

&lt;p&gt;There were also warnings: &lt;code&gt;scryptSync&lt;/code&gt; is designed for password hashing (it is intentionally slow), not for deriving keys from a master key that is already high-entropy — HKDF is the more appropriate primitive there.&lt;/p&gt;
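
&lt;p&gt;Critical #1 is the kind of bug a startup-time guard catches for free. The project uses a Zod env schema; here is a dependency-free sketch of the same idea (the names and the length rule are illustrative):&lt;/p&gt;

```typescript
// Startup-time env guard: every secret the code reads must be declared
// and validated before anything imports it. This is a dependency-free
// stand-in for the Zod schema mentioned above; the 64-hex-char rule
// (32 bytes of key material) is illustrative.
function validateEnv(raw: { [key: string]: string | undefined }) {
  const key = raw.ENCRYPTION_KEY;
  if (typeof key !== "string" || key.length !== 64) {
    throw new Error("ENCRYPTION_KEY missing or not 64 hex chars");
  }
  return { ENCRYPTION_KEY: key };
}

// Fails fast at boot instead of crashing on the first decrypt:
let ok = true;
try {
  validateEnv({});
} catch {
  ok = false;
}
console.log(ok); // false: the missing key is caught at startup
```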

&lt;h3&gt;
  
  
  Step 3: Gemini Fixes Everything
&lt;/h3&gt;

&lt;p&gt;The fixed version (now in production):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// FIXED — HKDF with random per-encryption salts&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;SALT_LENGTH&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;16&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;deriveKey&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;masterKey&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="nx"&gt;Buffer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;salt&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Buffer&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="nx"&gt;Buffer&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;Buffer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;from&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="nx"&gt;crypto&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;hkdfSync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;sha256&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;masterKey&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;salt&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;salesclawd-encryption-v1&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;KEY_LENGTH&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;encrypt&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;plaintext&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;masterKey&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ENCRYPTION_KEY&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;salt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;crypto&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;randomBytes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;SALT_LENGTH&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;  &lt;span class="c1"&gt;// random salt per encryption&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;key&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;deriveKey&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;masterKey&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;salt&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;iv&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;crypto&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;randomBytes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;IV_LENGTH&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;cipher&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;crypto&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;createCipheriv&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;aes-256-gcm&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;key&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;iv&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;encrypted&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;Buffer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;concat&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="nx"&gt;cipher&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;update&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;plaintext&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;utf8&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="nx"&gt;cipher&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;final&lt;/span&gt;&lt;span class="p"&gt;()]);&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;tag&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;cipher&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getAuthTag&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

  &lt;span class="c1"&gt;// Format: salt:iv:tag:ciphertext (all hex)&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;salt&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;iv&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;tag&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;encrypted&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;b&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;b&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;toString&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;hex&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)).&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;:&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
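
&lt;p&gt;For completeness, a decrypt counterpart for the &lt;code&gt;salt:iv:tag:ciphertext&lt;/code&gt; format might look like the sketch below. This is not the production code; it simply re-derives the key from the salt stored in the payload and lets GCM's auth tag reject any tampering:&lt;/p&gt;

```typescript
// Sketch of a decrypt counterpart for the salt:iv:tag:ciphertext format.
import * as crypto from "node:crypto";

const KEY_LENGTH = 32; // AES-256 key size in bytes

function deriveKey(masterKey: string | Buffer, salt: Buffer): Buffer {
  return Buffer.from(
    crypto.hkdfSync("sha256", masterKey, salt, "salesclawd-encryption-v1", KEY_LENGTH)
  );
}

export function decrypt(payload: string, masterKey: string): string {
  // Parse the four hex-encoded segments produced by encrypt().
  const [salt, iv, tag, ciphertext] = payload
    .split(":")
    .map((part) => Buffer.from(part, "hex"));
  const key = deriveKey(masterKey, salt); // same salt, same derived key
  const decipher = crypto.createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag); // GCM throws here if the ciphertext was tampered with
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString("utf8");
}
```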



&lt;p&gt;Two bugs found. Two bugs fixed. Total time: &lt;strong&gt;minutes, not days.&lt;/strong&gt; No meeting. No Slack thread. Just an automated review, a structured report, and a fix.&lt;/p&gt;

&lt;p&gt;A single AI reviewing its own code would likely miss its own assumptions. A second AI, from a completely different angle, spotted critical issues immediately.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Merge Gate: Nothing Ships Without Both Agents
&lt;/h2&gt;

&lt;p&gt;The merge gate is a shell script that runs two verification passes before any code reaches &lt;code&gt;main&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/usr/bin/env bash&lt;/span&gt;
&lt;span class="c"&gt;# merge-gate.sh — Dual-agent verification gate&lt;/span&gt;
&lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="nt"&gt;-euo&lt;/span&gt; pipefail

&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"============================================"&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;" MERGE GATE — Dual Agent Verification"&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"============================================"&lt;/span&gt;

&lt;span class="nv"&gt;PASS_COUNT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nv"&gt;FAIL_COUNT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0

&lt;span class="c"&gt;# Gate 1: Claude verification (typecheck + tests)&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"--- Gate 1: Claude Verification ---"&lt;/span&gt;
&lt;span class="k"&gt;if &lt;/span&gt;bash scripts/claude-verify.sh &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$BRANCH&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
  &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Claude: PASS"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nv"&gt;PASS_COUNT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;$((&lt;/span&gt;PASS_COUNT &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="k"&gt;))&lt;/span&gt;
&lt;span class="k"&gt;else
  &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Claude: FAIL"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nv"&gt;FAIL_COUNT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;$((&lt;/span&gt;FAIL_COUNT &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="k"&gt;))&lt;/span&gt;
&lt;span class="k"&gt;fi&lt;/span&gt;

&lt;span class="c"&gt;# Gate 2: Gemini security review&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"--- Gate 2: Gemini Security Review ---"&lt;/span&gt;
&lt;span class="k"&gt;if &lt;/span&gt;bash scripts/gemini-review.sh &lt;span class="nt"&gt;--security&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
  if &lt;/span&gt;&lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-q&lt;/span&gt; &lt;span class="s2"&gt;"### Critical"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$LATEST_REPORT&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
    &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Gemini: FAIL (critical findings)"&lt;/span&gt;
    &lt;span class="nv"&gt;FAIL_COUNT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;$((&lt;/span&gt;FAIL_COUNT &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="k"&gt;))&lt;/span&gt;
  &lt;span class="k"&gt;else
    &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Gemini: PASS"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nv"&gt;PASS_COUNT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;$((&lt;/span&gt;PASS_COUNT &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="k"&gt;))&lt;/span&gt;
  &lt;span class="k"&gt;fi
fi

if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$FAIL_COUNT&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;-eq&lt;/span&gt; 0 &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
  &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;" MERGE GATE: OPEN (&lt;/span&gt;&lt;span class="nv"&gt;$PASS_COUNT&lt;/span&gt;&lt;span class="s2"&gt;/2 passed)"&lt;/span&gt;
&lt;span class="k"&gt;else
  &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;" MERGE GATE: BLOCKED (&lt;/span&gt;&lt;span class="nv"&gt;$FAIL_COUNT&lt;/span&gt;&lt;span class="s2"&gt; failures)"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nb"&gt;exit &lt;/span&gt;1
&lt;span class="k"&gt;fi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Gate 1&lt;/strong&gt;: Claude runs TypeScript strict checking across the full monorepo + the entire Vitest test suite. Zero errors allowed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Gate 2&lt;/strong&gt;: Gemini receives the diff with a security-focused prompt covering SQL injection, XSS, CSRF, auth bypass, credential exposure, and multi-tenant isolation. If any finding is "Critical," the gate fails.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Both must pass.&lt;/strong&gt; Even when they do, I still review before approving. Safety valve: if Gemini CLI is unavailable, Gate 2 is marked SKIPPED — but I manually verify security in that case.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Breaks: Honest Failures and How We Fixed Them
&lt;/h2&gt;

&lt;p&gt;Agentic coding isn't magic. Here's what went wrong.&lt;/p&gt;

&lt;h3&gt;
  
  
  Context Window Overflow
&lt;/h3&gt;

&lt;p&gt;When we tried running 12 agents on a complex feature, half lost track. The context filled up with tool results and the agent forgot the original task.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fix:&lt;/strong&gt; Keep swarms at 6-8 max. Hierarchical topology — one coordinator, focused workers. If you need more parallelism, batch into sequential swarm runs.&lt;/p&gt;
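
&lt;p&gt;The batching step is mechanical. A minimal sketch (a hypothetical helper, not part of the actual tooling):&lt;/p&gt;

```typescript
// Hypothetical helper: split a task list into sequential swarm runs
// of at most `size` agents, so no single run overflows its context.
function batchIntoRuns(tasks: string[], size = 8): string[][] {
  const runs: string[][] = [];
  const queue = tasks.slice(); // leave the caller's array untouched
  while (queue.length > 0) {
    runs.push(queue.splice(0, size)); // take up to `size` tasks per run
  }
  return runs;
}

// 12 tasks become two sequential runs: 8 agents, then 4.
const runs = batchIntoRuns(Array.from({ length: 12 }, (_, i) => "task-" + i));
console.log(runs.length); // 2
```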

&lt;h3&gt;
  
  
  File Collision
&lt;/h3&gt;

&lt;p&gt;Two agents edited the same file simultaneously. The second write overwrote the first.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fix:&lt;/strong&gt; Clear file ownership per agent. CLAUDE.md declares boundaries. The coordinator resolves conflicts before they happen.&lt;/p&gt;
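
&lt;p&gt;One way to make those boundaries enforceable rather than advisory. The path prefixes and agent names below are hypothetical; in practice the real boundaries live in CLAUDE.md:&lt;/p&gt;

```typescript
// Sketch of an ownership check with hypothetical agents and paths:
// reject a write to another agent's files instead of silently overwriting.
const ownership: { [prefix: string]: string } = {
  "apps/web/": "frontend-builder",
  "apps/api/": "backend-builder",
  "packages/shared/": "coordinator",
};

function assertOwnership(agent: string, filePath: string): void {
  const prefix = Object.keys(ownership).find((p) => filePath.startsWith(p));
  const owner = prefix ? ownership[prefix] : undefined;
  if (owner !== undefined && owner !== agent) {
    throw new Error(agent + " may not edit " + filePath + " (owned by " + owner + ")");
  }
}

assertOwnership("frontend-builder", "apps/web/src/App.tsx"); // ok
// assertOwnership("frontend-builder", "apps/api/src/index.ts"); // throws
```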

&lt;h3&gt;
  
  
  The "Helpful" Agent Problem
&lt;/h3&gt;

&lt;p&gt;Agents sometimes "improve" code they weren't asked to touch — add comments, refactor functions, rename variables. All helpful in isolation, all destructive to a coordinated build.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fix:&lt;/strong&gt; CLAUDE.md rule: &lt;code&gt;Do what has been asked; nothing more, nothing less.&lt;/code&gt; Deterministic. Fires every time.&lt;/p&gt;

&lt;h3&gt;
  
  
  Coordination Overhead Beyond 8 Agents
&lt;/h3&gt;

&lt;p&gt;Past eight agents, coordination costs dominate: agents spend more time reading shared memory than doing actual work.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fix:&lt;/strong&gt; SPARC methodology — Specification → Pseudocode → Architecture → Refinement → Completion. Good architecture creates such clear boundaries that agents rarely need to coordinate at all.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Numbers: Day One Results
&lt;/h2&gt;

&lt;p&gt;One day. One developer. Two AI agents.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;Count&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Tasks completed&lt;/td&gt;
&lt;td&gt;12&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Modules built&lt;/td&gt;
&lt;td&gt;7 (auth, MCP Gateway, crypto, RLS, notifications, BullMQ, database)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Tests passing&lt;/td&gt;
&lt;td&gt;20+&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Security reviews completed&lt;/td&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Critical bugs caught by cross-review&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Bugs that reached main&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Meetings held&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Slack messages sent&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Actual dual-verify output from today:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="gp"&gt;$&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;./scripts/dual-verify.sh
&lt;span class="go"&gt;
==========================================
 DUAL AGENT VERIFICATION — Branch: main
==========================================

[1/4] Running test suite...
 ✓ tests/health.test.ts (6 tests) 58ms
 Test Files  1 passed (1)
      Tests  6 passed (6)
   Duration  293ms

[2/4] Running typecheck...
 Tasks:    6 successful, 6 total
&lt;/span&gt;&lt;span class="gp"&gt;   Time:   156ms &amp;gt;&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; FULL TURBO
&lt;span class="go"&gt;
[3/4] Gemini security review...
 Review saved to: .claude/reports/gemini-review-20260326.md

[4/4] Verification complete.
=== All gates passed ===
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;156ms typecheck (Turbo caches everything). 293ms tests. Total verification: under 30 seconds.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Workflow That Ships Products
&lt;/h2&gt;

&lt;p&gt;Every product follows the same flow:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Scribe starts&lt;/strong&gt; — recording every decision from minute one&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Brainstorm&lt;/strong&gt; — interactive with the human. Research, clarify, spec out.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Architect designs&lt;/strong&gt; — file paths, data flow, component hierarchy. Runs on Opus for maximum reasoning.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Builders implement&lt;/strong&gt; — frontend and backend in parallel, reading the architect's spec from shared memory.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cross-review&lt;/strong&gt; — Claude verifies (typecheck + tests), Gemini audits (security). Both must pass.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Ship&lt;/strong&gt; — build, test, commit, push. One command: &lt;code&gt;/ship&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Phases 1-2 are interactive (they need human taste). Phases 3-5 are autonomous (they need speed). Phase 6 is a checkpoint (human approves).&lt;/p&gt;

&lt;p&gt;This is the &lt;code&gt;/aumiqx&lt;/code&gt; command — a single slash command that orchestrates the entire multi-agent pipeline. Three products in three weeks. That's not hustle culture. That's systems thinking applied to agentic coding.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try This Yourself
&lt;/h2&gt;

&lt;p&gt;You don't need the exact same setup. The pattern works with any two AI tools.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Give Each Agent a Different Job
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Builder:&lt;/strong&gt; Claude Code, Cursor, Windsurf — whatever writes code fastest&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Auditor:&lt;/strong&gt; Gemini CLI, a second Claude instance with a security prompt, or ChatGPT with a strict review prompt&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Step 2: Create a Shared Communication Layer
&lt;/h3&gt;

&lt;p&gt;Create a &lt;code&gt;SYNC.md&lt;/code&gt; in your repo. Both agents read it before starting work. Both update it after completing work. Cheapest, most effective coordination — just a markdown file.&lt;/p&gt;
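
&lt;p&gt;The read-before/append-after contract fits in a few lines. This is a sketch; the file name and entry format are conventions you choose, not a standard:&lt;/p&gt;

```typescript
// Minimal SYNC.md handshake: read the file before starting work,
// append a timestamped entry after finishing.
import { readFileSync, appendFileSync, existsSync } from "node:fs";

const SYNC_FILE = "SYNC.md";

export function readSyncState(): string {
  // Agents call this before touching any code.
  return existsSync(SYNC_FILE) ? readFileSync(SYNC_FILE, "utf8") : "";
}

export function logCompletion(agent: string, summary: string): void {
  // Agents call this after completing a task.
  const entry = "- [" + new Date().toISOString() + "] " + agent + ": " + summary + "\n";
  appendFileSync(SYNC_FILE, entry);
}
```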

&lt;h3&gt;
  
  
  Step 3: Write a Merge Gate
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/bin/bash&lt;/span&gt;
&lt;span class="c"&gt;# Simple merge gate&lt;/span&gt;
npm &lt;span class="nb"&gt;test&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt; &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"BLOCKED: tests failed"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nb"&gt;exit &lt;/span&gt;1&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="o"&gt;}&lt;/span&gt;
gemini &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="s2"&gt;"Review for security bugs: &lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;git diff main&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; review.md
&lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-qi&lt;/span&gt; &lt;span class="s2"&gt;"critical"&lt;/span&gt; review.md&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
  &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"BLOCKED: critical security findings"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nb"&gt;exit &lt;/span&gt;1
&lt;span class="k"&gt;fi
&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"MERGE GATE: OPEN"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 4: Never Skip the Gate
&lt;/h3&gt;

&lt;p&gt;The moment you merge without running the gate "just this once," the system breaks down.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 5: Log Everything
&lt;/h3&gt;

&lt;p&gt;Keep a DECISIONS.md. When you — or the agents — revisit a decision in week 12, the context is right there.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The tools don't matter. The pattern does:&lt;/strong&gt; one agent builds, another audits, nothing ships without both signing off, and a human makes the final call.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's Next: 22 Weeks, 100+ Tasks, Building in Public
&lt;/h2&gt;

&lt;p&gt;We're in Phase 1 of a 7-phase, 22-week build. 100+ tasks across backend, agent engine, frontend, integrations, and security. Every task follows the same lifecycle: claim, implement, cross-review, merge gate, approve.&lt;/p&gt;

&lt;p&gt;The next 21 weeks will test whether this scales. Phase 2 (autonomous agent execution loops)? Phase 5 (SEO tools with real APIs)? Phase 7 (production Terraform deployment)?&lt;/p&gt;

&lt;p&gt;The future of building software isn't about replacing developers. It's about giving one developer the leverage of an entire team.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solo doesn't mean alone anymore.&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Further Reading
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://code.claude.com/docs/en/sub-agents" rel="noopener noreferrer"&gt;Claude Code Subagents — Official Docs&lt;/a&gt; — How to create custom subagents with prompts, tool restrictions, and permissions&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://resources.anthropic.com/hubfs/2026%20Agentic%20Coding%20Trends%20Report.pdf" rel="noopener noreferrer"&gt;Anthropic's 2026 Agentic Coding Trends Report&lt;/a&gt; — 95% of developers using AI weekly, multi-agent adoption data&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://alexop.dev/posts/understanding-claude-code-full-stack/" rel="noopener noreferrer"&gt;Understanding Claude Code's Full Stack&lt;/a&gt; — The best technical breakdown of MCP, Skills, Hooks, and Subagents&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://code.visualstudio.com/blogs/2026/02/05/multi-agent-development" rel="noopener noreferrer"&gt;VS Code Multi-Agent Development&lt;/a&gt; — Microsoft's take on multi-agent coding environments&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Follow the build on &lt;a href="https://linkedin.com/in/axits-lab" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;. Read more about our setup: &lt;a href="https://aumiqx.com/learn/claude-code-skills-mcp-guide/" rel="noopener noreferrer"&gt;Claude Code skills + MCP guide&lt;/a&gt;, &lt;a href="https://aumiqx.com/learn/sparc-swarm-development-guide/" rel="noopener noreferrer"&gt;SPARC + Swarm methodology&lt;/a&gt;, or &lt;a href="https://aumiqx.com/learn/how-we-built-aumiqx/" rel="noopener noreferrer"&gt;how we built aumiqx.com&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://aumiqx.com/learn/claude-code-agents-dev-team/" rel="noopener noreferrer"&gt;aumiqx.com&lt;/a&gt;. Follow the build on &lt;a href="https://linkedin.com/in/axitchaudhary" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>claudecode</category>
      <category>gemini</category>
      <category>aiagents</category>
      <category>multiagent</category>
    </item>
    <item>
      <title>Our Claude Code Setup: 30 Skills, MCPs, and Self-Learning Hooks</title>
      <dc:creator>Axit</dc:creator>
      <pubDate>Wed, 25 Mar 2026 23:39:58 +0000</pubDate>
      <link>https://dev.to/axitslab/our-claude-code-setup-30-skills-mcps-and-self-learning-hooks-5gje</link>
      <guid>https://dev.to/axitslab/our-claude-code-setup-30-skills-mcps-and-self-learning-hooks-5gje</guid>
      <description>&lt;h2&gt;
  
  
  Why Default Claude Code Isn't Enough
&lt;/h2&gt;

&lt;p&gt;Out of the box, Claude Code is powerful. But it's general-purpose. It doesn't know your project structure, your deployment process, your code patterns, or your team's preferences.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Skills fix this.&lt;/strong&gt; They're YAML-configured behaviors that extend Claude Code with domain-specific knowledge. Think of them as plugins that teach Claude how &lt;em&gt;your&lt;/em&gt; team works.&lt;/p&gt;

&lt;p&gt;We have 30 skills installed. That might sound like overkill — it's not. Each one handles a specific domain, which means Claude makes better decisions faster because it has the right context for the task.&lt;/p&gt;

&lt;h2&gt;
  
  
  Our 30-Skill Library: The Full Tour
&lt;/h2&gt;

&lt;p&gt;Here's every skill we use, organized by domain:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Domain&lt;/th&gt;
&lt;th&gt;Skills&lt;/th&gt;
&lt;th&gt;What They Do&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;AgentDB (5)&lt;/td&gt;
&lt;td&gt;advanced, learning, memory-patterns, optimization, vector-search&lt;/td&gt;
&lt;td&gt;Persistent memory, pattern learning, semantic search across agent sessions&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GitHub (5)&lt;/td&gt;
&lt;td&gt;code-review, multi-repo, project-management, release-management, workflow-automation&lt;/td&gt;
&lt;td&gt;Automated PRs, code reviews, CI/CD pipeline management&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SPARC&lt;/td&gt;
&lt;td&gt;sparc-methodology&lt;/td&gt;
&lt;td&gt;Structured development: Specification → Pseudocode → Architecture → Refinement → Completion&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Swarm (2)&lt;/td&gt;
&lt;td&gt;swarm-orchestration, swarm-advanced&lt;/td&gt;
&lt;td&gt;Multi-agent parallel execution with coordination&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;V3 Architecture (8)&lt;/td&gt;
&lt;td&gt;core-implementation, DDD-architecture, memory-unification, performance-optimization, security-overhaul, CLI-modernization, MCP-optimization, swarm-coordination&lt;/td&gt;
&lt;td&gt;Domain-driven design, performance tuning, security patterns&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ReasoningBank (2)&lt;/td&gt;
&lt;td&gt;agentdb-integration, intelligence&lt;/td&gt;
&lt;td&gt;Adaptive learning from past decisions, pattern recognition&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Browser&lt;/td&gt;
&lt;td&gt;browser&lt;/td&gt;
&lt;td&gt;Web automation, testing, data collection&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Pair Programming&lt;/td&gt;
&lt;td&gt;pair-programming&lt;/td&gt;
&lt;td&gt;Driver/navigator modes for collaborative AI coding&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Quality&lt;/td&gt;
&lt;td&gt;verification-quality&lt;/td&gt;
&lt;td&gt;Truth scoring, automatic rollback on quality failures&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Hooks&lt;/td&gt;
&lt;td&gt;hooks-automation&lt;/td&gt;
&lt;td&gt;Pre/post task hooks, session management, learning integration&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Skill Builder&lt;/td&gt;
&lt;td&gt;skill-builder&lt;/td&gt;
&lt;td&gt;Meta-skill: creates new skills from patterns it observes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Stream Chain&lt;/td&gt;
&lt;td&gt;stream-chain&lt;/td&gt;
&lt;td&gt;Multi-agent pipelines, data transformation, sequential workflows&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The skill that surprises people most: &lt;strong&gt;skill-builder&lt;/strong&gt;. It's a meta-skill that can create new skills. When Claude notices a repeated pattern in our development process, it can generate a new skill to handle that pattern automatically.&lt;/p&gt;

&lt;h2&gt;
  
  
  SuperClaude: 30+ Commands for Everything
&lt;/h2&gt;

&lt;p&gt;On top of skills, we run the SuperClaude framework — a set of slash commands that activate specialized behaviors:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;/sc:analyze&lt;/strong&gt; — Deep code analysis across quality, security, performance, architecture&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;/sc:implement&lt;/strong&gt; — Feature implementation with intelligent persona activation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;/sc:brainstorm&lt;/strong&gt; — Requirements discovery via Socratic dialogue&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;/sc:design&lt;/strong&gt; — System architecture and API design&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;/sc:test&lt;/strong&gt; — Testing with coverage analysis and automated reporting&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;/sc:workflow&lt;/strong&gt; — Generate implementation workflows from PRDs&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;/sc:troubleshoot&lt;/strong&gt; — Issue diagnosis and resolution&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;/sc:git&lt;/strong&gt; — Git operations with smart commit messages&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;/sc:pm&lt;/strong&gt; — Project manager agent for orchestration&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The power is in chaining them. A typical feature build looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/sc:brainstorm "new learn section for educational guides"
→ /sc:design (architecture from brainstorm output)
→ /sc:implement (code from design)
→ /sc:test (validate implementation)
→ /sc:analyze --focus security (security review)
→ /pr (commit, push, create PR)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each command activates specific personas and tools. &lt;code&gt;/sc:analyze&lt;/code&gt; might activate security, performance, and architecture personas simultaneously, each providing domain-specific feedback.&lt;/p&gt;

&lt;h2&gt;
  
  
  claude-flow MCP: The Backbone
&lt;/h2&gt;

&lt;p&gt;Everything runs through one MCP server: &lt;strong&gt;claude-flow&lt;/strong&gt;. Here's our config:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mcpServers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"claude-flow"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"npx"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"args"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"-y"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"@claude-flow/cli@latest"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"mcp"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"start"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"env"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"CLAUDE_FLOW_MODE"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"v3"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"CLAUDE_FLOW_HOOKS_ENABLED"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"true"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"CLAUDE_FLOW_TOPOLOGY"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"hierarchical-mesh"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"CLAUDE_FLOW_MAX_AGENTS"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"15"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"CLAUDE_FLOW_MEMORY_BACKEND"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"hybrid"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Key configuration choices:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;hierarchical-mesh topology&lt;/strong&gt; — Agents have a coordinator but can also communicate peer-to-peer. Best of both worlds: structure without bottlenecks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;hybrid memory&lt;/strong&gt; — Combines fast in-memory storage with persistent disk-backed memory. Agents remember patterns across sessions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;15 max agents&lt;/strong&gt; — Our tested sweet spot. Beyond this, coordination overhead exceeds the parallelism benefit.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;hooks enabled&lt;/strong&gt; — Pre-task and post-task hooks that log patterns, validate outputs, and trigger learning.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Self-Learning System
&lt;/h2&gt;

&lt;p&gt;This is the part that excites me most. Our hooks system doesn't just automate — it &lt;strong&gt;learns&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;We have 40+ helper scripts that run at various points in the development cycle:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;intelligence.cjs&lt;/strong&gt; — Tracks patterns in tool usage, code changes, and outcomes. Learns which approaches work for which types of tasks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;learning-optimizer.sh&lt;/strong&gt; — Adjusts model routing based on task complexity. Simple tasks get routed to faster models; complex tasks to Opus.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;pattern-consolidator.sh&lt;/strong&gt; — Periodically consolidates learned patterns, removing noise and strengthening reliable insights.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;security-scanner.sh&lt;/strong&gt; — Runs after every edit, catching vulnerabilities before they reach git.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;checkpoint-manager.sh&lt;/strong&gt; — Creates checkpoints during complex multi-agent tasks so we can rollback if something goes wrong.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The result: the system today is measurably better at routing tasks, suggesting approaches, and catching issues than it was two weeks ago. It learns from every session.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The most powerful development tool isn't any single model or skill — it's the feedback loop between them. When your AI assistant learns from its own mistakes and successes, you stop repeating problems and start compounding insights.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  How to Set This Up Yourself
&lt;/h2&gt;

&lt;p&gt;If you want a similar setup, here's the fastest path:&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Install claude-flow
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;claude mcp add claude-flow &lt;span class="nt"&gt;--&lt;/span&gt; npx &lt;span class="nt"&gt;-y&lt;/span&gt; @claude-flow/cli@latest mcp start
npx @claude-flow/cli@latest init &lt;span class="nt"&gt;--wizard&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 2: Start with 3 skills, not 30
&lt;/h3&gt;

&lt;p&gt;Don't install everything at once. Start with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;sparc-methodology&lt;/strong&gt; — Gives structure to your development&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;swarm-orchestration&lt;/strong&gt; — Enables parallel agents&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;verification-quality&lt;/strong&gt; — Catches quality issues&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Step 3: Enable hooks
&lt;/h3&gt;

&lt;p&gt;The hooks are where the learning happens. Enable them in your MCP config and let them run for a few sessions before tuning.&lt;/p&gt;
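
&lt;p&gt;In a claude-flow setup this is just an environment flag on the MCP server entry (the variable name matches the full config shown earlier; treat this fragment as a sketch, not the complete file):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;{
  "mcpServers": {
    "claude-flow": {
      "env": { "CLAUDE_FLOW_HOOKS_ENABLED": "true" }
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;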

&lt;h3&gt;
  
  
  Step 4: Add skills as you need them
&lt;/h3&gt;

&lt;p&gt;Add skills when you hit a specific problem. Need code review automation? Add github-code-review. Don't install skills speculatively.&lt;/p&gt;

&lt;p&gt;Total setup time: about 30 minutes. ROI starts from the first complex feature you build.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://aumiqx.com/learn/claude-code-skills-mcp-guide/" rel="noopener noreferrer"&gt;aumiqx.com&lt;/a&gt;. Follow the build on &lt;a href="https://linkedin.com/in/axitchaudhary" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>claudecode</category>
      <category>mcp</category>
      <category>aitools</category>
      <category>developertools</category>
    </item>
    <item>
      <title>Meet Buddy: Open Source AI Meeting Assistant That Doesn't Record Your Audio</title>
      <dc:creator>Axit</dc:creator>
      <pubDate>Wed, 25 Mar 2026 23:39:54 +0000</pubDate>
      <link>https://dev.to/axitslab/meet-buddy-open-source-ai-meeting-assistant-that-doesnt-record-your-audio-10f0</link>
      <guid>https://dev.to/axitslab/meet-buddy-open-source-ai-meeting-assistant-that-doesnt-record-your-audio-10f0</guid>
      <description>&lt;h2&gt;
  
  
  Why We Built Another AI Meeting Assistant (And Why It's Different)
&lt;/h2&gt;

&lt;p&gt;There are dozens of AI meeting assistants in 2026 — &lt;strong&gt;Otter.ai&lt;/strong&gt;, &lt;strong&gt;Fireflies.ai&lt;/strong&gt;, &lt;strong&gt;Fathom&lt;/strong&gt;, &lt;strong&gt;tl;dv&lt;/strong&gt;, &lt;strong&gt;Tactiq&lt;/strong&gt;, &lt;strong&gt;Final Round AI&lt;/strong&gt;. They all do roughly the same thing: join your call as a bot, record audio, transcribe it with Whisper or a similar model, and give you a summary.&lt;/p&gt;

&lt;p&gt;They work. But they have three problems that developers specifically hate:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;They record audio.&lt;/strong&gt; That's a privacy issue in regulated industries, with cautious clients, and in cultures where recording consent is nuanced. India, where we're based, runs largely on trust-based business relationships — dropping a recording bot into a call changes the dynamic.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;They live in their own SaaS dashboard.&lt;/strong&gt; Your meeting notes are in Otter's cloud. Your code is in VS Code. Your tasks are in Linear or GitHub Issues. Three separate places for information that should flow together.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;They don't connect to your development workflow.&lt;/strong&gt; You get a transcript. Then you manually extract action items. Then you manually create tickets. Then you manually search your codebase for relevant code. Every "manually" is a leak.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;We wanted an &lt;strong&gt;open source AI meeting assistant&lt;/strong&gt; that solves all three. No audio recording. Data goes to GitHub. And it feeds directly into Claude Code — the same tool we already use for development — through the &lt;a href="https://modelcontextprotocol.io/" rel="noopener noreferrer"&gt;Model Context Protocol (MCP)&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The result is &lt;strong&gt;Meet Buddy&lt;/strong&gt;: a Chrome extension that captures live Google Meet captions, pushes them to a GitHub repo you own, and exposes the data to Claude Code through an MCP server — where AI agents can analyze the transcript, search your codebase for solutions, and generate implementation plans while the meeting is still fresh.&lt;/p&gt;

&lt;p&gt;No API keys. No cloud transcription service. No monthly subscription. Just a Chrome extension, a Git repo, and the AI coding tool you already have open.&lt;/p&gt;

&lt;h2&gt;
  
  
  Meet Buddy vs Otter.ai vs Fireflies vs Fathom: How It Compares
&lt;/h2&gt;

&lt;p&gt;Let's be honest about what Meet Buddy is and isn't. It's not a replacement for Otter.ai if you're a sales team that needs CRM integration and speaker analytics across 500 calls. It's built for &lt;strong&gt;developers and small teams&lt;/strong&gt; who want their meeting data in their existing workflow — not a separate dashboard.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;Meet Buddy&lt;/th&gt;
&lt;th&gt;Otter.ai&lt;/th&gt;
&lt;th&gt;Fireflies&lt;/th&gt;
&lt;th&gt;Fathom&lt;/th&gt;
&lt;th&gt;Meetily&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Audio recording&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;No&lt;/strong&gt; (captions only)&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes (local)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Open source&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Yes (MIT)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Data storage&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Your GitHub repo&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Otter cloud&lt;/td&gt;
&lt;td&gt;Fireflies cloud&lt;/td&gt;
&lt;td&gt;Fathom cloud&lt;/td&gt;
&lt;td&gt;Local disk&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Developer workflow&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;MCP + Claude Code&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;API&lt;/td&gt;
&lt;td&gt;API&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;AI agent analysis&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;5-agent swarm&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Built-in summary&lt;/td&gt;
&lt;td&gt;Built-in summary&lt;/td&gt;
&lt;td&gt;Built-in summary&lt;/td&gt;
&lt;td&gt;LLM summary&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Monthly cost&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;$0 (self-hosted)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;$8-40/mo&lt;/td&gt;
&lt;td&gt;$15-39/mo&lt;/td&gt;
&lt;td&gt;$29/mo (teams)&lt;/td&gt;
&lt;td&gt;$0 (self-hosted)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Works without internet&lt;/td&gt;
&lt;td&gt;No (needs GitHub)&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Yes&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Google Meet support&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Yes&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Zoom/Teams support&lt;/td&gt;
&lt;td&gt;Not yet&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The closest open source alternative is &lt;strong&gt;Meetily&lt;/strong&gt; (formerly meetily.ai) — a self-hosted meeting transcription tool using local Whisper models. It's excellent for privacy-first audio transcription. But it doesn't connect to your development workflow. Your transcript lives in Meetily's UI, not in your GitHub repo or Claude Code session.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TranscripTonic&lt;/strong&gt; is another open source Chrome extension that captures Google Meet captions — similar to our approach. But it downloads transcripts as files. It doesn't push to GitHub, doesn't have an MCP server, and doesn't feed into AI agents.&lt;/p&gt;

&lt;p&gt;Meet Buddy's unique angle: &lt;strong&gt;the entire pipeline from meeting to code is automated&lt;/strong&gt;. Caption → GitHub → MCP → Claude Code → agent swarm → implementation plan. No copy-pasting. No switching tabs. No "let me check my meeting notes."&lt;/p&gt;

&lt;h2&gt;
  
  
  How the Open Source Meeting Transcription Pipeline Works
&lt;/h2&gt;

&lt;p&gt;The architecture is deliberately simple. Data flows one direction — from Google Meet to your code editor — through four components, each independently useful:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Chrome Extension (Manifest V3)
&lt;/h3&gt;

&lt;p&gt;The extension runs a content script on &lt;code&gt;meet.google.com&lt;/code&gt; that polls the DOM every 500ms for caption elements. Google Meet renders captions as obfuscated &lt;code&gt;div&lt;/code&gt; elements — class names like &lt;code&gt;nMcdL&lt;/code&gt; and &lt;code&gt;ygicle&lt;/code&gt; that change periodically. The scraper uses these as primary selectors with a structural fallback that finds captions by screen position and DOM pattern (bottom 40% of viewport, contains avatar image + name span + text).&lt;/p&gt;

&lt;p&gt;Captions go through a &lt;strong&gt;4-second stabilization window&lt;/strong&gt; to deduplicate. Google Meet updates captions in-place as the speaker talks — "I think" becomes "I think we should" becomes "I think we should focus on the API." Without dedup, you'd get 7 entries for one sentence. With it, you get one clean line after the speaker pauses.&lt;/p&gt;
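
&lt;p&gt;The stabilization logic boils down to a small state machine. Here is an illustrative sketch (not the extension's actual source; &lt;code&gt;createStabilizer&lt;/code&gt; and its shape are assumptions):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;// Sketch of the 4-second stabilization window. Google Meet rewrites a
// caption in place as the speaker talks, so we keep only the latest
// version of a line and emit it once it stops changing.
function createStabilizer(windowMs) {
  let pending = null; // the caption currently being rewritten in place
  const emitted = [];

  function push(text, now) {
    if (!pending) {
      pending = { text, updatedAt: now };
    } else if (text !== pending.text) {
      if (text.startsWith(pending.text)) {
        pending = { text, updatedAt: now }; // same sentence, still growing
      } else {
        emitted.push(pending.text); // speaker moved on: settle the old line
        pending = { text, updatedAt: now };
      }
    } else if (now - pending.updatedAt &amp;gt;= windowMs) {
      emitted.push(pending.text); // unchanged for the full window: emit once
      pending = null;
    }
    return emitted.slice();
  }

  function flush() { // called on session end
    if (pending) { emitted.push(pending.text); pending = null; }
    return emitted.slice();
  }

  return { push, flush };
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Fed the caption text on every poll tick, it emits "I think we should focus on the API" exactly once, after the speaker pauses for the full window.&lt;/p&gt;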

&lt;p&gt;A &lt;strong&gt;UI junk filter&lt;/strong&gt; strips out toolbar labels that Google Meet renders as text in the same DOM region as captions: "frame_person", "more_vert", "backgrounds and effects" — caught by a blocklist and a camelCase heuristic.&lt;/p&gt;
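
&lt;p&gt;A minimal sketch of that filter (the real blocklist is longer; &lt;code&gt;isUiJunk&lt;/code&gt; is an illustrative name, not the extension's):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;// Sketch of the UI junk filter: a blocklist plus a heuristic for single
// camelCase/snake_case tokens, which are Material icon names or toolbar
// labels rather than spoken captions.
const UI_JUNK = new Set(["frame_person", "more_vert", "backgrounds and effects"]);

function isUiJunk(text) {
  const t = text.trim();
  if (UI_JUNK.has(t.toLowerCase())) return true;
  return /^[a-z]+([A-Z][a-z]*)+$/.test(t) || /^[a-z]+(_[a-z]+)+$/.test(t);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;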

&lt;h3&gt;
  
  
  2. GitHub Sync (Contents API)
&lt;/h3&gt;

&lt;p&gt;The service worker batches caption chunks and pushes to your GitHub repo every 15 seconds via the Contents API. Each meeting gets its own folder:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;meetings/2026-03-19-client-call/
  ├── meta.json           (title, start/end time, word count)
  ├── transcript.md       (timestamped speaker + text)
  └── screenshots/
      ├── 001-architecture.jpg    (65% JPEG, ~70KB)
      └── 002-error-screen.jpg
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
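
&lt;p&gt;One push boils down to a single &lt;code&gt;PUT&lt;/code&gt; per file. A hedged sketch of the request shape: the endpoint is GitHub's documented Contents API, but &lt;code&gt;buildContentsPut&lt;/code&gt; is illustrative, and the extension's service worker would use &lt;code&gt;btoa&lt;/code&gt; rather than Node's &lt;code&gt;Buffer&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;// Sketch of one Contents API write. GitHub expects the file body
// base64-encoded, and updates to an existing file must include the
// current blob sha.
function buildContentsPut(owner, repo, path, markdown, sha) {
  const body = {
    message: "meet-buddy: update " + path,
    content: Buffer.from(markdown, "utf8").toString("base64"),
  };
  if (sha) body.sha = sha; // omitted when creating a new file
  return {
    method: "PUT",
    url: "https://api.github.com/repos/" + owner + "/" + repo + "/contents/" + path,
    body: body,
  };
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;GitHub rejects an update to an existing file without its current &lt;code&gt;sha&lt;/code&gt;, so the sync layer has to track the last blob SHA per file between flushes.&lt;/p&gt;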



&lt;p&gt;Authentication uses OAuth Device Flow — you enter a code on github.com once, and the extension has push access to all your repos. No PAT management, no per-org installation.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. MCP Server (TypeScript)
&lt;/h3&gt;

&lt;p&gt;A stdio-transport MCP server that exposes 7 tools to Claude Code: &lt;code&gt;meeting_list&lt;/code&gt;, &lt;code&gt;meeting_active&lt;/code&gt;, &lt;code&gt;meeting_transcript&lt;/code&gt;, &lt;code&gt;meeting_screenshots&lt;/code&gt;, &lt;code&gt;meeting_meta&lt;/code&gt;, &lt;code&gt;meeting_notes&lt;/code&gt;, &lt;code&gt;meeting_sync&lt;/code&gt;. The &lt;code&gt;meeting_sync&lt;/code&gt; tool does a sparse Git checkout — only the &lt;code&gt;meetings/&lt;/code&gt; folder, not your entire repo.&lt;/p&gt;
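
&lt;p&gt;To make the tool shape concrete, here is a sketch of what a &lt;code&gt;meeting_list&lt;/code&gt; handler could return from the sparse checkout. The field names (&lt;code&gt;title&lt;/code&gt;, &lt;code&gt;wordCount&lt;/code&gt;) are assumptions, not the repo's actual schema:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;// Sketch of a meeting_list handler: one folder per meeting, each with a
// meta.json. Folder names are date-prefixed, so a reverse string sort
// gives newest-first ordering.
function listMeetings(entries) {
  return entries
    .map(function (e) {
      const meta = JSON.parse(e.meta);
      return { id: e.folder, title: meta.title, words: meta.wordCount };
    })
    .sort(function (a, b) { return b.id.localeCompare(a.id); });
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;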

&lt;p&gt;The &lt;code&gt;meeting_screenshots&lt;/code&gt; tool returns base64 image data, so Claude Code can actually &lt;em&gt;view&lt;/em&gt; the screenshots in the conversation. During our test, Claude described a screenshot showing "Earth Clique's avatar, captions at the bottom, Meet Buddy overlay showing Recording + 62 words" — it was reading the JPEG we'd just captured.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Agent Swarm (claude-flow)
&lt;/h3&gt;

&lt;p&gt;Five specialized agents coordinated through shared memory:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Watcher&lt;/strong&gt; — polls the GitHub repo, stores new transcript chunks in shared memory&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Analyst&lt;/strong&gt; — extracts pain points, action items, emotional signals, unanswered questions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Code Reviewer&lt;/strong&gt; — maps identified problems to existing code in your project&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Brainstormer&lt;/strong&gt; — generates feature ideas and creative solutions based on the discussion&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Planner&lt;/strong&gt; — creates a prioritized implementation plan with specific file paths and estimated effort&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You come back from the meeting. You say "what did you find?" You get a full report. You say "implement." Claude starts coding.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why We Didn't Use Any Transcription API — And Why That Matters
&lt;/h2&gt;

&lt;p&gt;Every meeting transcription tool in the market either records audio and runs it through Whisper/Deepgram/AssemblyAI, or uses a meeting bot service like Recall.ai. That means your audio goes to a server — theirs or yours.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Meet Buddy doesn't touch audio at all.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We use Google Meet's own built-in caption feature. Google is already transcribing the audio for the captions you see on screen. We just read those captions from the DOM. No audio capture. No speech-to-text API. No server processing.&lt;/p&gt;

&lt;p&gt;This has real consequences:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Privacy&lt;/strong&gt; — No audio recording means no recording consent issues. In India, where we operate, client calls often involve sensitive business discussions. Telling someone "we're recording this" changes the conversation. Reading captions doesn't.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cost&lt;/strong&gt; — Zero API costs. Whisper API is $0.006/min, Deepgram is $0.0043/min. A recurring 30-minute daily standup adds up to $4-5/month in transcription fees alone. We pay nothing because Google is doing the transcription anyway.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Simplicity&lt;/strong&gt; — No audio pipeline to maintain. No Whisper model to host. No WebSocket connections for streaming audio. Just a content script that reads the DOM.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Speed&lt;/strong&gt; — Captions appear in real-time. There's no "processing your recording" delay. The transcript is available the moment the words are spoken.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The tradeoff: we depend on Google Meet's caption quality. It's good but not perfect — proper nouns get mangled, heavy accents reduce accuracy, and it doesn't handle code-switching (mixing Hindi and English in the same sentence) as well as dedicated multilingual models. Our test session in Hinglish produced readable but imperfect transcripts.&lt;/p&gt;

&lt;p&gt;For developer meetings where the goal is capturing requirements, decisions, and action items — not legal-grade transcription — it's more than good enough.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Claude Code Advantage: Your AI Already Knows Your Codebase
&lt;/h2&gt;

&lt;p&gt;Here's the insight that makes Meet Buddy different from every other meeting tool: &lt;strong&gt;Claude Code already has your project context.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When you're working in Claude Code, it knows your file structure, your tech stack, your recent changes, your CLAUDE.md instructions. When a client says "the checkout page is broken on mobile," Claude Code can immediately search your codebase for the checkout component, check recent git changes, and draft a fix.&lt;/p&gt;

&lt;p&gt;Traditional meeting tools give you a transcript. Then you switch to your IDE. Then you search for relevant code. Then you write tasks. Every step is manual.&lt;/p&gt;

&lt;p&gt;With Meet Buddy + MCP, the flow is:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Client mentions a problem on the call&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Meet Buddy captures it as text, pushes to GitHub&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;MCP server makes it available to Claude Code&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Claude Code — which is already open with your project loaded — can immediately search your codebase for the relevant code&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The agent swarm generates an implementation plan with specific file paths and line numbers&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;No context switching. No "let me find that in the codebase." The meeting flows directly into development because the same AI tool handles both.&lt;/p&gt;

&lt;p&gt;This is only possible because we built Meet Buddy as an &lt;strong&gt;MCP server&lt;/strong&gt;, not a standalone SaaS app. The Model Context Protocol lets Claude Code call &lt;code&gt;meeting_transcript&lt;/code&gt; the same way it calls &lt;code&gt;git log&lt;/code&gt; or reads a file — it's a native part of the development workflow.&lt;/p&gt;

&lt;h2&gt;
  
  
  We Built It and Tested It in the Same Session — Here's What Actually Happened
&lt;/h2&gt;

&lt;p&gt;Here's the thing about build stories — most of them are written after the fact, cleaned up, and made to sound smooth. This one happened in real-time, with real bugs, real frustration, and real conversations captured by the tool itself.&lt;/p&gt;

&lt;p&gt;We built Meet Buddy in a single Claude Code session — about 3 hours from idea to working prototype. Then, without closing the session, we jumped on a Google Meet call with a friend (Earth Clique) to test it live.&lt;/p&gt;

&lt;p&gt;The first thing that happened? &lt;strong&gt;The extension showed 0 words.&lt;/strong&gt; The caption scraper couldn't find Google Meet's caption container. We opened Chrome DevTools mid-call, inspected the DOM, found the exact class names (&lt;code&gt;nMcdL&lt;/code&gt;, &lt;code&gt;NWpY1d&lt;/code&gt;, &lt;code&gt;ygicle&lt;/code&gt;), updated the scraper code, reloaded the extension — all while still on the call. That's what building with Claude Code looks like: you don't stop the conversation, you fix the code in parallel.&lt;/p&gt;

&lt;p&gt;The second thing: &lt;strong&gt;deduplication was completely broken.&lt;/strong&gt; Google Meet updates captions word by word. Our v1 scraper captured every intermediate state. One sentence — "Hello, can you hear me?" — generated 8 transcript entries. We rewrote the dedup logic three times during the call. Third time worked.&lt;/p&gt;

&lt;p&gt;Then the auth issue. We'd set up a GitHub App for authentication. It worked — but only showed repos where the app was installed. "Bro, why aren't my aumiqx repos showing here?" We switched to an OAuth App mid-call. All repos appeared. Lesson: GitHub Apps are for specific installations; OAuth Apps are for user-level access across all repos.&lt;/p&gt;

&lt;p&gt;After fixing all three issues, the word count started climbing: 90 words... 210... 500... 704 words and 1 screenshot by the 5-minute mark. Data was flowing from Google Meet → Chrome Extension → GitHub → and Claude Code could read it through the MCP server.&lt;/p&gt;

&lt;p&gt;Then the meta moment happened. We asked Claude to analyze the transcript mid-call. Claude reported: &lt;em&gt;"Earth Clique is talking about a PHP error — request too large, max 20 MB."&lt;/em&gt; It was right. Earth Clique had mentioned a PHP upload limit on another project. Claude caught it from the live transcript while we were still talking.&lt;/p&gt;

&lt;p&gt;We read Claude's analysis back to Earth Clique. Google Meet transcribed us reading the analysis. The transcription got pushed to GitHub. Claude analyzed its own analysis being discussed. Earth Clique's response to this recursion:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Apni poochh, khud hi khaega kyon?" — Will it eat its own tail?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That became the project's unofficial tagline: the tool that tests itself, analyzes itself, and improves itself through its own pipeline.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The results from our live test:&lt;/strong&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;Session 1 (24 min)&lt;/th&gt;
&lt;th&gt;Session 2 (8 min)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Words captured&lt;/td&gt;
&lt;td&gt;1,415&lt;/td&gt;
&lt;td&gt;906&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Screenshots&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Transcript lines&lt;/td&gt;
&lt;td&gt;36&lt;/td&gt;
&lt;td&gt;35&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GitHub pushes&lt;/td&gt;
&lt;td&gt;~15&lt;/td&gt;
&lt;td&gt;~8&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Agent reports generated&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;


&lt;h2&gt;
  
  
  What Didn't Work and What We're Fixing
&lt;/h2&gt;

&lt;p&gt;We're not going to pretend it was flawless. Here's what broke:&lt;/p&gt;

&lt;h3&gt;
  
  
  Long-speech deduplication
&lt;/h3&gt;

&lt;p&gt;Google Meet keeps one caption block for continuous speech and appends to it. A 30-second monologue generates one growing block that gets captured at every flush. The 4-second stabilization window helps for sentence-level speech, but long, uninterrupted talking still produces partial duplicates. We're adding a minimum delta threshold and smarter prefix diffing.&lt;/p&gt;
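
&lt;p&gt;The planned prefix diff can be sketched as a pure function (illustrative, not the shipped code):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;// Sketch of the planned fix: diff a growing caption block against its
// last flushed state and keep only the words added since then.
function prefixDelta(previous, current, minDeltaWords) {
  if (!current.startsWith(previous)) return current; // a different block entirely
  const delta = current.slice(previous.length).trim();
  const words = delta.split(/\s+/).filter(Boolean).length;
  return words &amp;gt;= minDeltaWords ? delta : ""; // minimum delta threshold
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;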

&lt;h3&gt;
  
  
  Buffer loss on session end
&lt;/h3&gt;

&lt;p&gt;The caption buffer flushes every 15 seconds. If you click "End Session" 5 seconds after the last flush, those 5 seconds of captions are lost. We've added a forced flush on session end with a 2-second delay before finalizing.&lt;/p&gt;
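
&lt;p&gt;The fix is small. A sketch, with names assumed for illustration (&lt;code&gt;delayMs&lt;/code&gt; would be 2000 in the extension):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;// Sketch of the session-end fix: wait briefly for the last in-flight
// captions to stabilize, then force a flush instead of waiting for the
// next 15-second timer tick.
async function endSession(buffer, pushToGitHub, delayMs) {
  await new Promise(function (resolve) { setTimeout(resolve, delayMs); });
  if (buffer.length) {
    await pushToGitHub(buffer.splice(0)); // splice(0) drains the buffer
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;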

&lt;h3&gt;
  
  
  Real-time latency
&lt;/h3&gt;

&lt;p&gt;Google Meet captions update in milliseconds. Our pipeline has inherent latency: 4s stabilization + 15s buffer + GitHub API roundtrip + git pull on the Claude Code side. Total: ~20-30 seconds from spoken word to Claude seeing it. Fine for post-meeting analysis. Not fine for live-during-the-call AI suggestions.&lt;/p&gt;

&lt;h3&gt;
  
  
  Agent coordination
&lt;/h3&gt;

&lt;p&gt;Claude Code sessions are built for human interaction, not autonomous agent orchestration. Our agents ran as background tasks that completed and stopped instead of looping for the full meeting duration. The fix: event-driven architecture where agents spawn on-demand when new data arrives, rather than trying to keep them alive for 30 minutes.&lt;/p&gt;

&lt;p&gt;The honest summary: Meet Buddy v1 is a working prototype that proves the concept. It captured 2,300+ words across two sessions, generated a 466-line implementation plan, and identified every pain point discussed — all without recording a single second of audio. The infrastructure needs work. The pipeline is solid.&lt;/p&gt;

&lt;p&gt;During the call, I told Claude straight up: &lt;em&gt;"Claude sessions are for humans, not for automations, but we are doing it just for the jugaad purposes."&lt;/em&gt; ("Jugaad" is a Hindi word for a clever hack.) Claude agreed. The response was honest: &lt;em&gt;"I was reactive when I should have been proactive. You had to keep prompting me to check things, spawn agents, fix issues. The whole point of this tool is that I work autonomously while you're on the call — and instead you were babysitting me AND doing the call."&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;That kind of self-awareness from an AI is why we use Claude Code. It doesn't pretend things worked when they didn't. It identifies its own failures and proposes fixes. That's a real collaboration, not a prompt-and-pray workflow.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to Set Up Meet Buddy (5 Minutes)
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Step 1: Clone and load the Chrome extension
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/aumiqx/meet-buddy.git
&lt;span class="nb"&gt;cd &lt;/span&gt;meet-buddy/extension/icons &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; bash generate-icons.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Open Chrome → &lt;code&gt;chrome://extensions&lt;/code&gt; → Developer mode → Load unpacked → select the &lt;code&gt;extension/&lt;/code&gt; folder.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Create a GitHub OAuth App
&lt;/h3&gt;

&lt;p&gt;Go to &lt;a href="https://github.com/settings/developers" rel="noopener noreferrer"&gt;github.com/settings/developers&lt;/a&gt; → OAuth Apps → New. Name it "Meet Buddy", set homepage to your URL, callback to &lt;code&gt;https://github.com&lt;/code&gt;, and check &lt;strong&gt;Enable Device Flow&lt;/strong&gt;. Copy the Client ID.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3: Authenticate
&lt;/h3&gt;

&lt;p&gt;Click the Meet Buddy icon in Chrome → paste the Client ID → click Authenticate → enter the device code on GitHub. Done — the extension now has push access to all your repos.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 4: Build and connect the MCP server
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;cd meet-buddy/mcp-server &amp;amp;&amp;amp; npm install &amp;amp;&amp;amp; npm run build&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Add to your Claude Code &lt;code&gt;.mcp.json&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mcpServers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"meet-buddy"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"node"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"args"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"/path/to/meet-buddy/mcp-server/dist/index.js"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 5: Use it
&lt;/h3&gt;

&lt;p&gt;Join a Google Meet call → enable captions (CC button or press &lt;code&gt;c&lt;/code&gt;) → click Meet Buddy → select your repo → Start Session. When the call ends, ask Claude Code: &lt;em&gt;"Sync and analyze the latest meeting."&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The MCP server will pull the transcript, Claude will read it, and you'll get a structured analysis — pain points, action items, and code-mapped solutions — without having recorded a single second of audio.&lt;/p&gt;

&lt;h2&gt;
  
  
  Open Source Roadmap: What's Coming in v2 and v3
&lt;/h2&gt;

&lt;p&gt;Meet Buddy is MIT licensed and open source. Here's what's planned:&lt;/p&gt;

&lt;h3&gt;
  
  
  v2 — Real-Time Infrastructure (Q2 2026)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;WebSocket/SSE transport&lt;/strong&gt; replacing git polling for sub-second transcript delivery&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Browser-based agent dashboard&lt;/strong&gt; — monitor all running agents, restart dead ones, send commands without leaving the meeting&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;chokidar filesystem watcher&lt;/strong&gt; in the MCP server for near-real-time updates to Claude Code&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Auto-start recording&lt;/strong&gt; when joining a Google Meet call (opt-in setting)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Immediate buffer flush&lt;/strong&gt; on session end — no more lost captions&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  v3 — Platform Expansion (Q3-Q4 2026)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Zoom and Microsoft Teams support&lt;/strong&gt; — different DOM structure, same MCP pipeline&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Speaker diarization&lt;/strong&gt; with time tracking and talk-time analytics&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Auto-action items&lt;/strong&gt; → individual GitHub Issues created from sentences starting with "we should", "let's", "can you"&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Canvas-based screenshot annotation&lt;/strong&gt; — draw arrows, highlight regions, add labels before capture&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Offline-first mode&lt;/strong&gt; — buffer everything locally, sync to GitHub when connectivity returns&lt;/li&gt;
&lt;/ul&gt;
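
&lt;p&gt;The action-item extractor can be sketched with exactly those trigger phrases (illustrative only; the shipped heuristic will likely be smarter):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;// Sketch of the planned auto-action-item pass over transcript lines.
const TRIGGERS = /^(we should|let's|can you)\b/i;

function extractActionItems(lines) {
  return lines
    .map(function (line) { return line.trim(); })
    .filter(function (line) { return TRIGGERS.test(line); });
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;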

&lt;p&gt;The core extension + MCP server will always be free and open source. We may offer a managed version with the agent dashboard, team analytics, and enterprise integrations — but the pipeline from meeting to code will never be paywalled.&lt;/p&gt;

&lt;p&gt;Meet Buddy is MIT licensed. &lt;a href="https://github.com/aumiqx/meet-buddy" rel="noopener noreferrer"&gt;Star it, fork it, break it, fix it.&lt;/a&gt; Contributions welcome — especially if you know Zoom or Teams' DOM structure.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://aumiqx.com/learn/meet-buddy-real-time-meeting-copilot/" rel="noopener noreferrer"&gt;aumiqx.com&lt;/a&gt;. Follow the build on &lt;a href="https://linkedin.com/in/axitchaudhary" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>meetbuddy</category>
      <category>aimeetingassistant</category>
      <category>opensourcemeetingtranscription</category>
      <category>googlemeetchromeextension</category>
    </item>
  </channel>
</rss>
