<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: prats-2311</title>
    <description>The latest articles on DEV Community by prats-2311 (@prats2311).</description>
    <link>https://dev.to/prats2311</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1395342%2F60bb25fa-02f2-4fc7-8fb1-7536ffb3a727.png</url>
      <title>DEV Community: prats-2311</title>
      <link>https://dev.to/prats2311</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/prats2311"/>
    <language>en</language>
    <item>
      <title>How I Built a Real-Time ASL Tutor That Sees Your Hands — Using Gemini Live API</title>
      <dc:creator>prats-2311</dc:creator>
      <pubDate>Mon, 16 Mar 2026 18:41:46 +0000</pubDate>
      <link>https://dev.to/prats2311/how-i-built-a-real-time-asl-tutor-that-sees-your-hands-using-gemini-live-api-504a</link>
      <guid>https://dev.to/prats2311/how-i-built-a-real-time-asl-tutor-that-sees-your-hands-using-gemini-live-api-504a</guid>
      <description>&lt;p&gt;I watched CODA last year. If you haven't seen it — it's about a hearing girl raised by deaf parents, and her journey navigating two worlds. I walked away from that film with one question stuck in my head:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why is there a Duolingo for French, Spanish, and Japanese — but nothing that can actually &lt;em&gt;watch&lt;/em&gt; you sign, and tell you if you got it right?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Not a video library. Not a chatbot. Something that sees your hands.&lt;/p&gt;

&lt;p&gt;I spent the last month finding out if that was possible. The result is &lt;strong&gt;SignSensei&lt;/strong&gt; — a real-time ASL tutor powered by Google Gemini Live 2.5 Flash.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;I created this project for the purposes of entering the Gemini Live Agent Challenge hackathon. #GeminiLiveAgentChallenge&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;The Problem With Every Existing ASL Tool&lt;/h2&gt;

&lt;p&gt;Every ASL learning app I found does one of two things:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Shows you a video of someone signing&lt;/li&gt;
&lt;li&gt;Shows you a diagram of hand positions&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Neither one can tell you if &lt;strong&gt;your&lt;/strong&gt; hands are right. They're passive. You watch, you guess, you hope. There's no feedback loop.&lt;/p&gt;

&lt;p&gt;This isn't a product gap — it's a technical gap. Until recently, no AI could handle continuous, real-time vision AND bidirectional voice simultaneously, with low enough latency for a natural tutoring conversation.&lt;/p&gt;

&lt;p&gt;Then Gemini Live launched.&lt;/p&gt;




&lt;h2&gt;Why Gemini Live Changes Everything&lt;/h2&gt;

&lt;p&gt;Gemini Live 2.5 Flash was the only model I found that can do all of the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Watch your camera &lt;strong&gt;continuously&lt;/strong&gt; (not one-shot images)&lt;/li&gt;
&lt;li&gt;Listen to your voice in real time&lt;/li&gt;
&lt;li&gt;Respond with natural spoken audio&lt;/li&gt;
&lt;li&gt;Handle interruptions&lt;/li&gt;
&lt;li&gt;Call tools to trigger structured actions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is the specific combination SignSensei needed. Every other approach I considered had a fatal flaw:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Approach&lt;/th&gt;
&lt;th&gt;Problem&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Vision API (batch)&lt;/td&gt;
&lt;td&gt;Too slow — 2-3 sec latency, no voice&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ChatGPT Vision&lt;/td&gt;
&lt;td&gt;No real-time stream, no audio&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Gemini Flash (non-Live)&lt;/td&gt;
&lt;td&gt;Turn-based, not conversational&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Gemini Live 2.5 Flash&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;✅ Real-time, bidirectional, vision + voice&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;The Architecture — What I Learned the Hard Way&lt;/h2&gt;

&lt;h3&gt;1. Direct WebSocket Connection (Ephemeral Token Pattern)&lt;/h3&gt;

&lt;p&gt;The most important architectural decision: &lt;strong&gt;the browser connects directly to Vertex AI&lt;/strong&gt;, not through the backend.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// frontend/src/features/live-session/hooks/useGeminiLive.ts&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;baseUrl&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/api/token`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="c1"&gt;// Backend mints token&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;WS_URL&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;`wss://us-central1-aiplatform.googleapis.com/ws/google.cloud.aiplatform.v1beta1.LlmBidiService/BidiGenerateContent?bearer_token=&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;token&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;ws&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;WebSocket&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;WS_URL&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="c1"&gt;// Direct to Vertex AI&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The backend (FastAPI on Cloud Run) never sees the video or audio stream. It only mints a short-lived ephemeral token using Application Default Credentials. This keeps GCP credentials server-side while eliminating transcoding latency.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Result: Sub-second feedback on hand grading.&lt;/strong&gt;&lt;/p&gt;
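&lt;p&gt;For context, once the socket opens the client has to send a one-time setup message before any audio or video flows. Below is a minimal sketch of that payload, following the Live API's &lt;code&gt;setup&lt;/code&gt; shape; the helper name, the model resource path, and the &lt;code&gt;PROJECT_ID&lt;/code&gt; placeholder are illustrative, not copied from the repo:&lt;/p&gt;

```typescript
// Hypothetical helper: builds the one-time `setup` payload sent right after
// the WebSocket opens. Substitute your own project and region in the model
// resource path; this sketch hard-codes audio-only responses.
function buildSetupMessage(systemPrompt: string): string {
  return JSON.stringify({
    setup: {
      model:
        "projects/PROJECT_ID/locations/us-central1/publishers/google/models/gemini-live-2.5-flash",
      generationConfig: { responseModalities: ["AUDIO"] },
      systemInstruction: { parts: [{ text: systemPrompt }] },
    },
  });
}

// Usage: ws.onopen = () => ws.send(buildSetupMessage(lessonPrompt));
```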




&lt;h3&gt;2. The Smart Standby Engine — Solving Hallucinations&lt;/h3&gt;

&lt;p&gt;My first build sent camera frames continuously. The AI would grade random hand positions during idle moments. It was chaotic.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The fix: 0 FPS standby, 5 FPS active.&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// When AI is speaking instructions — camera off&lt;/span&gt;
&lt;span class="nf"&gt;useEffect&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;videoCaptureRef&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;current&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;isPracticeModeActive&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;videoCaptureRef&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;current&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;setFrameRate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;  &lt;span class="c1"&gt;// User is signing&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;videoCaptureRef&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;current&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;setFrameRate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;  &lt;span class="c1"&gt;// AI is talking&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;isPracticeModeActive&lt;/span&gt;&lt;span class="p"&gt;]);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The camera activates only when the user explicitly clicks "I'm Ready." The AI has a clean visual context and a specific task. Hallucinations dropped dramatically.&lt;/p&gt;




&lt;h3&gt;3. Context Window Hygiene — The Biggest Surprise&lt;/h3&gt;

&lt;p&gt;My first architecture maintained one WebSocket connection for an entire lesson (6+ words). By the third or fourth word, the AI would start confusing prior grading context with the current evaluation: it would "remember" a previous incorrect sign and apply that memory to the current word.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The solution: Fresh connection per word.&lt;/strong&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Session Type&lt;/th&gt;
&lt;th&gt;Connection Strategy&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Individual word practice&lt;/td&gt;
&lt;td&gt;Fresh WebSocket, injected system prompt&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Retry after incorrect sign&lt;/td&gt;
&lt;td&gt;Same connection (AI needs correction memory)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Boss Stage (full sentence)&lt;/td&gt;
&lt;td&gt;Single persistent connection&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;This is the &lt;strong&gt;Context Window Hygiene&lt;/strong&gt; pattern. Each word starts with a clean prompt and zero contamination from prior rounds. The Boss Stage intentionally retains context because evaluating a sequence requires remembering each sign.&lt;/p&gt;

&lt;p&gt;The key insight: &lt;strong&gt;stateless context = predictable grading.&lt;/strong&gt;&lt;/p&gt;
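&lt;p&gt;The table above boils down to a small policy decision. A hypothetical encoding (the names are mine, not the repo's):&lt;/p&gt;

```typescript
// Hypothetical encoding of the per-session connection policy.
type SessionType = "word" | "retry" | "boss";

// A fresh socket per word keeps grading stateless; retries and the Boss
// Stage deliberately keep the existing connection (and its context) alive.
function shouldReuseConnection(session: SessionType): boolean {
  return session === "retry" || session === "boss";
}
```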




&lt;h3&gt;4. Deterministic Grading — No AI Hallucination of Progress&lt;/h3&gt;

&lt;p&gt;The AI cannot advance the curriculum on its own. Every lesson step is gated by a &lt;strong&gt;tool call&lt;/strong&gt; that the frontend validates against Zustand state.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Tool call from Gemini triggers curriculum advance&lt;/span&gt;
&lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;mark_sign_correct&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;currentStore&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;useLessonStore&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getState&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;currentStore&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;isGradingWindowActive&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// Reject — grading window not open&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="nx"&gt;currentStore&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;advanceToNextWord&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt; &lt;span class="c1"&gt;// Zustand updates truth&lt;/span&gt;
  &lt;span class="k"&gt;break&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Gemini must call &lt;code&gt;mark_sign_correct&lt;/code&gt; with a valid grading window open. The client rejects tool calls outside the expected window. This eliminates curriculum hallucinations where the AI might "pretend" the user passed a sign they didn't perform.&lt;/p&gt;
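&lt;p&gt;For reference, a tool like &lt;code&gt;mark_sign_correct&lt;/code&gt; reaches the model as a function declaration in the session setup. A hedged sketch of that declaration, using the JSON-schema-style format Gemini tool calling expects; the exact parameter schema the project sends is an assumption:&lt;/p&gt;

```typescript
// Hypothetical declaration for the grading tool. The description doubles as
// an instruction to the model about when the call is legitimate; the client
// still re-validates every call against its own grading window.
const gradingTool = {
  functionDeclarations: [
    {
      name: "mark_sign_correct",
      description:
        "Call only after visually confirming the learner performed the current sign correctly.",
      parameters: {
        type: "OBJECT",
        properties: {
          word: { type: "STRING", description: "The word that was signed" },
        },
        required: ["word"],
      },
    },
  ],
};
```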




&lt;h2&gt;What I Built — Full Feature List&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;🎓 &lt;strong&gt;Live AI Tutoring&lt;/strong&gt; — Gemini watches your webcam and grades signs in real time&lt;/li&gt;
&lt;li&gt;🎤 &lt;strong&gt;Voice First&lt;/strong&gt; — the AI speaks instructions; the user signs, then says "Done" to trigger grading&lt;/li&gt;
&lt;li&gt;🗺️ &lt;strong&gt;Saga Map&lt;/strong&gt; — Gamified curriculum with unlockable lesson nodes&lt;/li&gt;
&lt;li&gt;⚔️ &lt;strong&gt;Boss Stage&lt;/strong&gt; — Full sentence signing to complete each lesson&lt;/li&gt;
&lt;li&gt;✨ &lt;strong&gt;AI Deck Generator&lt;/strong&gt; — Type any topic, Gemini generates a custom ASL lesson&lt;/li&gt;
&lt;li&gt;🌐 &lt;strong&gt;Community Decks&lt;/strong&gt; — Publish decks publicly, browse by 8 categories&lt;/li&gt;
&lt;li&gt;👤 &lt;strong&gt;Anonymous Sessions&lt;/strong&gt; — No sign-up required, try instantly&lt;/li&gt;
&lt;li&gt;🎭 &lt;strong&gt;Mascot Emotion System&lt;/strong&gt; — 7-state mascot tied to grading outcomes (Rive animated)&lt;/li&gt;
&lt;li&gt;🎮 &lt;strong&gt;Gamification&lt;/strong&gt; — XP, streaks, stars, gems&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;Google Cloud Stack&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Service&lt;/th&gt;
&lt;th&gt;Usage&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Vertex AI&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Gemini Live 2.5 Flash — BidiGenerateContent WebSocket&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Google GenAI SDK&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Gemini 2.0 Flash Lite — AI Deck Generation&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Cloud Run&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;FastAPI backend — ephemeral token minting&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Firebase Hosting&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;React/Vite frontend CDN&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Firestore&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;User profiles, XP/streak, deck library&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Firebase Auth&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Anonymous sessions + Google Sign-in&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Artifact Registry&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Docker image storage&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Secret Manager&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;API credentials (zero hardcoded keys)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Workload Identity Federation&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Keyless CI/CD via GitHub Actions&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Terraform&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Full IaC for Cloud Run, Artifact Registry, IAM&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;Try It&lt;/h2&gt;

&lt;p&gt;🔗 &lt;strong&gt;Live:&lt;/strong&gt; &lt;a href="https://signsensei.web.app" rel="noopener noreferrer"&gt;signsensei.web.app&lt;/a&gt; — no account required&lt;br&gt;&lt;br&gt;
📦 &lt;strong&gt;Code:&lt;/strong&gt; &lt;a href="https://github.com/prats-2311/SignSensei" rel="noopener noreferrer"&gt;github.com/prats-2311/SignSensei&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;No sign-up. Open the app, click Get Started, and you're learning ASL in under 10 seconds.&lt;/p&gt;




&lt;h2&gt;A Note to the Gemini Live API Team&lt;/h2&gt;

&lt;p&gt;Gemini Live already has a camera. It already runs in millions of pockets. What SignSensei proves is what that platform becomes when you focus it — on a curriculum, on honest grading, on a learner who deserves a patient, always-available teacher.&lt;/p&gt;

&lt;p&gt;To the Google Gemini Live API team: you built something that didn't exist before. A model that sees, hears, and speaks simultaneously in real time. SignSensei is one answer to the question of what to do with that. There are an estimated 70 million deaf people worldwide, and hundreds of millions more who want to learn their language.&lt;/p&gt;

&lt;p&gt;The API is ready. The world is waiting.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Built with ❤️ for the Gemini Live Agent Challenge.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;#GeminiLiveAgentChallenge #GeminiLive #GoogleCloud #ASL #AI&lt;/em&gt;&lt;/p&gt;

</description>
      <category>gemini</category>
      <category>googlecloud</category>
      <category>asl</category>
      <category>ai</category>
    </item>
  </channel>
</rss>
