<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Achintya Sharma</title>
    <description>The latest articles on DEV Community by Achintya Sharma (@sharmaachintya).</description>
    <link>https://dev.to/sharmaachintya</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3822740%2F8814dcb3-3d0c-4c0f-9025-4495adfb0636.png</url>
      <title>DEV Community: Achintya Sharma</title>
      <link>https://dev.to/sharmaachintya</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/sharmaachintya"/>
    <language>en</language>
    <item>
      <title>Building EyeGuide: A Real-Time AI Visual Companion for the Blind with Gemini Live API</title>
      <dc:creator>Achintya Sharma</dc:creator>
      <pubDate>Fri, 13 Mar 2026 17:14:35 +0000</pubDate>
      <link>https://dev.to/sharmaachintya/building-eyeguide-a-real-time-ai-visual-companion-for-the-blind-with-gemini-live-api-4ab7</link>
      <guid>https://dev.to/sharmaachintya/building-eyeguide-a-real-time-ai-visual-companion-for-the-blind-with-gemini-live-api-4ab7</guid>
      <description>&lt;p&gt;&lt;em&gt;This blog post was created for the purposes of entering the Gemini Live Agent Challenge hackathon. #GeminiLiveAgentChallenge&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;The Problem&lt;/h2&gt;

&lt;p&gt;An estimated 285 million people worldwide are visually impaired. Screen readers make digital interfaces accessible, but very little helps them "see" the physical world around them. I wanted to change that.&lt;/p&gt;

&lt;h2&gt;The Solution: EyeGuide&lt;/h2&gt;

&lt;p&gt;EyeGuide is a real-time AI companion that sees through a phone camera and talks naturally with the user. It describes surroundings, reads text, warns about hazards, and can be interrupted mid-sentence — just like talking to a friend.&lt;/p&gt;

&lt;h2&gt;Tech Stack&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Gemini 2.5 Flash Native Audio&lt;/strong&gt; — for real-time bidirectional audio + vision&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Google ADK (Agent Development Kit)&lt;/strong&gt; — bidi-streaming runtime with LiveRequestQueue&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;FastAPI + WebSocket&lt;/strong&gt; — real-time server communication&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Google Cloud Run&lt;/strong&gt; — serverless deployment&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Google Cloud Firestore&lt;/strong&gt; — user preferences&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;How I Built It&lt;/h2&gt;

&lt;p&gt;The core challenge was connecting the browser's camera and microphone to Google's Gemini Live API in real time. Here's the flow:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Browser captures audio (16kHz PCM) and camera frames (1 FPS JPEG)&lt;/li&gt;
&lt;li&gt;WebSocket sends both to a FastAPI backend on Cloud Run&lt;/li&gt;
&lt;li&gt;ADK's &lt;code&gt;LiveRequestQueue&lt;/code&gt; feeds data to Gemini's Live API&lt;/li&gt;
&lt;li&gt;Gemini processes audio + video simultaneously and responds with voice&lt;/li&gt;
&lt;li&gt;Audio response streams back through WebSocket to the browser&lt;/li&gt;
&lt;/ol&gt;
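
&lt;p&gt;Steps 1–2 can be sketched in a few lines of Python. The JSON envelope (the &lt;code&gt;mime_type&lt;/code&gt; and &lt;code&gt;data&lt;/code&gt; fields) is an assumed wire format for illustration, not necessarily the exact shape EyeGuide sends over its WebSocket:&lt;/p&gt;

```python
import base64
import json
import struct

def pcm16_from_floats(samples):
    """Convert float samples in [-1.0, 1.0] to 16-bit little-endian PCM bytes."""
    clamped = (max(-1.0, min(1.0, s)) for s in samples)
    return b"".join(struct.pack("<h", int(s * 32767)) for s in clamped)

def audio_message(samples, rate=16000):
    """Wrap a PCM chunk in a JSON envelope for the WebSocket (shape is illustrative)."""
    return json.dumps({
        "mime_type": f"audio/pcm;rate={rate}",
        "data": base64.b64encode(pcm16_from_floats(samples)).decode("ascii"),
    })

# One microphone chunk, ready to send over the socket.
msg = audio_message([0.0, 0.5, -0.5, 1.0])
```

&lt;p&gt;On the server side, the decoded bytes and MIME type map naturally onto what the &lt;code&gt;LiveRequestQueue&lt;/code&gt; expects as realtime input.&lt;/p&gt;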

&lt;p&gt;The most impressive part? &lt;strong&gt;Barge-in support&lt;/strong&gt; — users can interrupt the AI mid-sentence, and Gemini's built-in Voice Activity Detection handles it seamlessly.&lt;/p&gt;
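
&lt;p&gt;To make barge-in feel instant on the client, any audio that is buffered but not yet played must be dropped the moment an interruption is signaled. A minimal sketch of that idea (the queue shape and the notion of an "interrupted" event are assumptions, not EyeGuide's actual code):&lt;/p&gt;

```python
from collections import deque

class PlaybackQueue:
    """Buffers audio chunks awaiting playback; flushed when the model is interrupted."""

    def __init__(self):
        self._chunks = deque()

    def enqueue(self, chunk: bytes):
        self._chunks.append(chunk)

    def next_chunk(self):
        # Returns the oldest pending chunk, or None when nothing is queued.
        return self._chunks.popleft() if self._chunks else None

    def flush(self):
        # Called on an interruption event so stale speech never reaches the speaker.
        self._chunks.clear()

q = PlaybackQueue()
q.enqueue(b"chunk-1")
q.enqueue(b"chunk-2")
q.flush()  # barge-in detected: drop everything still queued
```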

&lt;h2&gt;Key Learnings&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Only &lt;code&gt;gemini-2.5-flash-native-audio-latest&lt;/code&gt; supports &lt;code&gt;bidiGenerateContent&lt;/code&gt; on the Google AI API&lt;/li&gt;
&lt;li&gt;ADK's &lt;code&gt;RunConfig&lt;/code&gt; with &lt;code&gt;StreamingMode.BIDI&lt;/code&gt; is the correct way to configure live streaming&lt;/li&gt;
&lt;li&gt;1 FPS video is surprisingly sufficient for scene understanding&lt;/li&gt;
&lt;li&gt;System prompt engineering makes or breaks a voice agent's persona&lt;/li&gt;
&lt;/ol&gt;
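
&lt;p&gt;Learning #3 implies dropping, not queueing, excess camera frames before they hit the WebSocket. A tiny rate gate like this one (an illustrative sketch, not EyeGuide's actual implementation) keeps the video feed at roughly 1 FPS:&lt;/p&gt;

```python
import time

class FrameGate:
    """Admits at most one frame per interval; extra frames are dropped, not queued."""

    def __init__(self, fps=1.0, clock=time.monotonic):
        self._interval = 1.0 / fps
        self._clock = clock            # injectable clock makes the gate testable
        self._last = float("-inf")     # so the very first frame always passes

    def admit(self):
        now = self._clock()
        if now - self._last >= self._interval:
            self._last = now
            return True
        return False

# Usage: send the JPEG only when the gate opens, otherwise discard the frame.
gate = FrameGate(fps=1.0)
```

&lt;p&gt;Dropping frames instead of buffering them matters: a backlog of stale frames would make the model describe a scene the user has already walked past.&lt;/p&gt;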

&lt;h2&gt;Try It&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Live App&lt;/strong&gt;: &lt;a href="https://eyeguide-966189115030.us-central1.run.app/" rel="noopener noreferrer"&gt;https://eyeguide-966189115030.us-central1.run.app/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GitHub&lt;/strong&gt;: &lt;a href="https://github.com/sharmaachintya/EyeGuide" rel="noopener noreferrer"&gt;https://github.com/sharmaachintya/EyeGuide&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;em&gt;Created for the Gemini Live Agent Challenge hackathon. #GeminiLiveAgentChallenge&lt;/em&gt;&lt;/p&gt;

</description>
      <category>a11y</category>
      <category>devchallenge</category>
      <category>gemini</category>
      <category>showdev</category>
    </item>
  </channel>
</rss>
