<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ngo Quoc Huy</title>
    <description>The latest articles on DEV Community by Ngo Quoc Huy (@nqh).</description>
    <link>https://dev.to/nqh</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3828096%2Fff3149bc-6bb0-4061-adbe-d6d7d05368a6.jpeg</url>
      <title>DEV Community: Ngo Quoc Huy</title>
      <link>https://dev.to/nqh</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/nqh"/>
    <language>en</language>
    <item>
      <title>I Told Gemini Live to Be Funny. It Read My System Prompt Out Loud.</title>
      <dc:creator>Ngo Quoc Huy</dc:creator>
      <pubDate>Mon, 16 Mar 2026 22:48:25 +0000</pubDate>
      <link>https://dev.to/nqh/i-told-gemini-live-to-be-funny-it-audibly-recited-my-system-prompt-57ji</link>
      <guid>https://dev.to/nqh/i-told-gemini-live-to-be-funny-it-audibly-recited-my-system-prompt-57ji</guid>
      <description>&lt;p&gt;I told Gemini Live to be funny. It read my system prompt out loud. Not a summary. Not a paraphrase. The actual text I wrote -- hooks, facts, anchors -- delivered as dialogue, in character, with vocal inflection. Cleopatra was quoting my engineering notes to a student like they were her own thoughts.&lt;/p&gt;

&lt;p&gt;That was 3am on a Sunday in Santa Marta, Colombia. I was sitting on the floor. Laptop on a chair. I'd been awake for about 36 hours. I was picking ants out of my rice pot because I'd left it open. Two cups of coffee in. And the app I'd been building for three days straight was, at that exact moment, doing the one thing I never expected a voice model to do: performing my documentation.&lt;/p&gt;

&lt;p&gt;This is the story of building Past, Live -- an app where you call historical figures and they pick up. Four pivots. Three Gemini models. 48 hours awake. One pyramid scheme joke that made the whole thing worth it.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Past, Live does
&lt;/h2&gt;

&lt;p&gt;Type any topic. "I wanna talk to one of the French astronauts." Flash finds a real person who lived it -- Jean-Loup Chrétien, first Western European in space -- generates their portrait and a scene from their era, and puts you on a live voice call. They pick up. They have opinions. They're funny. You can interrupt them mid-sentence. They remember you across calls.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Live&lt;/strong&gt;: &lt;a href="https://past-live.ngoquochuy.com" rel="noopener noreferrer"&gt;past-live.ngoquochuy.com&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Repo&lt;/strong&gt;: &lt;a href="https://github.com/nqh-packages/past-live" rel="noopener noreferrer"&gt;github.com/nqh-packages/past-live&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Demo&lt;/strong&gt;: &lt;a href="https://youtu.be/j6wccLHKbqk?si=_3nUE1RKmt8jWNXC" rel="noopener noreferrer"&gt;youtu.be/j6wccLHKbqk&lt;/a&gt;&lt;/p&gt;


&lt;h2&gt;
  
  
  Why I built this
&lt;/h2&gt;

&lt;p&gt;My brother and I had this idea for a while -- a Duolingo-style app for learning anything, gamifying the experience. History was the first topic. When the Gemini Live Agent Challenge opened, I took that core and entered.&lt;/p&gt;

&lt;p&gt;I'm financially tight right now. I've been trying to make my other project profitable -- last month in Budapest I went to salons and convinced them to let me build their websites for free just to get my first case studies. So if this hackathon worked out, it would mean a lot.&lt;/p&gt;

&lt;p&gt;But there's a personal use case too. I've been living in Budapest for 8 years. My siblings are half Hungarian. I want citizenship, which means studying Hungarian history, law, and culture. The app isn't limited to history -- if the model can learn it, the model can teach it. Feed it a citizenship questionnaire and a character who knows the material. Fun first. Knowledge second.&lt;/p&gt;


&lt;h2&gt;
  
  
  How I build things
&lt;/h2&gt;

&lt;p&gt;I'm a systems architect. I design the overall structure, write specifications, and direct AI coding agents. I don't write code line by line. With Gemini Live, there was no spec to write. The only way to learn was to break it. Four times.&lt;/p&gt;


&lt;h2&gt;
  
  
  Day 1: a quiz app that nobody wanted to use
&lt;/h2&gt;

&lt;p&gt;The first version was a gamified quiz. You roleplayed as the historical figure, picked the right options to advance. Think "you are Napoleon's advisor -- do you march on Moscow or retreat?" In text, it works. In voice mode, it was not fun at all. The soul wasn't there.&lt;/p&gt;

&lt;p&gt;Not being a history nerd, it quickly hit me: I don't have enough historical knowledge to make decisions as an advisor. The pressure of deciding someone's fate when you don't know the context just creates anxiety. Imagine how younger users must feel using this app 🤡&lt;/p&gt;

&lt;p&gt;So I flipped it. You're not the advisor. You're calling someone who lived through it. You ask them about everything. They have the stress, the urgency, the emotion. You just listen, ask questions, and make choices when they present them.&lt;/p&gt;

&lt;p&gt;That pivot required only a prompt rewrite. Zero architecture changes. The schema stayed the same. But the experience was completely different.&lt;/p&gt;


&lt;h2&gt;
  
  
  The heaviness problem
&lt;/h2&gt;

&lt;p&gt;V2 worked -- too well. Gemini's native audio conveys emotion genuinely. &lt;code&gt;enableAffectiveDialog&lt;/code&gt; makes the model carry emotional weight through vocal tone. When Constantine XI picks up and his city is falling, you feel it. When Bolívar talks about crossing the Andes knowing half his men won't make it, the urgency is real.&lt;/p&gt;

&lt;p&gt;But every call left me feeling worse. The stress, the heaviness -- I hated using my own app. I needed the characters to be people you actually want to stay on the phone with. Not tutors. Not fact machines. Not AI assistants being helpful. People.&lt;/p&gt;


&lt;h2&gt;
  
  
  50+ phone calls in one night
&lt;/h2&gt;

&lt;p&gt;V3 was prompt surgery. Nine or ten variations. Each one needed at least five test calls to evaluate. That's 50+ phone calls in one night. I was bored. I was tired. I played games between test calls. I drank two cups of coffee. I was picking ants out of my rice pot because I'd left it open.&lt;/p&gt;

&lt;p&gt;Then I went to wash the dishes. And the idea hit.&lt;/p&gt;

&lt;p&gt;I was already using Gemini Flash to generate character metadata -- name, colors, historical setting. Why not have Flash write the entire personality prompt for Live too? So the voice, humor, quirks, all of it gets generated per person instead of hardcoded.&lt;/p&gt;

&lt;p&gt;This is the core architectural insight: &lt;strong&gt;Gemini Live's reasoning is limited. It can't build a character AND perform it at the same time.&lt;/strong&gt; Flash builds. Live performs.&lt;/p&gt;
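&lt;p&gt;As a sketch, the split looks like this -- all names and shapes here are illustrative, not the project's actual code. Flash runs once per character to build a finished persona bundle; the Live session only ever receives finished material.&lt;/p&gt;

```typescript
// "Flash builds, Live performs" as a two-stage pipeline.
// All names here are illustrative, not the project's real API.
interface PersonaBundle {
  name: string;
  systemPrompt: string; // full personality prompt, written by Flash
  voice: string;        // Live voice name, chosen by Flash
}

// Stage 1: a reasoning model builds the character once, up front.
type PersonaBuilder = (topic: string) => PersonaBundle;

// Stage 2: the voice model only performs; it never invents the character.
function startCall(buildPersona: PersonaBuilder, topic: string) {
  const persona = buildPersona(topic);
  return {
    model: "gemini-2.5-flash-native-audio-preview",
    systemInstruction: persona.systemPrompt,
    speechConfig: { voiceName: persona.voice },
  };
}
```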


&lt;h2&gt;
  
  
  Four failed scripts
&lt;/h2&gt;

&lt;p&gt;Before the bag-of-material architecture worked, I tried four ways to structure the conversation:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;V1 -- Exact dialogue.&lt;/strong&gt; Student interrupted; model restarted identically three times then apologized. Dead on arrival.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;V2 -- Hints.&lt;/strong&gt; Gave the model "CONVEY: the strategic importance of the Bosphorus." It treated the list as a checklist, delivering all points in the first 30 seconds. No pacing, no conversation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;V3 -- Minimal hints + stop rule.&lt;/strong&gt; Mechanical beat-jumping. The model ignored what the student was saying and just moved to the next checkpoint.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;V4 -- Just destinations.&lt;/strong&gt; "At some point, discuss the siege." Too vague. The model produced pleasant, vague, educational filler. No specificity, no surprise, no personality.&lt;/p&gt;


&lt;h2&gt;
  
  
  The bag of sticks
&lt;/h2&gt;

&lt;p&gt;What actually worked: pack the prompt with discrete pieces of material -- hooks, verified facts, surprising anchors, decision points, scene descriptions, closing lines -- and let the model pull from them based on where the conversation goes. No linear script. No acts. No checkpoints.&lt;/p&gt;

&lt;p&gt;I call it a "bag of sticks." Flash generates the bag. Live reaches in and grabs whatever fits the moment.&lt;/p&gt;

&lt;p&gt;The quality came from specific, casual historical facts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"I looked like a coin. Which honestly, for a queen, was more useful than looking like a person."&lt;/li&gt;
&lt;li&gt;"They dragged 72 ships over a mountain. Over. A. Mountain."&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These aren't textbook phrasing. Flash generates them with the personality baked in. Live delivers them like they just occurred to the character mid-sentence.&lt;/p&gt;
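&lt;p&gt;A minimal sketch of what such a bag might look like as data, and how it flattens into a prompt -- the field and section names are my assumptions, not the repo's schema:&lt;/p&gt;

```typescript
// Hypothetical shape of the "bag of sticks": discrete, unordered material
// the voice model can pull from. No acts, no checkpoints, no order.
interface MaterialBag {
  hooks: string[];          // cold-open lines
  facts: string[];          // verified facts, casually phrased
  anchors: string[];        // surprising details worth returning to
  decisionPoints: string[]; // moments where the student chooses
  closers: string[];        // ways to end the call
}

// The prompt simply dumps the bag; pacing is left to the model.
function bagToPrompt(bag: MaterialBag): string {
  const section = (title: string, items: string[]) =>
    title + ":\n" + items.map((s) => "- " + s).join("\n");
  return [
    section("HOOKS (use at most one, only if it fits)", bag.hooks),
    section("FACTS (weave in when relevant, never as a list)", bag.facts),
    section("ANCHORS", bag.anchors),
    section("DECISION POINTS (offer when the moment is right)", bag.decisionPoints),
    section("CLOSERS", bag.closers),
  ].join("\n\n");
}
```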


&lt;h2&gt;
  
  
  The 3am breakthrough
&lt;/h2&gt;

&lt;p&gt;After all the script iterations, the characters were knowledgeable but flat. Informative but boring. I couldn't figure out why.&lt;/p&gt;

&lt;p&gt;Then I found it. A leftover directive from V2 -- the heavy, stressful version -- that said &lt;strong&gt;the model cannot make jokes&lt;/strong&gt;. Left over from when the characters were supposed to be experiencing crisis. I'd rewritten the entire prompt architecture around it and never removed that one line.&lt;/p&gt;

&lt;p&gt;I flipped it 180 degrees. The model should bounce energy back and forth with the student. If they joke, joke back. Push further. Be the funniest person at a dinner party who happens to have lived through something insane.&lt;/p&gt;

&lt;p&gt;That was the click.&lt;/p&gt;

&lt;p&gt;Four more rounds of tuning. I went to bed at 7:30am. Slept for 30 minutes. Got back up because I had 9 hours until the deadline and everything was still all over the place.&lt;/p&gt;


&lt;h2&gt;
  
  
  The pyramid scheme
&lt;/h2&gt;

&lt;p&gt;At some point during testing, I told Cleopatra I was her mother.&lt;/p&gt;

&lt;p&gt;She paused. Then: "Is this some kind of prank, or is it a pyramid scheme?"&lt;/p&gt;

&lt;p&gt;David and I laughed our asses off.&lt;/p&gt;

&lt;p&gt;The double meaning with pyramids. The timing. The delivery. None of that was prompted. The model invented it from the personality bag -- irreverent, strategic, mildly amused by everything. That's when I knew the architecture was right. You can't prompt humor directly. You give the model a personality, a bag of material, and rules for phone call pacing. Then you get out of the way.&lt;/p&gt;


&lt;h2&gt;
  
  
  30 voices, no documentation
&lt;/h2&gt;

&lt;p&gt;Gemini Live has 30 voices. There's basically no documentation on what they sound like. No samples. No descriptions. Nothing.&lt;/p&gt;

&lt;p&gt;I wrote a script that generates the same audio with each voice. Downloaded all 30. Then I uploaded the recordings to the Gemini API and asked it to describe each speaker -- age, racial background, energy, personality. From that catalog, I picked 4 male and 4 female standouts so Flash can match any historical character to the right voice automatically.&lt;/p&gt;

&lt;p&gt;Enceladus got Bolívar. Aoede got Cleopatra. The voice catalog lives in &lt;code&gt;server/src/voice-catalog.ts&lt;/code&gt;.&lt;/p&gt;
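&lt;p&gt;A hypothetical shape for a catalog entry and matcher -- the real catalog is in &lt;code&gt;server/src/voice-catalog.ts&lt;/code&gt;, and these field names are guesses, not its actual schema:&lt;/p&gt;

```typescript
// Illustrative catalog entry and matcher; the real catalog's fields may differ.
interface VoiceEntry {
  voice: string;                                      // e.g. "Aoede"
  gender: "male" | "female";
  energy: "calm" | "warm" | "commanding" | "playful"; // from the AI-written descriptions
}

// Prefer an exact gender + energy match, fall back to gender, then to anything.
function matchVoice(catalog: VoiceEntry[], gender: string, energy: string): string {
  const sameGender = catalog.filter((v) => v.gender === gender);
  const exact = sameGender.find((v) => v.energy === energy);
  if (exact) return exact.voice;
  return sameGender.length ? sameGender[0].voice : catalog[0].voice;
}
```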


&lt;h2&gt;
  
  
  Three models, five calls
&lt;/h2&gt;

&lt;p&gt;Every session uses three Gemini models making five API calls:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;#&lt;/th&gt;
&lt;th&gt;Model&lt;/th&gt;
&lt;th&gt;Purpose&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;&lt;code&gt;gemini-3-flash-preview&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Topic → 3 figures, full story script, voice matching&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;&lt;code&gt;gemini-3.1-flash-image-preview&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Scene art (16:9), pre-generated at preview&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;&lt;code&gt;gemini-3.1-flash-image-preview&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Character portrait, cached per character&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;&lt;code&gt;gemini-2.5-flash-native-audio-preview&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Live voice session with tool calling&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;td&gt;&lt;code&gt;gemini-3-flash-preview&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Post-call summary, key facts, farewell&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Flash writes the personality. Image generates the visuals. Live performs the call. Each model does what it's best at.&lt;/p&gt;


&lt;h2&gt;
  
  
  The art direction
&lt;/h2&gt;

&lt;p&gt;I'm a designer turned system architect. Before writing any code I scaffolded design variations until something stood out.&lt;/p&gt;

&lt;p&gt;I've always loved the crosshatch engravings on paper money, especially USD. Every character portrait uses that style -- monochrome black and white on vibrant orange. The scene images use the same engraving with about 30% orange placed intentionally at the focal point of the scene.&lt;/p&gt;

&lt;p&gt;Gemini 3.1 Image is really good at deciding where that orange should go. I haven't found one occasion where I wasn't happy with the result.&lt;/p&gt;


&lt;h2&gt;
  
  
  Tool calling: less is more
&lt;/h2&gt;

&lt;p&gt;I started with &lt;code&gt;googleSearch&lt;/code&gt;, &lt;code&gt;announce_choice&lt;/code&gt;, &lt;code&gt;end_session&lt;/code&gt;, and &lt;code&gt;show_scene&lt;/code&gt;. Four tools.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;googleSearch&lt;/code&gt; crashed everything. GitHub issue &lt;a href="https://github.com/google-gemini/generative-ai-js/issues/843" rel="noopener noreferrer"&gt;#843&lt;/a&gt;, 43+ reactions, open since May 2025. Tool calling with native audio is fragile. The more tools, the more crashes.&lt;/p&gt;

&lt;p&gt;I removed &lt;code&gt;googleSearch&lt;/code&gt; and dropped to three tools, all marked &lt;code&gt;NON_BLOCKING&lt;/code&gt; so the model doesn't go silent mid-sentence waiting for a tool response.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;announce_choice&lt;/code&gt; -- presents 2-3 tappable decision cards&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;show_scene&lt;/code&gt; -- displays era-specific images inline&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;end_session&lt;/code&gt; -- graceful hangup with farewell&lt;/li&gt;
&lt;/ul&gt;
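&lt;p&gt;Roughly, the three declarations look like this. The field shapes follow the &lt;code&gt;@google/genai&lt;/code&gt; function-declaration format; the parameter schemas here are illustrative, not the repo's exact ones.&lt;/p&gt;

```typescript
// The three tool declarations, roughly as passed to the Live session config.
// Parameter schemas are illustrative, not the repo's exact definitions.
const tools = [{
  functionDeclarations: [
    {
      name: "announce_choice",
      description: "Present 2-3 tappable decision cards to the student.",
      behavior: "NON_BLOCKING", // model keeps talking while the UI renders
      parameters: {
        type: "OBJECT",
        properties: { choices: { type: "ARRAY", items: { type: "STRING" } } },
      },
    },
    {
      name: "show_scene",
      description: "Display an era-specific image inline.",
      behavior: "NON_BLOCKING",
      parameters: { type: "OBJECT", properties: { sceneId: { type: "STRING" } } },
    },
    {
      name: "end_session",
      description: "Hang up gracefully after a farewell line.",
      behavior: "NON_BLOCKING",
    },
  ],
}];
```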


&lt;h2&gt;
  
  
  VAD: the invisible thing that makes or breaks it
&lt;/h2&gt;

&lt;p&gt;Voice Activity Detection determines when the model thinks you've stopped talking. Get it wrong and the model interrupts you constantly, or waits 3 seconds of silence before responding.&lt;/p&gt;

&lt;p&gt;The original config had start and end sensitivity inverted. Once I fixed it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="nx"&gt;automaticActivityDetection&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nl"&gt;startOfSpeechSensitivity&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;START_SENSITIVITY_LOW&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;endOfSpeechSensitivity&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;END_SENSITIVITY_HIGH&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;prefixPaddingMs&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;20&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;silenceDurationMs&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;500&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Low start sensitivity = doesn't trigger on background noise. High end sensitivity = picks up quickly when you've stopped. 500ms silence = natural phone call pacing.&lt;/p&gt;

&lt;p&gt;Audio chunks: 512 samples at 16kHz = 32ms per chunk. Google recommends 20-40ms. I was originally sending 4096-sample chunks -- 256ms each. The difference is night and day.&lt;/p&gt;
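&lt;p&gt;The arithmetic is simple enough to sketch: samples divided by sample rate gives chunk duration, and a splitter just walks the PCM buffer in fixed strides.&lt;/p&gt;

```typescript
// Chunk-size arithmetic: samples / sampleRate = seconds per chunk.
function chunkDurationMs(samples: number, sampleRateHz: number): number {
  return (samples / sampleRateHz) * 1000;
}

chunkDurationMs(512, 16000);  // 32ms -- inside Google's 20-40ms guidance
chunkDurationMs(4096, 16000); // 256ms -- the original chunk size, far too coarse

// Split a PCM buffer into fixed-size chunks for streaming.
function toChunks(pcm: Int16Array, chunkSamples: number) {
  const chunks: Int16Array[] = [];
  let i = 0;
  while (pcm.length - i >= chunkSamples) {
    chunks.push(pcm.slice(i, i + chunkSamples));
    i += chunkSamples;
  }
  return chunks;
}
```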




&lt;h2&gt;
  
  
  Re-anchoring: keeping characters in character
&lt;/h2&gt;

&lt;p&gt;Without intervention, Gemini Live drifts after 4-5 minutes. Characters start lecturing. They drop their personality. They forget they're on a phone call and start monologuing like a Wikipedia article.&lt;/p&gt;

&lt;p&gt;Every 4 model turns, I re-inject identity and behavioral anchors via &lt;code&gt;sendClientContent&lt;/code&gt; with &lt;code&gt;turnComplete: false&lt;/code&gt;. The &lt;code&gt;false&lt;/code&gt; is critical -- it prevents VAD from triggering, so there's no audio cutoff mid-re-anchor. The character doesn't know it happened.&lt;/p&gt;
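&lt;p&gt;A sketch of that loop, with an illustrative session stub -- the key detail is &lt;code&gt;turnComplete: false&lt;/code&gt; on the injected content:&lt;/p&gt;

```typescript
// Re-anchoring sketch. Every N completed model turns, silently re-inject the
// identity block. turnComplete: false is the key: the injected content is not
// treated as a finished user turn, so VAD never fires and audio isn't cut off.
const REANCHOR_EVERY = 4;
let modelTurns = 0;

function onModelTurnComplete(
  session: { sendClientContent: (content: object) => void },
  anchor: string,
) {
  modelTurns += 1;
  if (modelTurns % REANCHOR_EVERY === 0) {
    session.sendClientContent({
      turns: [{ role: "user", parts: [{ text: anchor }] }],
      turnComplete: false, // no VAD trigger, no mid-sentence cutoff
    });
  }
}
```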




&lt;h2&gt;
  
  
  Context window compression
&lt;/h2&gt;

&lt;p&gt;Audio consumes roughly 32 tokens per second per direction, so a 10-minute two-way call runs to ~38,400 tokens. Without compression, quality degrades after 5-6 minutes as the context fills up.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="nx"&gt;contextWindowCompression&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nl"&gt;slidingWindow&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{},&lt;/span&gt;
  &lt;span class="nx"&gt;triggerTokens&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;10000&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This enabled consistent 10-minute sessions without quality drift.&lt;/p&gt;
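&lt;p&gt;The budget math, as a sketch -- 32 tokens per second is the documented rate for Live audio input; treating both directions symmetrically is an approximation:&lt;/p&gt;

```typescript
// Back-of-envelope token budget (32 tokens/second is the documented audio
// rate; treating input and output symmetrically is an approximation).
const TOKENS_PER_SECOND = 32;

function callTokens(minutes: number, directions: number): number {
  return minutes * 60 * TOKENS_PER_SECOND * directions;
}

callTokens(10, 1); // 19200 -- one direction
callTokens(10, 2); // 38400 -- both directions, far past a 10k compression trigger
```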




&lt;h2&gt;
  
  
  The 40% crash rate
&lt;/h2&gt;

&lt;p&gt;Gemini Live crashes about 40% of the time. Not during quiet testing at 2am. During demo time -- around 6pm Colombian time, busy US hours.&lt;/p&gt;

&lt;p&gt;I built:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Auto-reconnect on 1011 errors (max 2 attempts)&lt;/li&gt;
&lt;li&gt;Full context replay on reconnect (complete transcript + tool call results re-injected)&lt;/li&gt;
&lt;li&gt;"Signal Lost" UI with retry/abort buttons&lt;/li&gt;
&lt;li&gt;Exponential backoff on initial connection&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;GoAway&lt;/code&gt; signal handling for graceful server-initiated reconnection&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The app doesn't pretend the API is stable. It builds around the instability.&lt;/p&gt;
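&lt;p&gt;A minimal sketch of the reconnect guard and backoff -- the 1011 close code and 2-attempt cap are from the list above; the base delay and ceiling are illustrative:&lt;/p&gt;

```typescript
// Reconnect policy sketch. The 1011 close code and 2-attempt cap are from
// the app; the base delay and ceiling here are illustrative.
const MAX_RECONNECTS = 2;

function shouldReconnect(closeCode: number, attemptsSoFar: number): boolean {
  if (closeCode !== 1011) return false;
  return MAX_RECONNECTS > attemptsSoFar;
}

// Exponential backoff for the initial connection: 500ms, 1s, 2s, ... capped.
function backoffMs(attempt: number, baseMs = 500, capMs = 8000): number {
  return Math.min(capMs, baseMs * 2 ** attempt);
}

backoffMs(0); // 500
backoffMs(4); // 8000 (capped)
```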




&lt;h2&gt;
  
  
  System prompt architecture
&lt;/h2&gt;

&lt;p&gt;Google's guidance prioritizes order: persona first, rules second, guardrails last. Voice models respond better to identity framing like "unmistakably" than to imperative commands.&lt;/p&gt;

&lt;p&gt;This works:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"You are unmistakably Cleopatra, irreverent, strategic, mildly amused by everything."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This doesn't:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"You MUST always stay in character as Cleopatra. NEVER break character."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The soul of the character voice lives in one file: &lt;code&gt;server/src/character-voice.ts&lt;/code&gt;. Every character -- preset or generated -- shares the same core rules. Be the funniest person at a dinner party who happens to have lived through something insane. Deliver facts WHILE being funny, never choose one over the other.&lt;/p&gt;
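&lt;p&gt;Assembly in that order can be sketched as a single function -- the names are mine, not the repo's:&lt;/p&gt;

```typescript
// Prompt assembly in the recommended order: persona, then rules, then
// guardrails. Function and section names are illustrative.
function buildSystemPrompt(persona: string, rules: string[], guardrails: string[]): string {
  return [
    persona, // identity framing: "You are unmistakably ..."
    "RULES:\n" + rules.map((r) => "- " + r).join("\n"),
    "GUARDRAILS:\n" + guardrails.map((g) => "- " + g).join("\n"),
  ].join("\n\n");
}
```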




&lt;h2&gt;
  
  
  The deadline
&lt;/h2&gt;

&lt;p&gt;I submitted 15 minutes late.&lt;/p&gt;

&lt;p&gt;My laptop was frozen. Multiple AI coding agents building the backend in parallel. Extracting the public repo from my private monorepo. Everything happening at once in Santa Marta heat. I uploaded the demo video, pasted the link, clicked submit. It turned 7pm right in front of my eyes. I refreshed the page -- my RAM had been eaten by the other processes -- and the form wouldn't let me submit anymore.&lt;/p&gt;

&lt;p&gt;I broke down. My friend was next to me. He calmed me down and pushed me to email the organizers asking for an extension. I wouldn't have done it on my own. I would have just been devastated and moved on.&lt;/p&gt;

&lt;p&gt;They replied at 9:11pm. They gave me until 10pm. I resubmitted.&lt;/p&gt;




&lt;h2&gt;
  
  
  Cost
&lt;/h2&gt;

&lt;p&gt;About &lt;strong&gt;$0.25 per session&lt;/strong&gt; on pay-as-you-go.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Component&lt;/th&gt;
&lt;th&gt;Cost&lt;/th&gt;
&lt;th&gt;%&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Voice (Gemini Live)&lt;/td&gt;
&lt;td&gt;~$0.04&lt;/td&gt;
&lt;td&gt;16%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Images (3x)&lt;/td&gt;
&lt;td&gt;~$0.20&lt;/td&gt;
&lt;td&gt;81%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Text (Flash + summary)&lt;/td&gt;
&lt;td&gt;~$0.005&lt;/td&gt;
&lt;td&gt;3%&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Images are the bottleneck. Free tier throttles to 12-15s per generation. Paid tier gets 2-3s. Images are pre-generated at preview time so the latency is hidden from the call.&lt;/p&gt;




&lt;h2&gt;
  
  
  What I learned
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Test the conversation on day one.&lt;/strong&gt; I built architecture for three days before having a real conversation with the model. The bag-of-material insight came from testing, not planning. Start with zero tools, prove the conversation works bare, then add tools one at a time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;VAD configuration is everything.&lt;/strong&gt; Verify audio chunk size and sensitivity settings immediately. The difference between "this feels like a phone call" and "this feels like talking to a robot" is four config values.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Gemini Live can't multitask.&lt;/strong&gt; Don't ask it to build a character and perform it. Offload generation to Flash. Let Live just... live.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;You can't prompt humor.&lt;/strong&gt; You prompt personality. Humor emerges. The "no jokes" directive killed the app for three days and I didn't notice because I was rewriting everything else around it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The model is smarter than you think.&lt;/strong&gt; Cleopatra made a pyramid scheme joke. Constantine thought someone was delivering chicken. Jean-Loup Chrétien asked about the KGB. None of these were in any prompt. The model invented them from personality + material + context. Get out of the way.&lt;/p&gt;




&lt;h2&gt;
  
  
  Stack
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Layer&lt;/th&gt;
&lt;th&gt;Technology&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Frontend&lt;/td&gt;
&lt;td&gt;Astro 5 + Svelte 5, Cloudflare Workers&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Backend&lt;/td&gt;
&lt;td&gt;Hono (TypeScript) on Cloud Run&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;AI Voice&lt;/td&gt;
&lt;td&gt;Gemini Live API (&lt;code&gt;@google/genai&lt;/code&gt;, &lt;code&gt;v1alpha&lt;/code&gt;)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;AI Text&lt;/td&gt;
&lt;td&gt;Gemini 3 Flash Preview&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;AI Image&lt;/td&gt;
&lt;td&gt;Gemini 3.1 Flash Image Preview&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Profiles&lt;/td&gt;
&lt;td&gt;Firestore (EU eur3)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Auth&lt;/td&gt;
&lt;td&gt;Clerk (anonymous-first)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;CI/CD&lt;/td&gt;
&lt;td&gt;Cloud Build&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;p&gt;&lt;em&gt;Built in 4 days from Santa Marta, Colombia. Sitting on the floor. Laptop on a chair. No AC. My brainchild.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>geminiliveagentchallenge</category>
      <category>hackathon</category>
      <category>geminilive</category>
      <category>ai</category>
    </item>
  </channel>
</rss>
