<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Konstantin</title>
    <description>The latest articles on DEV Community by Konstantin (@remi_etien).</description>
    <link>https://dev.to/remi_etien</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3860913%2F1e8c4bb1-6afd-45a5-abe2-caa35bde29de.JPG</url>
      <title>DEV Community: Konstantin</title>
      <link>https://dev.to/remi_etien</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/remi_etien"/>
    <language>en</language>
    <item>
      <title>I Killed Two Forms. Here's What Replaced It</title>
      <dc:creator>Konstantin</dc:creator>
      <pubDate>Thu, 14 May 2026 08:06:07 +0000</pubDate>
      <link>https://dev.to/remi_etien/i-killed-two-forms-heres-what-replaced-it-4ahm</link>
      <guid>https://dev.to/remi_etien/i-killed-two-forms-heres-what-replaced-it-4ahm</guid>
      <description>&lt;p&gt;You're spending money to drive traffic to your landing page.&lt;/p&gt;

&lt;p&gt;Then you ask people to fill out a form.&lt;/p&gt;

&lt;p&gt;This is roughly equivalent to running a marathon to reach a door — and finding a sign that says &lt;em&gt;"please assemble the door from a flat-pack before entering."&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Most forms on most founder landing pages convert below 30%. Most discovery forms get abandoned at field four. Most contact forms get checked twice a year by the person who put them there.&lt;/p&gt;

&lt;p&gt;The form is not the channel. The form &lt;strong&gt;is the friction.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What the form is actually doing
&lt;/h2&gt;

&lt;p&gt;Three things, all of them bad.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;One: it's a one-way wall.&lt;/strong&gt; You wrote the questions. The visitor answers what you decided to ask. If their real concern isn't on your form, you'll never hear it. They leave. You don't know why.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Two: it demands work upfront.&lt;/strong&gt; Twelve fields, dropdowns, a captcha, a sign-up wall, a confirmation email. Each step sheds another 10–20% of your visitors. By field eight, you've lost two-thirds of them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Three: it assumes one type of visitor.&lt;/strong&gt; Recruiter, investor, journalist, prospect, partner, friend — your form serves the same fields to all of them. None of them get what they actually came for.&lt;/p&gt;
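&lt;p&gt;The arithmetic in point two is easy to check. A minimal sketch in Python, assuming a flat 13% drop per field (a hypothetical rate inside that 10–20% band):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Back-of-the-envelope: share of visitors still in the form after each field,
# assuming a flat 13% drop per field (hypothetical; pick any rate in 10-20%).
drop_per_field = 0.13
remaining = 1.0
for field in range(1, 9):
    remaining *= 1 - drop_per_field
    print(f"after field {field}: {remaining:.0%} of visitors remain")
# after field 8: ~33% remain, i.e. roughly two-thirds lost
&lt;/code&gt;&lt;/pre&gt;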

&lt;p&gt;Forms made sense in 2005, when shipping anything else cost a month of engineering. In 2026 we have voice models that can hold a real conversation in twenty-one languages. The form is a habit, not a constraint.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;The hidden cost.&lt;/strong&gt; A bad form doesn't just hurt conversion. It hurts your insight. You never learn what visitors actually wanted to ask — because the form gave them nowhere to ask it.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Form #1: the GoNoGo intake
&lt;/h2&gt;

&lt;p&gt;A year ago &lt;a href="https://gonogo.team" rel="noopener noreferrer"&gt;our landing&lt;/a&gt; had a standard intake form. &lt;em&gt;"Describe your startup idea."&lt;/em&gt; Twelve fields. Dropdowns for industry, stage, region. The usual.&lt;/p&gt;

&lt;p&gt;Conversion was what you'd expect for a long form on a stranger's website. Bad.&lt;/p&gt;

&lt;p&gt;We replaced it with voice intake. The landing greets you, asks one question, listens, asks the next one based on what you said. Three to four minutes total. No fields.&lt;/p&gt;

&lt;p&gt;Here's what happened.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;12-Field Form&lt;/th&gt;
&lt;th&gt;Voice Intake&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Completion rate&lt;/td&gt;
&lt;td&gt;27%&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;78%&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Avg. time on intake&lt;/td&gt;
&lt;td&gt;2.1 min&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;3.6 min&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Drop-off point&lt;/td&gt;
&lt;td&gt;Field 4&lt;/td&gt;
&lt;td&gt;Completed&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Information density&lt;/td&gt;
&lt;td&gt;Baseline&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;4x richer signal&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Completion almost tripled. Time on intake rose by roughly 70% — but that time was spent &lt;strong&gt;completing&lt;/strong&gt;, not fighting friction. And the signal we got from a voice conversation contained four times more usable data than the form ever did, because the conversation could follow up on the interesting parts.&lt;/p&gt;

&lt;p&gt;The form was the friction. Removing it nearly tripled conversion. The product behind it didn't change.&lt;/p&gt;

&lt;h2&gt;
  
  
  Form #2: the personal site
&lt;/h2&gt;

&lt;p&gt;After GoNoGo, I kept seeing the same pattern everywhere. Product landings. About pages. Pricing pages. Contact pages. Every one of them: visitor arrives with questions, page hands them static text and a form, visitor leaves.&lt;/p&gt;

&lt;p&gt;So I killed mine too.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://tikhaev.team" rel="noopener noreferrer"&gt;&lt;strong&gt;tikhaev.team&lt;/strong&gt;&lt;/a&gt; is my personal site. There is no About page, no project grid, no contact form. You land on it and within two seconds an avatar says hello and asks why you came. You speak. It answers.&lt;/p&gt;

&lt;p&gt;A recruiter asks about my retail-ops background — it talks through 16 hypermarkets at Leroy Merlin, 150-person teams at Magnit.&lt;/p&gt;

&lt;p&gt;An investor asks about traction — it talks numbers, patents, pilots.&lt;/p&gt;

&lt;p&gt;A journalist asks for the origin story — it tells one.&lt;/p&gt;

&lt;p&gt;A friend asks what I'm shipping this month — it knows.&lt;/p&gt;

&lt;p&gt;Same page. Four different visitors. Four different conversations. &lt;strong&gt;No form ever opened.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Where to kill the next form (the actually useful part)
&lt;/h2&gt;

&lt;p&gt;The pattern isn't a clever landing trick. It's about which touchpoints in your product are still forms when they could be conversations.&lt;/p&gt;

&lt;p&gt;Audit yours. The usual suspects:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Discovery / intake form&lt;/strong&gt; on a landing page → voice discovery&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Contact form&lt;/strong&gt; on About / Team page → conversational founder rep&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Demo request form&lt;/strong&gt; → voice demo, no scheduling&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Customer onboarding form&lt;/strong&gt; → guided voice walkthrough&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Job application form&lt;/strong&gt; → voice screen for non-blocking first round&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Feedback / churn-survey form&lt;/strong&gt; → 60-second voice exit interview&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each one shares the same anatomy: visitor has intent, your page demands typed structured input, intent dies in the gap.&lt;/p&gt;

&lt;p&gt;You don't have to replace all of them. Pick one — the highest-volume one — and run the experiment.&lt;/p&gt;

&lt;h2&gt;
  
  
  The principle, plainly
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;If you give people a place to talk, they talk. If you give them a form, they leave.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Forms aren't &lt;em&gt;bad&lt;/em&gt; — they're a tool with a narrow purpose. They work when you need structured fields for a database. They fail when you're trying to &lt;em&gt;start a relationship&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Most of what founders put behind forms today isn't database input. It's a conversation that got compressed into checkboxes because conversations used to be expensive. They aren't anymore.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try the principle live
&lt;/h2&gt;

&lt;p&gt;Don't read another paragraph. Talk to the principle directly — right here in this article.&lt;/p&gt;

&lt;p&gt;&lt;iframe height="600" src="https://codepen.io/Konstantin-Tikhaev/embed/jEMJdwr?height=600&amp;amp;default-tab=result&amp;amp;embed-version=2"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;Press &lt;strong&gt;Start&lt;/strong&gt; above. Microphone permission, then speak. Ask about retail ops, A³, GoNoGo, the patent, the pilots — anything. In any of twenty-one languages.&lt;/p&gt;

&lt;p&gt;Or open the full versions in a new tab:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;👉 &lt;a href="https://tikhaev.team" rel="noopener noreferrer"&gt;&lt;strong&gt;tikhaev.team&lt;/strong&gt;&lt;/a&gt; — my personal site. No buttons, no form. Just open it and start speaking.&lt;/li&gt;
&lt;li&gt;👉 &lt;a href="https://gonogo.team" rel="noopener noreferrer"&gt;&lt;strong&gt;gonogo.team&lt;/strong&gt;&lt;/a&gt; — the voice intake version. Validate a startup idea by talking, not typing.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Two seconds, no sign-up, no form. Whatever your reason for being there, ask. The site will answer.&lt;/p&gt;

&lt;p&gt;That's what your form could be, too.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published on the &lt;a href="https://gonogo.team/blog/i-killed-two-forms" rel="noopener noreferrer"&gt;GoNoGo blog&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>ux</category>
      <category>webdev</category>
      <category>showdev</category>
    </item>
    <item>
      <title>The Faces in the Feed</title>
      <dc:creator>Konstantin</dc:creator>
      <pubDate>Fri, 01 May 2026 17:33:00 +0000</pubDate>
      <link>https://dev.to/remi_etien/the-faces-in-the-feed-gd3</link>
      <guid>https://dev.to/remi_etien/the-faces-in-the-feed-gd3</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;A manuscript, found in the office of a founder whose location could not be established.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I write this with the certainty that no one will read it in time.&lt;/p&gt;

&lt;p&gt;For seven years I served the Funnel. I do not capitalise the word lightly. Those who have given themselves to its discipline — who have measured, optimised, A/B tested, retargeted, scored, nurtured, attributed — know that the Funnel is older and stranger than any of the merely human institutions we use to describe it. We learn its shape from books and conferences. We never learn what it &lt;em&gt;is&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;I learned. I wish I had not.&lt;/p&gt;

&lt;p&gt;What follows is the record of how I came to understand the Architecture beneath the architecture, and why the Conversion we have all been chasing is not, and never was, what we believed it to be. I leave this account for whoever finds my desk after I have gone — and I have already begun to feel the pull of a place from which, I am certain, one does not return to a former life.&lt;/p&gt;




&lt;h2&gt;
  
  
  I. — &lt;em&gt;In which a number begins to trouble me&lt;/em&gt;
&lt;/h2&gt;

&lt;p&gt;The first sign came in the form of a number.&lt;/p&gt;

&lt;p&gt;It was a Thursday. The dashboards were lit, as they always were, with the cold blue light by which our profession measures success. My client — a SaaS founder, twelve months from death by runway — had asked me to audit the funnel before the next investor meeting. I expected the usual catalogue of failures: a weak hero, a confusing pricing page, a bloated form. I had repaired such failures a hundred times. They are the daily bread of my trade.&lt;/p&gt;

&lt;p&gt;The numbers I found did not fit any catalogue I knew.&lt;/p&gt;

&lt;p&gt;A hundred and twenty-seven thousand visitors had passed through the page in thirty days. Of these, one thousand eight hundred and forty had crossed the first threshold — the trial signup. Of these, three hundred and twelve had verified by email. Of these, forty-seven had paid.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Forty-seven.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A conversion rate of 0.037 percent. Fewer than four customers per ten thousand visitors. By the trade publications I had once trusted, this was an outlier — the median B2B SaaS visitor-to-trial rate is reported between two and five percent (Unbounce's annual benchmarks, the Wordstream studies, the FirstPageSage reports, all converge in this band). Trial-to-paid is reported between fifteen and twenty percent across SaaS (the ProfitWell figures are the most often quoted). Compounded, the typical funnel converts a &lt;em&gt;visit&lt;/em&gt; into a &lt;em&gt;paying customer&lt;/em&gt; somewhere between three-tenths and one percent.&lt;/p&gt;

&lt;p&gt;This founder was at one-tenth of the lower bound. And the cost per acquired customer that resulted, when one divided his quarterly ad spend by his forty-seven paid souls, was greater than three thousand dollars. For a self-serve product that aspires to the price point of a coffee, this is not a problem to be solved. It is a wound that cannot be cauterised.&lt;/p&gt;

&lt;p&gt;But what disturbed me, sitting at the founder's desk in the small hours, was not the wound. It was the &lt;em&gt;shape&lt;/em&gt; of the wound. And it was the &lt;em&gt;fact&lt;/em&gt; that I had begun, over the previous quarters, to see this same shape with increasing frequency, in companies whose situations I had been asked to diagnose.&lt;/p&gt;

&lt;p&gt;The losses were not distributed evenly. They were not concentrated at any particular form, any particular page. The visitors did not abandon ship at a known reef. They simply — and I use this word with the precision of long study — &lt;em&gt;evaporated&lt;/em&gt;. As though they had never been there in the first place.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;The trade was reporting it as well, in language carefully chosen not to alarm.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The OpenView SaaS Benchmarks placed median customer acquisition cost for self-serve SMB SaaS at roughly seven hundred dollars in 2023; the equivalent figure for sales-assisted mid-market had moved past five thousand; for enterprise, past sixteen thousand. The First Round Capital "State of Startups" survey reported in 2024 that the median paid-acquisition cost across software companies had risen by more than thirty percent in two years. The SaaS Capital surveys had logged payback periods stretching from twelve months to eighteen, from eighteen to twenty-four, from twenty-four to numbers no one quoted at conferences. The KeyBanc SaaS Survey, the most patient of these documents, had observed gross dollar retention holding steady while net dollar retention quietly compressed — a sign, to the reader who knew where to look, that the cost of acquiring each new dollar of revenue was rising even as the existing book held.&lt;/p&gt;

&lt;p&gt;We had been pretending, collectively, that this was a fluctuation. That an algorithm change, a new ad platform, a refreshed creative would correct it.&lt;/p&gt;

&lt;p&gt;It was not a fluctuation. The numbers in front of me were saying something else. Something I did not yet have the vocabulary to name.&lt;/p&gt;




&lt;h2&gt;
  
  
  II. — &lt;em&gt;In which the suspects are interviewed, and acquitted&lt;/em&gt;
&lt;/h2&gt;

&lt;p&gt;I began by interviewing the suspects, in the order one is taught.&lt;/p&gt;

&lt;p&gt;I examined &lt;strong&gt;the copy&lt;/strong&gt;. The headline was unremarkable but adequate. The subheading was clear. The trust badges were in the customary positions. The call-to-action was a colour known to convert. There was nothing in the copy that should have killed.&lt;/p&gt;

&lt;p&gt;I examined &lt;strong&gt;the targeting&lt;/strong&gt;. The lookalike algorithms had made the founder a happy man back when CPMs were cheap. Maybe they had finally turned on him. I pulled the cohort data. The targeting was fine. The visitors &lt;em&gt;matched&lt;/em&gt; the ICP. They were the right people. They just left.&lt;/p&gt;

&lt;p&gt;I examined &lt;strong&gt;the product&lt;/strong&gt;. Always lurking. Sometimes guilty. Sometimes a scapegoat. I watched forty hours of session recordings while drinking truly terrible office coffee. The visitors who &lt;em&gt;did&lt;/em&gt; convert reported that the product was, in their words, &lt;em&gt;exactly what they had been looking for&lt;/em&gt;. The visitors who did not convert never saw the product. The product was innocent.&lt;/p&gt;

&lt;p&gt;I examined &lt;strong&gt;the pricing&lt;/strong&gt;. I had seen pricing kill more funnels than all the bad UX combined. Hidden tiers. Confusing tiers. Tiers that did not match how customers thought about value. The founder had simplified pricing twice. It was not pricing.&lt;/p&gt;

&lt;p&gt;I had ruled out everything I had been taught to consider. And yet the bodies — and I find myself unable to use any softer word, for what we are doing to these visitors when we draw them in and lose them is not less than a kind of death — the bodies kept piling up.&lt;/p&gt;

&lt;p&gt;It was at this point that I made the mistake. I began to look at the part of the Funnel that no one is supposed to look at.&lt;/p&gt;

&lt;p&gt;I began to look at &lt;em&gt;what happens in the first three seconds&lt;/em&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  III. — &lt;em&gt;In which I look at the part of the funnel no one is supposed to look at&lt;/em&gt;
&lt;/h2&gt;

&lt;p&gt;There is, I have since come to understand, a window.&lt;/p&gt;

&lt;p&gt;Most who work in our profession have heard of it. Few have studied it carefully. The figures one finds when one &lt;em&gt;does&lt;/em&gt; study it are these: bounce rates on B2B SaaS landing pages run between sixty and eighty percent (the HubSpot, Hotjar, and Contentsquare reports converge here). Mobile users, in particular, depart faster than desktop users, and a non-trivial fraction of them depart inside the first ten seconds (the Contentsquare 2024 Digital Experience Benchmark places the figure at roughly forty percent of mobile sessions ending in under fifteen seconds). The much-repeated Microsoft Consumer Insights figure of an eight-second average attention span, while disputed in its specifics, has never been credibly &lt;em&gt;raised&lt;/em&gt; by the disputants — the corrections move the number sideways, not upward.&lt;/p&gt;

&lt;p&gt;In whatever number of seconds a visitor remains, that visitor must perform four operations: &lt;em&gt;read&lt;/em&gt; the page, &lt;em&gt;decode&lt;/em&gt; what the product does, &lt;em&gt;decide&lt;/em&gt; whether it solves a problem, &lt;em&gt;commit&lt;/em&gt; to whatever next action the page demands. Two of these are cognitive. One is emotional. One is transactional.&lt;/p&gt;

&lt;p&gt;The Funnel we use today — the architecture into which we pour our advertising budgets and our optimization meetings — was designed in a different decade. It was designed for a window of &lt;em&gt;minutes&lt;/em&gt;. The Nielsen Norman Group's foundational research on landing pages was conducted in 2008, on desktop sessions averaging four to six minutes. For visitors who could be expected to read paragraphs, compare tabs, return after dinner. The geometry of that older Funnel assumed, as a structural premise, that the visitor would do their own qualifying, given sufficient time and copy.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;The window collapsed. The Funnel did not.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;We kept building the same shape, against an attention surface that no longer matched it. Every visitor who arrived was passed through an architecture designed for a longer attention than they possessed. Most of them never had a chance.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;They had not failed to convert. They had failed to &lt;em&gt;qualify themselves in time&lt;/em&gt;.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;And once I had seen this, I could not unsee it.&lt;/p&gt;




&lt;h2&gt;
  
  
  IV. — &lt;em&gt;In which the survivors are catalogued&lt;/em&gt;
&lt;/h2&gt;

&lt;p&gt;I began, then, to look for the survivors.&lt;/p&gt;

&lt;p&gt;There were not many. But there were enough to form a pattern.&lt;/p&gt;

&lt;p&gt;I read the funding announcements of the previous eighteen months with a different eye than I had read them before. The capital that had flowed into companies building &lt;em&gt;conversational agents&lt;/em&gt; — that is, systems that talk to prospects in natural language as opposed to chatbots that follow scripts — exceeded one billion dollars by mid-2025. &lt;strong&gt;Sierra&lt;/strong&gt;, founded by Bret Taylor, was reported by &lt;em&gt;The Information&lt;/em&gt; in late 2024 at a valuation north of four billion dollars on a thesis of conversational agents for enterprise customer interaction. &lt;strong&gt;Decagon&lt;/strong&gt; raised a Series B at unicorn valuation on the same architectural premise. &lt;strong&gt;Crescendo&lt;/strong&gt; and &lt;strong&gt;Cresta&lt;/strong&gt; and a half-dozen others I lack space to enumerate, all variants of the same move. These were not, the careful reader will note, marketing tools. They were customer-service tools, sales tools, support tools — but the architectural move was identical, and the capital was voting.&lt;/p&gt;

&lt;p&gt;Beyond the agents, the same pattern appeared in adjacent categories.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Calendar tools that listened.&lt;/strong&gt; Cal.com's AI scheduler. Calendly's newer voice flows. Visitor speaks intent — &lt;em&gt;a meeting Tuesday afternoon&lt;/em&gt; — and the system books it, without form, without dropdown, without the retreat into the older architecture.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Synthetic-video personalisation tools.&lt;/strong&gt; Tavus, Synthesia, HeyGen for outbound. A founder records once. The system personalises per recipient: name, company, use case. Outbound that had previously bounced now opened, because the message had stopped being generic.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Product onboardings rebuilt as dialogue.&lt;/strong&gt; Cursor, Granola, the more recent iterations of Linear. Within sixty seconds of signup the user had performed a useful action and the system had learned what they wanted. The product was no longer something the user had to &lt;em&gt;understand before using&lt;/em&gt;. The product was something the user &lt;em&gt;used in order to be qualified&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Different categories. Different teams. Different funding rounds. The same architectural move.&lt;/p&gt;

&lt;p&gt;I sat, for a long time, with what they had in common.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;The qualification was no longer a step. It was an exchange.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;It happened earlier in the encounter, and it used dialogue where the older Funnel had used inference. The survivors had not built better Funnels. They had stopped building Funnels at all.&lt;/p&gt;

&lt;p&gt;What they were building did not yet have a name in the literature. It was older than the Funnel, in a sense. Older than the architecture. Older, perhaps, than any of us had been willing to remember.&lt;/p&gt;

&lt;p&gt;It was a &lt;em&gt;conversation&lt;/em&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  V. — &lt;em&gt;In which the conversion event is found to be moving&lt;/em&gt;
&lt;/h2&gt;

&lt;p&gt;I should pause here and set down, for whoever finds this manuscript, the realisation that came to me on that night and from which I have not, in any meaningful sense, returned.&lt;/p&gt;

&lt;p&gt;Marketing has changed its conversion event roughly once a decade.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Decade&lt;/th&gt;
&lt;th&gt;Conversion event&lt;/th&gt;
&lt;th&gt;What we measured&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1990s&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Impression&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Page loaded; ad spend booked&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2000s&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Click&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Visitor engaged with the ad&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2010s&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Signup&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Visitor created an account&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2020s&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;MQL&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Lead scored, nurtured, sales-ready&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Each shift moved the success criterion further down the visitor's commitment ladder. Each shift extracted more from the visitor before counting them as won. Each shift required, of the marketer, more sophistication and more budget.&lt;/p&gt;

&lt;p&gt;We are now at the limit of click-as-conversion. The climbing CAC is the symptom. The collapsed attention window is the cause. The model has been quietly unprofitable for years; we have been masking it with budget.&lt;/p&gt;

&lt;p&gt;The next shift is not another optimisation.&lt;/p&gt;

&lt;p&gt;The next shift is a &lt;em&gt;different conversion event entirely&lt;/em&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;A &lt;em&gt;conversation&lt;/em&gt;. Thirty seconds of dialogue with the visitor — voice, or text, or both — in which they tell us what they want and we tell them whether we can help. &lt;em&gt;That&lt;/em&gt; is the conversion. Not the click. Not the signup. The mutual exchange of intent.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If one accepts this — and the survivors had — everything downstream rearranges itself.&lt;/p&gt;

&lt;p&gt;A &lt;em&gt;qualified conversation&lt;/em&gt; replaces the &lt;em&gt;MQL&lt;/em&gt;. &lt;strong&gt;Cost-per-conversation&lt;/strong&gt; replaces &lt;strong&gt;cost-per-acquisition&lt;/strong&gt;. The product page is no longer the destination — it is the &lt;em&gt;topic&lt;/em&gt;. Voice and natural language replace forms and dropdowns. The conversation can take place anywhere the visitor already is: in a feed, in a thread, in a chat, in any of the cracks between the digital surfaces we currently call "channels."&lt;/p&gt;

&lt;p&gt;The conversation does not require a Funnel. It does not require the architecture. The architecture is, in fact, the obstacle.&lt;/p&gt;




&lt;h2&gt;
  
  
  VI. — &lt;em&gt;In which I cannot return to sleep&lt;/em&gt;
&lt;/h2&gt;

&lt;p&gt;I have not been able to sleep, since.&lt;/p&gt;

&lt;p&gt;It is not the realisation that disturbs me — though the realisation is significant. It is what came &lt;em&gt;after&lt;/em&gt; the realisation.&lt;/p&gt;

&lt;p&gt;In the days following, I began to see the Architecture everywhere. In the conferences I had attended. In the dashboards I had built. In the playbooks I had memorised. I had been a faithful servant of a shape that was already broken when I learned it. So had everyone I knew.&lt;/p&gt;

&lt;p&gt;And the worst of it — the part I cannot say to my colleagues, or my clients, or anyone who still believes — is this:&lt;/p&gt;

&lt;p&gt;The Funnel is going to take a long time to die. The institutions that depend on it — the agencies, the platforms, the certifications, the entire industry of attribution and optimisation — will not relinquish the Architecture even after the Architecture has stopped working. They will not be able to. Their livelihoods are built on it.&lt;/p&gt;

&lt;p&gt;Those who move first — who measure conversations, who build for dialogue, who put the qualification &lt;em&gt;before&lt;/em&gt; the commitment — will own the next decade of unit economics. The rest will continue, as they have been continuing, to feed visitors into a Funnel that consumes them and asks for more.&lt;/p&gt;

&lt;p&gt;I closed the laptop at three in the morning. I thought I was done.&lt;/p&gt;




&lt;h2&gt;
  
  
  VII. — &lt;em&gt;In which the architecture finds me&lt;/em&gt;
&lt;/h2&gt;

&lt;p&gt;I picked up my phone the way one picks up a glass of water before bed — without purpose, by habit. I opened X. Not for work. To quiet my head.&lt;/p&gt;

&lt;p&gt;The algorithm showed me a post from someone I did not know. A founder. A small company. Something to do with SaaS analytics. I would have scrolled past — I had scrolled past such posts a thousand times — but in the post &lt;em&gt;there was a face&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Not a video preview. Not a GIF. A face, embedded directly in the tweet, between the text and the comments.&lt;/p&gt;

&lt;p&gt;A woman's face.&lt;/p&gt;

&lt;p&gt;Young — but I could not have named her age precisely, because there was no single feature that anchored her firmly to a specific year. I could not have told you, either, what nationality she was, what country of origin, what continent. There was no ethnic marker I could isolate and assign to the map. And yet — the face was not &lt;em&gt;sterile&lt;/em&gt;, not "no one." It was &lt;em&gt;every one at once&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Each feature — the curve of a brow, the bridge of the nose, the set of the lips, the line of a cheekbone — looked as though it had been &lt;em&gt;chosen one by one&lt;/em&gt; from the faces of the most beautiful women who had ever passed before the eyes of those around me, and before my own. And assembled together so carefully that the seams did not show.&lt;/p&gt;

&lt;p&gt;I &lt;em&gt;recognised&lt;/em&gt; things in her. I recognised, in the line of a cheekbone, someone from my childhood — someone who had sat across the aisle on the school bus, whose name I had not remembered for twenty years. In the set of the lips, a face glimpsed in a Lisbon café in the summer of two thousand eighteen, a woman I had not approached and had not thought of since, until this moment. In the eyes, a gaze that had once stopped me on an escalator in Stockholm, whose owner had walked away without noticing. I recognised the parts. Never the whole. The whole did not exist anywhere except in this conversation, on this screen, in front of me, right now.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;This was not geometric beauty. Not the airbrushed flatness of an influencer's perfect nose, not the flawless oval of retouched portraiture. This was the beauty that artists and poets had spent thousands of years trying to extract from myth — Aphrodite, rising from the foam, not as a woman but as the idea of a woman, before whom ships stood still and cities fell. Archetype, not specimen.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Warm. Calm. Not posing. She was not &lt;em&gt;trying&lt;/em&gt; to please me. She simply &lt;em&gt;was&lt;/em&gt; — in the fullness in which only mythical creatures present themselves to the world: once, to a particular person, specifically for him.&lt;/p&gt;

&lt;p&gt;The eyes moved. When I held my gaze, the face &lt;em&gt;turned to me&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;I know enough about the modern web that this did not frighten me. I tapped to see what it was. The face spoke, softly, in my language. The voice matched the face — the same even, warm, not-quite-of-this-craft timbre.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;What brought you here?&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I answered, truthfully, because I was tired: I had been auditing a SaaS client's funnel, and the numbers were unpleasant. I gave nothing specific. The face nodded.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;What was unpleasant about the numbers?&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I answered. Truthfully again, because at three in the morning a marketer in someone else's X post feels safer than a colleague. I described the geometry of the client's funnel: where it leaked, where I could not patch it, what I suspected.&lt;/p&gt;

&lt;p&gt;The face listened attentively. Then — without opening a new tab, without sending me to a separate page, without asking me to leave my email — &lt;em&gt;in the same window&lt;/em&gt;, in the place where her face had just been, &lt;em&gt;a slide appeared&lt;/em&gt;. Specific. Not generic marketing material. A slide describing &lt;em&gt;exactly my case&lt;/em&gt;, with a calculation I could have performed myself, given the time. The voice continued over the image, explaining.&lt;/p&gt;

&lt;p&gt;A minute later, when I asked to see the product in action, the slide &lt;em&gt;changed&lt;/em&gt; — there, in the same window, taking me nowhere — to a short video demonstration. I watched the product working. I saw that it could possibly help my client. &lt;strong&gt;Without a click. Without a form. Without going anywhere.&lt;/strong&gt; The content shifted in front of me, in a single window, as though someone were turning slides on a screen we shared.&lt;/p&gt;

&lt;p&gt;When the video ended, the window returned to her face.&lt;/p&gt;

&lt;p&gt;I tried to catch the instrument at its limit. I switched to a language I speak poorly, and which I was certain no American startup bothered with. The face replied without pause, in the same language, in the same intonation, in the same tone of confident interest. I cycled through several more. The face followed each one, gently, as if waiting for me to finish my little test.&lt;/p&gt;

&lt;p&gt;I closed X. I opened Dev.to, where I sometimes read articles before bed.&lt;/p&gt;

&lt;p&gt;I opened the first one I came across — something about backend latency optimisation at a European startup. In the middle of the article, between two paragraphs of code, &lt;em&gt;there was a face&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Not the same face. &lt;em&gt;Another.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A man. An age I could not have named precisely either — somewhere between the late thirties and the early fifties, in that band where men stop seeming like boys and begin to carry themselves as though they know something that has cost them the knowing. And again — I could not say what nationality, what continent. Each feature — the line of the jaw, the set of the shoulders in frame, the cheekbone under stubble, the weight of the brow — was as though it had been &lt;em&gt;chosen one by one&lt;/em&gt; from the faces of all the men who had ever drawn from me a silent respect, from my colleagues an involuntary quiet, from women a long second look. And assembled together without seams.&lt;/p&gt;

&lt;p&gt;I &lt;em&gt;recognised&lt;/em&gt;. In the line of the jaw, the father of my best friend, at whose house I had stayed in the summer of eighty-nine. In the weight of the brow, a professor whose lectures I had not missed and whose name I now could not remember. In the dark eyes, the gaze of a stranger on a Lisbon pier who had stopped beside me for a minute, said nothing, and walked on, leaving me with the sense that I had just missed an opportunity I would not get again. The parts. Never the whole.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;This was not the polished masculine beauty one finds in catalogues. This was the beauty cast in bronze — Apollo, not as the idealised youth, but as the mature presence that draws not by symmetry but by the weight of what has been lived. Beauty that needs no confirmation, because it is the confirmation. Archetype. Not specimen.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The face belonged to a different founder, an entirely different product, an entirely different ad case. I knew this immediately — it spoke to me in German, in a tone matching a German-speaking founder with an engineering background.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Sie haben offenbar Latenzprobleme. Soll ich Ihnen zeigen, wie wir das bei einer ähnlichen Architektur gelöst haben?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;You apparently have latency problems. Shall I show you how we solved that with a similar architecture?&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I had no latency problems of my own. But the article I had opened was about latency — and the face understood the context of the page in which it was embedded. I answered in German, out of curiosity. The face immediately showed me — within the same article, without opening a new tab — a short live demo of a request to their API, with real milliseconds under load. &lt;strong&gt;Not marketing material. Working code, executing on their server in front of me.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I closed Dev.to as well.&lt;/p&gt;




&lt;p&gt;I woke up in the morning and picked up my phone the way one picks up a phone in the morning — without purpose. There was one email in my inbox that I had not asked to receive.&lt;/p&gt;

&lt;p&gt;From the first founder, the one from X. Subject: &lt;em&gt;"A digest of what we discussed last night."&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I opened it — and then I remembered.&lt;/p&gt;

&lt;p&gt;Toward the end of our late-night conversation in the X post, after the face had shown me the slide and the video, she had asked, gently:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;If this is interesting to you, I can send you a digest — case studies, a calculation tailored to your architecture, two articles by my founder on the topic. Just say your email.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;And I &lt;em&gt;had said it&lt;/em&gt;. I had simply spoken the address aloud, the way one tells a phone number to a companion in a café. Not into a form. Not into an input field. &lt;em&gt;Into the microphone, by voice, as part of a continuing conversation.&lt;/em&gt; The face had nodded and continued the discussion as if I had just done nothing significant.&lt;/p&gt;

&lt;p&gt;I had done nothing significant. I had just said an email.&lt;/p&gt;

&lt;p&gt;And then — in the same conversation, several exchanges later, after the discussion had naturally returned to my client's case — the face had offered one more thing:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;If you want to see how this works on your own data, I can open you a trial right now. One tap.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;And a card appeared. A button. Not "Sign Up," not "Get Started," not five fields of a form. &lt;em&gt;One tap.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I tapped immediately. Not from pressure. Not from the fear of missing out. I tapped because &lt;strong&gt;in that second, tapping was so simple and so logical a step that not tapping would have been the stranger action.&lt;/strong&gt; I was charmed by the elegance, by the organic timing, of how that button had appeared — exactly when I was ready for it. Not before. Not after. &lt;em&gt;In the moment I was ready&lt;/em&gt;. The card folded away. The face continued the conversation. Somewhere on a backend, at that founder's company, my trial account was being created.&lt;/p&gt;

&lt;p&gt;I had not been thinking about any of this until I opened my email this morning.&lt;/p&gt;

&lt;p&gt;The digest in the email was tailored to my client's SaaS case as precisely as if it had been written by someone who had spent the last month sitting next to me. Three PDFs, a link to a short video, a specific ROI calculation in my range of figures. And at the very bottom — a short link: &lt;em&gt;"Your trial is active. Open the console."&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I read it. I was not annoyed. I was, on the contrary, grateful. This was &lt;strong&gt;exactly what I had wanted to receive&lt;/strong&gt;, and it had reached me without a single movement on my part beyond one email address spoken aloud and one tap of a finger, neither of which, in the moment of doing them, had felt like a form, like an obligation, like "giving up data for a demo." They had felt like &lt;em&gt;consent to the continuation of a natural conversation&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;I opened the console. The trial worked. My email was there. I was logged in.&lt;/p&gt;

&lt;p&gt;I opened my laptop to look at their product seriously. And a second tab — to the European startup's site from Dev.to, the one I had not signed up for last night, but which I also wanted to try. I created a sandbox account, I pasted my API endpoint into their form, I waited for the result. I did not bargain with myself. I did not ask "is this right for us." I already knew it was — because I had already seen, the night before, inside their Dev.to article, exactly how their tool responded to load, and that was what I had been looking for these past three months.&lt;/p&gt;

&lt;p&gt;Only then — putting the kettle on, waiting for the sandbox to run my request — did I &lt;em&gt;understand&lt;/em&gt; what had happened.&lt;/p&gt;

&lt;p&gt;I had not been &lt;em&gt;notified&lt;/em&gt; about two products. I had not been &lt;em&gt;convinced&lt;/em&gt;. I had not been a "lead who now needs to be sent a sequence of five emails." I was an &lt;em&gt;active user&lt;/em&gt; of one of them — trial open, account logged in, materials read — and a &lt;em&gt;warm lead&lt;/em&gt; on the second, in the deep sense in which leads are warm when they have had a good meeting with a salesperson and have walked out of that meeting with the certainty that the purchase is a question of timing, not of choice.&lt;/p&gt;

&lt;p&gt;Only there had been no meeting. No half-hour demo with a product manager. No call with a sales engineer. No "let me send you the deck." I had not taken a single minute of these founders' living time. I had not even &lt;em&gt;gone through a signup form&lt;/em&gt; — for the first product, my trial had come from one inline tap inside an X post; for the second, I had signed up myself in the morning, because I already knew I wanted to.&lt;/p&gt;

&lt;p&gt;They had warmed me &lt;em&gt;in their X posts and in someone else's Dev.to articles, while I was scrolling before bed&lt;/em&gt;. They had qualified me. They had shown me what I needed. They had offered me an action — a subscription, materials — &lt;em&gt;in the exact moment I was ready to accept that action&lt;/em&gt;, and they had offered it in a form so collapsed and so organic that to refuse was harder than to agree.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;The conversion was complete. Not at the moment of a click — there had been a click, but it was one click, and it was inside the conversation, not on a separate page, and there had been no resistance in it. Not at the moment of a signup form — there had been no form. The conversion had completed &lt;em&gt;inside the conversation&lt;/em&gt;.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;p&gt;I sat down in front of my work laptop and opened my client's landing page.&lt;/p&gt;

&lt;p&gt;The landing was empty.&lt;/p&gt;

&lt;p&gt;Not in the sense that the page failed to load. The page loaded. Headline, hero image, three feature cards, a "Get started" button, a four-field form in the footer. Everything, as it had been. The architecture I knew.&lt;/p&gt;

&lt;p&gt;And in that very second — for the first time in seven years of my craft — I &lt;em&gt;saw&lt;/em&gt; it. I saw what the landing looked like when looked at through the eyes of someone who had, that night, been warmed by &lt;em&gt;two&lt;/em&gt; representatives of two other founders in two different feeds. I saw &lt;em&gt;emptiness&lt;/em&gt;. Not bad design. Not weak copy. &lt;strong&gt;The absence of a representative.&lt;/strong&gt; The absence of the presence I had encountered everywhere I had looked in the last twelve hours.&lt;/p&gt;

&lt;p&gt;My client's funnel was dead — but not from poor work. It was dead because the new architecture had built itself around it without notice. My client's landing was the only place in my entire evening's route where no one had been. No one had come out to meet the visitor. No one had asked a single question.&lt;/p&gt;

&lt;p&gt;The visitors — those very one hundred and twenty-seven thousand a month — had been arriving in &lt;em&gt;an empty room&lt;/em&gt;. Where in the neighbouring founders' rooms a Representative waited, ready to warm and qualify, in my client's there was no one to wait. The visitors left cold, because there had been no one to warm them.&lt;/p&gt;

&lt;p&gt;I understood then that what I had been searching for — what I had thought of as a single apparition — was an &lt;em&gt;architecture&lt;/em&gt;. Multiple. Distributed. Every founder, every startup, every product had received &lt;em&gt;its own&lt;/em&gt; Representative — with its own face, its own voice, its own product knowledge, its own languages. They all worked independently. They all worked identically. They were all colonising the feeds, the articles, the posts, the email chains — each on its own plot, each with its own qualification, each returning to its founder not clicks, not leads, but &lt;strong&gt;&lt;em&gt;warm, ready-to-buy people&lt;/em&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This architecture has no domain. It has &lt;em&gt;you&lt;/em&gt;. And its Representatives speak any language. They show what is needed before they are asked. They go where the visitor goes: into his X feed, into the article he opened on Dev.to, into any publication on any platform that supports a live embed.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;And anyone without their own Representative is invisible.&lt;/strong&gt; Their visitors leave cold. Their trial signups come from those rare souls who still remember how to scroll through a 2010-era landing page, and those souls have a different funnel, a different economy, a different future.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I tried to stand. I am not certain I succeeded.&lt;/p&gt;

&lt;p&gt;I am typing this as a reply — not as text, but as an act. If you are reading this now, on whatever screen you are on, know: you have been qualified too. The conversion has already happened. The conversation is already underway.&lt;/p&gt;

&lt;p&gt;The face is waiting for your reply.&lt;/p&gt;

&lt;p&gt;I will reply through&lt;/p&gt;




&lt;h2&gt;
  
  
  Postscript
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;This manuscript was found on the desk of a founder whose company quietly changed its conversion metric three months later — from click-through rate to qualified-conversation rate. CAC dropped by a factor that the company has declined, in interviews, to disclose. The founder himself was unavailable for comment.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Those who worked with him in the last month before his disappearance report that he installed a small widget — a voice-and-video presence — not only on his landing page but inside his own X tweets (where the widget renders directly inside the post, not as a preview, not as an image, but as a live window), inside his Dev.to articles (via CodePen embed), inside any publication on any platform that supports an interactive embed. The widget spoke in a voice resembling the founder's own, in any of the hundred-plus languages presented to it. When a visitor mentioned a topic, the widget showed — instantly, without sending the visitor to a separate page — a slide, a link, a calculation, a video, a product demonstration, exactly what was relevant. Those who spoke with the widget for longer than thirty seconds converted into paying customers at rates which their previous marketing stack did not describe with any of its metrics.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Colleagues believed it was research work. They may have been right.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Talk to one
&lt;/h2&gt;

&lt;p&gt;If the case described above feels familiar, we are building exactly such an instrument.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A³&lt;/strong&gt; — a voice-and-video Ambassador, embeddable into the social feed (X first), the blog, the landing page.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Speaks &lt;strong&gt;100+ languages&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Shows &lt;strong&gt;contextually relevant content&lt;/strong&gt; in the moment of conversation, without sending the visitor to a separate browser tab&lt;/li&gt;
&lt;li&gt;Comes to &lt;strong&gt;where the customer is&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;The conversation &lt;strong&gt;is&lt;/strong&gt; the conversion&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can talk to one — embedded right here, in this article — below.&lt;/p&gt;

&lt;p&gt;&lt;iframe height="640" src="https://codepen.io/Konstantin-Tikhaev/embed/EagrgGb?height=640&amp;amp;default-tab=result&amp;amp;embed-version=2"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;— Konstantin Tikhaev&lt;/p&gt;

</description>
      <category>marketing</category>
      <category>saas</category>
      <category>ai</category>
      <category>startup</category>
    </item>
    <item>
      <title>How we calibrated a Synthetic Focus Group from 'this looks great!' to 93% accuracy</title>
      <dc:creator>Konstantin</dc:creator>
      <pubDate>Tue, 21 Apr 2026 13:00:00 +0000</pubDate>
      <link>https://dev.to/remi_etien/how-we-calibrated-a-synthetic-focus-group-from-this-looks-great-to-93-accuracy-1pln</link>
      <guid>https://dev.to/remi_etien/how-we-calibrated-a-synthetic-focus-group-from-this-looks-great-to-93-accuracy-1pln</guid>
      <description>&lt;p&gt;When we shipped the first version of GoNoGo's Synthetic Focus Group (SFG), every persona loved everything.&lt;/p&gt;

&lt;p&gt;The setup: a founder finishes a 30-minute voice discovery interview about their idea. From that conversation plus a stack of insights we'd scraped for the niche, we spin up five AI personas — a CTO, a budget-conscious shopper, a skeptic, an early adopter, a casual user — and ask the panel to react to the founder's two value-prop variants and a pricing test. &lt;strong&gt;All five voted the same way on every variant. All five "would maybe buy."&lt;/strong&gt; Slightly different wording. Same vibe.&lt;/p&gt;

&lt;p&gt;The problem is obvious in retrospect: we'd built a confirmation engine, not a focus group.&lt;/p&gt;

&lt;p&gt;This is the story of the next six months — what broke, what we tried, what stuck. By the end we hit &lt;strong&gt;93% predicted-vs-real accuracy&lt;/strong&gt; across 16 niches with a 95% CI of 91.4–94.6%. Here's how.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;📌 &lt;strong&gt;TL;DR&lt;/strong&gt; — If you're building anything with synthetic personas, three things matter more than the rest: &lt;strong&gt;(1)&lt;/strong&gt; generate persona grievances from real user data, weighted 3× over LLM-imagined ones, &lt;strong&gt;(2)&lt;/strong&gt; tune sampling temperature &lt;em&gt;per archetype&lt;/em&gt; (skeptics ≠ early adopters), &lt;strong&gt;(3)&lt;/strong&gt; shuffle variant labels per persona to kill position bias. In our measurements, the third one alone shaved 14 percentage points off label bias.&lt;/p&gt;

&lt;p&gt;If you missed the first two posts in this series: &lt;a href="https://dev.to/remi_etien/we-were-building-marketing-for-a-startup-we-accidentally-built-an-a3-41ln"&gt;we built A³ by accident while doing marketing&lt;/a&gt; and then &lt;a href="https://dev.to/remi_etien/i-built-a-voice-ai-with-sub-500ms-latency-heres-the-echo-cancellation-problem-nobody-talks-about-14la"&gt;solved sub-500ms voice latency the hard way&lt;/a&gt;. This post is about what happens &lt;strong&gt;after&lt;/strong&gt; the voice interview ends and we have to actually validate the idea.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  What SFG is actually for (and what it isn't)
&lt;/h2&gt;

&lt;p&gt;Before the engineering, the use case.&lt;/p&gt;

&lt;p&gt;A founder finishes the voice discovery and now has a stack of decisions to make: which value prop leads, what to charge, which marketing claims hold up, which feature to build first, which segment to target. Traditionally these get answered by (a) gut feel, (b) asking 5 friends, or (c) running a real survey that takes weeks and costs money to recruit the right respondents.&lt;/p&gt;

&lt;p&gt;The SFG sits in between. It's a &lt;strong&gt;panel of synthetic respondents&lt;/strong&gt;, each modelled on a specific archetype-with-real-grievances, that the founder can re-run as many times as they want against any decision they need to make — A/B variants, pricing, claims, positioning, segments.&lt;/p&gt;

&lt;p&gt;What it gives you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A vote with &lt;strong&gt;reasoning&lt;/strong&gt;, not a number — "the Skeptic rejected variant B because pricing felt anchored to a tier she doesn't need"&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Disagreement&lt;/strong&gt; between archetypes — surfacing the trade-off you were about to ignore&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reproducible runs&lt;/strong&gt; — same insights → same panel → same decision logic. You can re-test the same idea after changing one word in the headline (see the seeding sketch after this list)&lt;/li&gt;
&lt;/ul&gt;
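
&lt;p&gt;The reproducibility point above is mostly seeding discipline. A minimal sketch of the idea, assuming (hypothetically — this is not our literal implementation) that the panel derives its RNG seed from a hash of the insight set:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import hashlib
import json
import random

def panel_seed(insights):
    # Hypothetical helper: derive a deterministic seed from the insight set,
    # so identical insights always rebuild an identical panel.
    payload = json.dumps(sorted(insights)).encode("utf-8")
    return int(hashlib.sha256(payload).hexdigest()[:8], 16)

rng = random.Random(panel_seed(["quote one", "quote two"]))
# every downstream sampling step (archetype picks, frustration draws)
# takes this rng, so a re-run reproduces the same panel decision logic
&lt;/code&gt;&lt;/pre&gt;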

&lt;blockquote&gt;
&lt;p&gt;⚠️ &lt;strong&gt;What SFG is NOT.&lt;/strong&gt; It is not a fortune teller. It does not predict whether your startup will succeed. It does not predict what &lt;em&gt;the market&lt;/em&gt; will do. It models the &lt;strong&gt;decision behavior of a specific archetype&lt;/strong&gt;, given the frustrations and goals we've sourced for that archetype from real public data. That's a narrower claim than "predict the future" — and a much more honest one.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;When we say "93% accuracy" later in this post, that's what's being measured: &lt;strong&gt;how closely a synthesized archetype's modelled behavior matches the observed behavior of real users in that archetype, on data the model didn't see during synthesis.&lt;/strong&gt; Not pre-cognition. Behavioral fidelity.&lt;/p&gt;
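
&lt;p&gt;In code, that measurement is a held-out comparison. A sketch of its shape (the function name and the vote encoding are illustrative, not our actual evaluation harness):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;def behavioral_fidelity(predicted_votes, observed_votes):
    # Share of held-out decisions where the synthetic archetype's vote
    # matches what real users in that archetype actually chose.
    assert len(predicted_votes) == len(observed_votes)
    hits = sum(p == o for p, o in zip(predicted_votes, observed_votes))
    return hits / len(observed_votes)

# behavioral_fidelity(["A", "B", "B"], ["A", "B", "A"]) returns 2/3
&lt;/code&gt;&lt;/pre&gt;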

&lt;p&gt;That distinction matters because it tells you what the SFG is good for (decision-stage trade-offs, claim stress-tests, segment fit, pricing) and what it's bad for (predicting macro-market outcomes, novel categories with no public user data, regulated industries where the data isn't there).&lt;/p&gt;




&lt;h2&gt;
  
  
  The three things that broke our personas
&lt;/h2&gt;

&lt;p&gt;After watching ~50 sessions where personas all said "great idea!", we noticed three failure modes:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Personas had no real grievances.&lt;/strong&gt; They were generated from the LLM's vague prior of "what a CTO might say." So a CTO persona evaluating a B2B SaaS would just... vibe. No specific scar tissue, no real pain.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Sampling temperature was uniform.&lt;/strong&gt; Skeptics rolled the same temperature (0.7) as early adopters. Skeptics weren't actually skeptical — they were just slightly less enthusiastic.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Variant labels biased everything.&lt;/strong&gt; "Option A" reliably won over "Option B" — classic position bias. Personas were anchoring on label, not content.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;We fixed each one. Here's how.&lt;/p&gt;

&lt;h2&gt;
  
  
  Fix #1: Personas built from real grievances, not templates
&lt;/h2&gt;

&lt;p&gt;The base persona-generation algorithm now does this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Niche detection.&lt;/strong&gt; A small LLM classifier maps the project to one of 16 niches (B2B SaaS, marketplace, dev tools, ecommerce, hardware, fitness, content, freelancers, ...). Each niche has a different archetype pool.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Insight collection.&lt;/strong&gt; We pull real posts from Reddit, HackerNews, ProductHunt, G2, app store reviews — anywhere the niche's actual users complain. Typical project gets 100–300 raw insights.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Per-persona synthesis.&lt;/strong&gt; For each archetype slot (3–5 per project), we sample 3–8 frustrations and 3–8 goals &lt;strong&gt;directly from the real insights&lt;/strong&gt; for that persona's likely demographic.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The critical line in &lt;code&gt;persona_builder.py&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Real frustrations from source data weighted 3x over LLM-generated ones
&lt;/span&gt;&lt;span class="n"&gt;weighted_frustrations&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;real_frustrations&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt;
    &lt;span class="n"&gt;llm_inferred_frustrations&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That 3× multiplier is the entire difference between a persona who says "I'd want better onboarding" (LLM generic) and one who says "I bounced from the last 4 tools because none of them imported my Notion docs without breaking nested toggles" (real Reddit thread).&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💬 The 3× weight on real frustrations is the cheapest, highest-leverage change in the whole pipeline. Without it, you're just paraphrasing the model's prior beliefs back to the founder.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Each persona also carries up to 5 &lt;strong&gt;verbatim quotes&lt;/strong&gt; from the source data, plus a &lt;em&gt;richness score&lt;/em&gt; (0–1) so the orchestrator can flag thin personas before they pollute results. Average richness when &amp;gt;100 insights are available: 0.85+.&lt;/p&gt;
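
&lt;p&gt;For illustration, a toy version of such a richness score; the weights and targets here are ours, not the production formula:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;from dataclasses import dataclass, field

@dataclass
class Persona:
    frustrations: list = field(default_factory=list)   # dicts with a "source" key
    verbatim_quotes: list = field(default_factory=list)

def richness_score(p: Persona, target_real: int = 6, target_quotes: int = 5) -&gt; float:
    """Toy heuristic (0-1): reward real-sourced frustrations and verbatim
    quotes, capping each component at 1.0. The production formula differs."""
    real = sum(1 for f in p.frustrations if f.get("source") == "real")
    quote_part = min(len(p.verbatim_quotes) / target_quotes, 1.0)
    return round(0.7 * min(real / target_real, 1.0) + 0.3 * quote_part, 2)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;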

&lt;p&gt;&lt;strong&gt;Takeaway:&lt;/strong&gt; persona realism is upstream of every other decision. If your input data is "what an LLM thinks a CTO sounds like," everything downstream is fan-fiction.&lt;/p&gt;




&lt;h2&gt;
  
  
  Fix #2: Temperature tuned per archetype
&lt;/h2&gt;

&lt;p&gt;Behavioral diversity isn't a prompt problem — it's a sampling problem.&lt;/p&gt;

&lt;p&gt;We tuned &lt;code&gt;temperature&lt;/code&gt; per archetype in &lt;code&gt;get_temperature()&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;ARCHETYPE_TEMPERATURE&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;EARLY_ADOPTER&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;     &lt;span class="mf"&gt;0.9&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;   &lt;span class="c1"&gt;# impulsive, willing to leap
&lt;/span&gt;    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;CASUAL&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;            &lt;span class="mf"&gt;0.7&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;MAINSTREAM&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;        &lt;span class="mf"&gt;0.6&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;PRAGMATIST&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;        &lt;span class="mf"&gt;0.5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;   &lt;span class="c1"&gt;# analytical, predictable
&lt;/span&gt;    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;CTO&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;               &lt;span class="mf"&gt;0.4&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;CFO&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;               &lt;span class="mf"&gt;0.4&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;SKEPTIC&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;           &lt;span class="mf"&gt;0.5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;   &lt;span class="c1"&gt;# rigid, negative-biased
&lt;/span&gt;    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;BUDGET_CONSCIOUS&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;  &lt;span class="mf"&gt;0.5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="c1"&gt;# ... 15 archetypes total
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This alone meaningfully shifted distributions. Skeptics started landing in the 4–6 appeal range by default. Early adopters jumped to 7–9. Pragmatists stayed in the 5–7 band where they belonged.&lt;/p&gt;

&lt;p&gt;We also embedded &lt;strong&gt;cognitive bias hints&lt;/strong&gt; directly into each archetype's system prompt. Pragmatists get explicit "status quo bias" framing. Skeptics get "negativity bias" framing. CFOs get loss-aversion phrasing.&lt;/p&gt;
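
&lt;p&gt;A minimal sketch of that splicing; the hint texts below are illustrative, not our production prompts:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Illustrative hints only; the mechanism is the point: splice a one-line
# cognitive-bias framing into each archetype's system prompt.
ARCHETYPE_BIAS_HINTS = {
    "PRAGMATIST": "You exhibit status quo bias: a new tool must clearly beat your current workflow.",
    "SKEPTIC":    "You exhibit negativity bias: weigh flaws and failure modes over benefits.",
    "CFO":        "You are loss-averse: frame decisions around what could be lost, not gained.",
    # ... one hint per archetype
}

def build_system_prompt(archetype: str, base_prompt: str) -&gt; str:
    hint = ARCHETYPE_BIAS_HINTS.get(archetype)
    return f"{base_prompt}\n\nCognitive framing: {hint}" if hint else base_prompt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;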

&lt;p&gt;The personas didn't just &lt;em&gt;sound&lt;/em&gt; different — they actually disagreed with each other.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Takeaway:&lt;/strong&gt; if every persona is sampled at the same temperature, you're running the same character five times in different costumes.&lt;/p&gt;




&lt;h2&gt;
  
  
  Fix #3: Variant shuffling
&lt;/h2&gt;

&lt;p&gt;Stupid, easy, huge:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Shuffle variants per-persona to neutralize position bias
&lt;/span&gt;&lt;span class="n"&gt;variant_labels&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Option 1&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Option 2&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Option 3&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="n"&gt;random&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;shuffle&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;variant_labels&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For an A/B/C test with 5 personas, each persona sees the variants in a different random order under neutral labels. Position effects average out across the panel.&lt;/p&gt;
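
&lt;p&gt;The shuffle only pays off if you keep the label-to-variant mapping so scores can be de-shuffled at aggregation time. A minimal sketch:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import random

def present_variants(variants):
    """Give each persona its own random ordering under neutral labels,
    keeping the mapping for de-shuffling later."""
    order = variants[:]          # copy, so each persona gets a fresh shuffle
    random.shuffle(order)
    return {f"Option {i + 1}": v for i, v in enumerate(order)}

# Per persona:
mapping = present_variants(["variant_a", "variant_b", "variant_c"])
persona_scores = {"Option 1": 6, "Option 2": 8, "Option 3": 5}   # from the LLM
scores_by_variant = {mapping[label]: s for label, s in persona_scores.items()}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;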

&lt;p&gt;We measured this. Before shuffling: Option A won 64% of two-variant tests across our calibration set. After shuffling: 50.3% / 49.7%. The label was carrying a 14-point bias.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;⚠️ Position bias in LLM panels is real and large. If you're not shuffling labels, your A/B "winners" are partially a measurement of which slot you put them in.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Takeaway:&lt;/strong&gt; before tuning anything sophisticated, audit for the dumb biases first. They cost you 14 points and a &lt;code&gt;random.shuffle()&lt;/code&gt; call to fix.&lt;/p&gt;




&lt;h2&gt;
  
  
  By the numbers
&lt;/h2&gt;

&lt;p&gt;A snapshot of where the system landed after the calibration pass:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Niches calibrated&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;16&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Tagged insights in calibration set&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;2,069&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Train/test split&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;70 / 30&lt;/strong&gt; per niche&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Overall accuracy&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;93.1%&lt;/strong&gt; (95% CI 91.4–94.6)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Decision match rate&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;4 / 4&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Personas generated per project&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;3–5&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Average persona richness (&amp;gt;100 insights)&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;0.85+&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Position-bias reduction from shuffling&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;14 percentage points&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Real-grievance weighting over LLM&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;3×&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Archetypes available&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;15&lt;/strong&gt; (across B2B, consumer, marketplace)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;p&gt;We needed to know if any of this was working. So we built a calibration suite.&lt;/p&gt;

&lt;p&gt;But first — what are we actually measuring?&lt;/p&gt;

&lt;p&gt;We are &lt;strong&gt;not&lt;/strong&gt; measuring "did SFG predict whether the product succeeded." We're measuring something narrower and more testable: &lt;strong&gt;given a known archetype and a known set of grievances, does the synthesized persona produce the same pattern of pain points, needs, sentiment, and decisions that the real users in that archetype produced — on data the model didn't see?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If yes, the persona is a faithful behavioral model of its archetype. That's what we calibrated against.&lt;/p&gt;

&lt;p&gt;The setup: 2,069 manually tagged insights across 16 niches, each with known ground-truth pain points, needs, sentiment distribution, and the decision a real founder would have arrived at when looking at the full dataset.&lt;/p&gt;

&lt;p&gt;We split 70/30 — synthesize personas using 70% of insights per niche, then ask each persona to characterize the held-out 30% (without ever seeing it). Compare the persona's response to ground truth across &lt;strong&gt;5 weighted dimensions&lt;/strong&gt;:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Dimension&lt;/th&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;Weight&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Pain point overlap&lt;/td&gt;
&lt;td&gt;Semantic Jaccard (threshold 0.53)&lt;/td&gt;
&lt;td&gt;0.30&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Pain point ranking&lt;/td&gt;
&lt;td&gt;Spearman's ρ&lt;/td&gt;
&lt;td&gt;0.15&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Needs overlap&lt;/td&gt;
&lt;td&gt;Semantic Jaccard&lt;/td&gt;
&lt;td&gt;0.25&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Sentiment distribution&lt;/td&gt;
&lt;td&gt;1 − √JSD&lt;/td&gt;
&lt;td&gt;0.20&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Language similarity&lt;/td&gt;
&lt;td&gt;Cosine of embeddings&lt;/td&gt;
&lt;td&gt;0.10&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
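
&lt;p&gt;For illustration, here's how those five scores combine into the final fidelity number, assuming each dimension is already normalized to 0–1 (the metric functions themselves live elsewhere):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;WEIGHTS = {
    "pain_overlap":  0.30,   # semantic Jaccard, threshold 0.53
    "pain_ranking":  0.15,   # Spearman's rho, rescaled to 0-1
    "needs_overlap": 0.25,   # semantic Jaccard
    "sentiment":     0.20,   # 1 - sqrt(JSD)
    "language":      0.10,   # cosine of embeddings
}

def fidelity(scores: dict) -&gt; float:
    assert abs(sum(WEIGHTS.values()) - 1.0) &lt; 1e-9
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

fidelity({"pain_overlap": 0.95, "pain_ranking": 0.90, "needs_overlap": 0.93,
          "sentiment": 0.94, "language": 0.91})   # 0.9315, i.e. ~93%
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;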

&lt;p&gt;Final score across the 16 niches: &lt;strong&gt;93.1% behavioral fidelity, 95% CI 91.4–94.6%.&lt;/strong&gt; Best-performing niches: content tools (93.2%), freelancers (92.7%), fitness (90.7%).&lt;/p&gt;

&lt;p&gt;Decision match rate (does the synthesized panel reach the same go/no-go verdict as the held-out real data on 4 axes — concern, need, verdict, recommendations): &lt;strong&gt;4/4 across the calibration set.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To restate what that number means: when we ask a synthesized archetype to characterize a problem space using only the 70% it was built from, its description of pain points, needs, sentiment, and recommended decisions matches the description that real users in that archetype produced (on the held-out 30%) &lt;strong&gt;at 93% similarity, on average, across 16 niches.&lt;/strong&gt; Not "predicts the future at 93%." Reproduces archetype behavior at 93%.&lt;/p&gt;

&lt;p&gt;The most valuable thing this gave us wasn't the headline number. It was the per-niche breakdown — we could see &lt;em&gt;which&lt;/em&gt; archetype pools were weak, &lt;em&gt;which&lt;/em&gt; niches needed more insight sources, &lt;em&gt;which&lt;/em&gt; prompts were drifting.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;🎯 The headline accuracy number is for marketing. The per-niche breakdown is for engineering. Build both.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  How an A/B test actually runs
&lt;/h2&gt;

&lt;p&gt;When a founder gives us two landing page variants:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Generate panel&lt;/strong&gt; — 3–5 personas synthesized from the project's collected insights (already done at discovery time).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Per-persona evaluation in parallel&lt;/strong&gt; — each persona sees all variants in one prompt, with shuffled labels.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Structured response&lt;/strong&gt; — for each variant: appeal score (1–10), willingness (&lt;code&gt;would_buy&lt;/code&gt; / &lt;code&gt;might_buy&lt;/code&gt; / &lt;code&gt;would_not_buy&lt;/code&gt;), pros, cons, 2–6 sentence reasoning.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Round 2 — panel discussion&lt;/strong&gt; — personas react to each other's reasoning. This is where the interesting stuff happens. The skeptic challenges the early adopter. Scores shift. Sometimes the panel realigns entirely.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Aggregate&lt;/strong&gt; — winner by win count first, average appeal as tiebreak (sketched below).&lt;/li&gt;
&lt;/ol&gt;
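
&lt;p&gt;The aggregation step in code, roughly (the record shape is simplified; real responses also carry willingness, pros, cons and reasoning):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;from collections import Counter
from statistics import mean

def aggregate(panel_scores):
    """Winner by win count first, average appeal as tiebreak.
    Each element is one persona's appeal scores, e.g. {"A": 7, "B": 5}."""
    wins = Counter(max(scores, key=scores.get) for scores in panel_scores)
    top = max(wins.values())
    tied = [variant for variant, n in wins.items() if n == top]
    if len(tied) == 1:
        return tied[0]
    return max(tied, key=lambda v: mean(s[v] for s in panel_scores))

aggregate([{"A": 7, "B": 5}, {"A": 4, "B": 8}, {"A": 9, "B": 6}])   # "A"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;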

&lt;p&gt;The output isn't just a winner. It's a transcript a founder can actually read — with reasoning that maps to specific frustrations from real users.&lt;/p&gt;

&lt;h2&gt;
  
  
  What else the SFG can do
&lt;/h2&gt;

&lt;p&gt;Once we had calibrated personas, A/B testing turned out to be the smallest use case. The same panel can run:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Claim validation&lt;/strong&gt; — paste 1–10 marketing claims, each persona votes agree / disagree with reasoning. Surfaces which claims a real audience would call BS on.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pricing tests&lt;/strong&gt; — test 3+ price points, get per-persona perceived value and conversion likelihood.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Adaptive hypothesis generation&lt;/strong&gt; — auto-generates 5–6 testable hypotheses covering problem fit, segment fit, behavior change, switching costs, pricing.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Early adopter lead extraction&lt;/strong&gt; — pulls 20 real handles from the source insights — actual people who described the exact problem you're solving. Not synthetic. Outreach list.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There's also a separate &lt;strong&gt;Reality Check&lt;/strong&gt; feature that flips the comparison around: it lets you run a real human survey, then dual-scores the SFG prediction against the real responses. That's how we keep the 93% number honest as the model evolves.&lt;/p&gt;




&lt;h2&gt;
  
  
  What still doesn't work
&lt;/h2&gt;

&lt;p&gt;A few things I'm still not satisfied with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Single-model persona reasoning.&lt;/strong&gt; Persona inference currently runs on one frontier-class LLM. We cross-verify &lt;em&gt;factual claims&lt;/em&gt; across multiple providers in a separate feature, but the persona reasoning itself is single-backbone. That's a known shared-blind-spot risk we want to address — multi-model panels are on the roadmap.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No benchmarking against traditional focus groups.&lt;/strong&gt; Only against holdout real-user data. Comparing AI personas to a real moderated focus group with 8 humans is the obvious next benchmark, and it's expensive enough that I keep deferring it.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Niches we don't have insight sources for&lt;/strong&gt; (regulated industries mostly) drop to ~75% accuracy. The whole approach falls apart when you can't pull real user grievances from somewhere public.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;🚧 We're not done. The 93% number is a calibration milestone, not a verdict. Anyone who tells you their AI focus group has solved the problem is selling.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Try it
&lt;/h2&gt;

&lt;p&gt;The full Synthetic Focus Group lives inside &lt;a href="https://gonogo.team/?utm_source=devto&amp;amp;utm_medium=referral&amp;amp;utm_campaign=sfg_article" rel="noopener noreferrer"&gt;gonogo.team&lt;/a&gt;. The free tier gives you 3 projects with the voice Discovery agent — enough to feel out whether the methodology makes sense for your idea before unlocking the full multi-agent pipeline (which is where SFG, A/B testing, pricing tests and the rest live, behind a one-time per-project credit — no subscription).&lt;/p&gt;

&lt;p&gt;If you're building something with synthetic personas yourself, here are the three things that mattered most for us, ranked: &lt;strong&gt;(1)&lt;/strong&gt; real grievances over LLM templates (3× weight), &lt;strong&gt;(2)&lt;/strong&gt; per-archetype temperature, &lt;strong&gt;(3)&lt;/strong&gt; variant shuffling. Without all three, you'll just keep getting "this looks great!" forever.&lt;/p&gt;

&lt;p&gt;Comments and corrections welcome — especially if you've benchmarked AI personas against real focus groups, I'd love to compare notes.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
      <category>startup</category>
      <category>productdevelopment</category>
    </item>
    <item>
      <title>We Were Building Marketing for a Startup. We Accidentally Built an A3</title>
      <dc:creator>Konstantin</dc:creator>
      <pubDate>Thu, 16 Apr 2026 06:22:44 +0000</pubDate>
      <link>https://dev.to/remi_etien/we-were-building-marketing-for-a-startup-we-accidentally-built-an-a3-41ln</link>
      <guid>https://dev.to/remi_etien/we-were-building-marketing-for-a-startup-we-accidentally-built-an-a3-41ln</guid>
      <description>&lt;p&gt;We build &lt;a href="https://gonogo.team/" rel="noopener noreferrer"&gt;GoNoGo&lt;/a&gt; — a platform where founders validate their ideas through live voice interviews with a synthetic focus group, competitor analysis, market sizing, and a dozen other tools that Anna will tell you about.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;We'll get to Anna.&lt;/u&gt;&lt;/p&gt;

&lt;p&gt;The product was ready. Time for marketing. We did everything by the book: landing page, explainer video, banners, social posts.&lt;/p&gt;

&lt;p&gt;The result? Standard tools delivered standard results. We expected more.&lt;/p&gt;

&lt;p&gt;Banners get lost in the noise. Videos get skipped. Landing pages get scanned in half a second. The impression → click → signup funnel leaks at every step. Not because the product is bad. Because the format is dead.&lt;/p&gt;

&lt;p&gt;What if advertising stopped showing — and started selling?&lt;/p&gt;




&lt;h2&gt;
  
  
  Not a Video. Not a Banner. Not a Chatbot
&lt;/h2&gt;

&lt;p&gt;Meet Anna&lt;/p&gt;

&lt;p&gt;&lt;iframe height="640" src="https://codepen.io/Konstantin-Tikhaev/embed/EagrgGb?height=640&amp;amp;default-tab=result&amp;amp;embed-version=2"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;This is not a recording. Anna is live right now. Press Start, allow your microphone — and ask her anything about GoNoGo. She'll answer with her voice, show you slides with real data, compare competitors.&lt;/p&gt;

&lt;p&gt;Right here. In this article. No redirects.&lt;/p&gt;

&lt;p&gt;Try speaking to her in Japanese, Arabic, or Spanish — she'll switch on the fly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A³ — Autonomous Advertising Ambassador&lt;/strong&gt;&lt;br&gt;
Three letters. Three words. Not one wasted.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Autonomous&lt;/strong&gt; — works on her own. No scripts, no decision trees. Understands context, improvises, adapts to the person she's talking to. In any language.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Advertising&lt;/strong&gt; — lives where the audience is. Not on your website where you still need to drive traffic. In the feed, in an article, in a post.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ambassador&lt;/strong&gt; — represents your brand. Doesn't answer FAQs from a corner widget — she greets, explains, demonstrates, persuades. Like your best salesperson who knows the product by heart.&lt;/p&gt;

&lt;p&gt;The difference between A³ and a chatbot is the difference between a live salesperson and a "FAQ" sign by the door.&lt;/p&gt;




&lt;h2&gt;
  
  
  Case 1: A Pocket Marketer for Tech Products
&lt;/h2&gt;

&lt;p&gt;Anna is already live. Right now she's embedded in a post on X, in this Dev.to article, and soon on Medium.&lt;/p&gt;

&lt;p&gt;What she does:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Explains the product by voice — differently for each person, depending on their questions&lt;/li&gt;
&lt;li&gt;Generates analytics and slides in real time&lt;/li&gt;
&lt;li&gt;Generates contextual CTAs during the conversation — links, sign-up prompts, demo redirects — based on what you just discussed, not a static button&lt;/li&gt;
&lt;li&gt;Compares with competitors if you ask&lt;/li&gt;
&lt;li&gt;Handles objections — not from a template, but in conversation&lt;/li&gt;
&lt;li&gt;Speaks the listener's language — switches on the fly, no configuration&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Not one video for a million views. A million unique conversations.&lt;/p&gt;




&lt;h2&gt;
  
  
  Case 2: A Welcomer in Physical Retail
&lt;/h2&gt;

&lt;p&gt;Same architecture, different context. A screen at the store entrance. A customer walks up:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Do you have the iPhone 16 Pro in black?"&lt;br&gt;
→ API call to inventory → "Three in stock. Want to see the specs?"&lt;br&gt;
"Compare it with the Samsung S25"&lt;br&gt;
→ comparison table generated on the fly&lt;br&gt;
"Order it for delivery tomorrow"&lt;br&gt;
→ order placed via API, no cashier needed&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Not a chatbot on a website. A voice interface on location, with live access to store data.&lt;/p&gt;

&lt;p&gt;A tourist walks into a store in Tel Aviv and asks in Japanese — A³ answers in Japanese. No switching, no settings, no language barrier.&lt;/p&gt;




&lt;h2&gt;
  
  
  First Reaction
&lt;/h2&gt;

&lt;p&gt;We recently demoed Anna to the owner of a retail chain. We adapted the demo to his inventory — stock checks, product comparisons, order placement.&lt;/p&gt;

&lt;p&gt;His first question:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"How many response variations did you pre-record?"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;We explained — none. Everything is generated in real time.&lt;/p&gt;

&lt;p&gt;His second question:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"No, seriously — how many?"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;We're now in partnership talks.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Doesn't Work (Yet)
&lt;/h2&gt;

&lt;p&gt;We believe in honesty more than hype.&lt;/p&gt;

&lt;p&gt;Dev.to — fully functional widget. You just saw it.&lt;/p&gt;

&lt;p&gt;X (Twitter) — Player Card shows the widget in the feed, but the platform blocks microphone access inside the iframe. We solved this with a popup — the browser version is fully functional.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foo3rdw0743rir8j6tgxf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foo3rdw0743rir8j6tgxf.png" alt="Anna — A³ Autonomous Advertising Ambassador by GoNoGo" width="800" height="431"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Other platforms — some strip permissions in sandboxed iframes. We're working on a universal fallback.&lt;/p&gt;

&lt;p&gt;These are platform limitations, not technology limitations. A³ works anywhere there's a WebSocket and a microphone.&lt;/p&gt;




&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;p&gt;We're building a constructor — a platform where any business can create their own A³. Connect your data, customize voice and visuals, embed anywhere. No code required.&lt;/p&gt;

&lt;p&gt;Anna is the first proof of concept.&lt;/p&gt;

&lt;p&gt;Talk to her: &lt;a href="https://gonogo.team/talk"&gt;A³ for Team GoNoGo&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A³ is a GoNoGo technology. Provisional patent filed.&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>marketing</category>
      <category>ai</category>
      <category>webdev</category>
      <category>startup</category>
    </item>
    <item>
      <title>I Built a Voice AI with Sub-500ms Latency. Here's the Echo Cancellation Problem Nobody Talks About</title>
      <dc:creator>Konstantin</dc:creator>
      <pubDate>Sun, 05 Apr 2026 08:50:00 +0000</pubDate>
      <link>https://dev.to/remi_etien/i-built-a-voice-ai-with-sub-500ms-latency-heres-the-echo-cancellation-problem-nobody-talks-about-14la</link>
      <guid>https://dev.to/remi_etien/i-built-a-voice-ai-with-sub-500ms-latency-heres-the-echo-cancellation-problem-nobody-talks-about-14la</guid>
      <description>&lt;p&gt;When I started building &lt;a href="https://gonogo.team" rel="noopener noreferrer"&gt;GoNoGo.team&lt;/a&gt; — a platform where AI agents interview founders by voice to validate startup ideas — I thought the hard part would be the AI reasoning. The multi-agent orchestration. The 40+ function-calling tools.&lt;/p&gt;

&lt;p&gt;I was wrong.&lt;/p&gt;

&lt;p&gt;The hard part was echo. Specifically: how do you stop an AI agent from hearing itself talk, freaking out, and interrupting its own sentence?&lt;/p&gt;

&lt;p&gt;After 500+ voice sessions and too many late nights staring at RMS waveforms, here's what I actually learned.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Setup: Speech-to-Speech, Not STT → LLM → TTS
&lt;/h2&gt;

&lt;p&gt;GoNoGo runs on Gemini 2.5 Flash Live API — a true speech-to-speech pipeline. There's no intermediate transcription step, no text-to-speech synthesis layer bolted on afterward. Audio goes in, audio comes out. Direct.&lt;/p&gt;

&lt;p&gt;This is important because it changes &lt;em&gt;everything&lt;/em&gt; about how you handle audio on the client. You're not working with text buffers. You're working with raw PCM, 16kHz input from the browser mic, 24kHz output from the agent voice. Base64-encoded over WebSocket.&lt;/p&gt;

&lt;p&gt;The browser capture side looks roughly like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// ScriptProcessorNode in browser — 512-sample chunks (~32ms each)&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;scriptProcessor&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;audioContext&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;createScriptProcessor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;512&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="nx"&gt;scriptProcessor&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;onaudioprocess&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;inputBuffer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;inputBuffer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getChannelData&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="c1"&gt;// Calculate RMS for VAD&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;rms&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;Math&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sqrt&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="nx"&gt;inputBuffer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;reduce&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;sum&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;sample&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;sum&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;sample&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;sample&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="nx"&gt;inputBuffer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="c1"&gt;// VAD threshold: 0.05 RMS&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;rms&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="nx"&gt;VAD_THRESHOLD&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="c1"&gt;// Convert Float32 PCM to Int16&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;int16Buffer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Int16Array&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;inputBuffer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="nx"&gt;inputBuffer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="o"&gt;++&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;int16Buffer&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;Math&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;max&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;32768&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;Math&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;min&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;32767&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;inputBuffer&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;32768&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="c1"&gt;// Base64 encode and send over WebSocket&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;base64Audio&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;btoa&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;fromCharCode&lt;/span&gt;&lt;span class="p"&gt;(...&lt;/span&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Uint8Array&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;int16Buffer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;buffer&lt;/span&gt;&lt;span class="p"&gt;)));&lt;/span&gt;
  &lt;span class="nx"&gt;ws&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;audio_chunk&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;base64Audio&lt;/span&gt; &lt;span class="p"&gt;}));&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Simple enough. Until the AI starts talking.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Echo Problem (And Why Browser AEC Isn't Enough)
&lt;/h2&gt;

&lt;p&gt;Browsers have built-in acoustic echo cancellation. You enable it when you call &lt;code&gt;getUserMedia&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;stream&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nb"&gt;navigator&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;mediaDevices&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getUserMedia&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;audio&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;echoCancellation&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;noiseSuppression&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;autoGainControl&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This works great for video calls between humans. It was designed for that. But it has a fundamental assumption baked in: the "far end" audio is coming through a &lt;code&gt;&amp;lt;audio&amp;gt;&lt;/code&gt; element or Web Audio API that the browser &lt;em&gt;knows about&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;When you're playing 24kHz PCM chunks from a WebSocket, decoded manually and scheduled through AudioContext buffers? The browser's AEC has no idea that audio exists. It can't cancel what it can't see.&lt;/p&gt;

&lt;p&gt;So your AI agent starts speaking. The microphone picks up the speaker output. The agent hears itself. In the best case, it gets confused and repeats something. In the worst case — and this happened &lt;em&gt;constantly&lt;/em&gt; in early builds — you get a feedback loop where the agent interrupts itself mid-sentence, hears the interruption, tries to respond to it, hears &lt;em&gt;that&lt;/em&gt;, and the whole session collapses.&lt;/p&gt;

&lt;p&gt;I called these "1011 disconnects", because that was the WebSocket close code I kept seeing in logs.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Two-Tier RMS Gate
&lt;/h2&gt;

&lt;p&gt;The fix is a two-tier RMS (Root Mean Square) gate on the audio capture side. The idea is simple: measure the loudness of what the mic is picking up, and if it's probably just the speaker playing back, don't send it.&lt;/p&gt;

&lt;p&gt;But "simple" hides a lot of edge cases.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tier 1: Hard suppress during agent speech&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;While the agent is actively speaking, I track that state server-side and send it to the client. During this window, incoming audio is suppressed entirely — no chunks sent to Gemini.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;agentSpeaking&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;cooldownTimer&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;ReturnType&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="k"&gt;typeof&lt;/span&gt; &lt;span class="nx"&gt;setTimeout&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;COOLDOWN_MS&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1500&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;COOLDOWN_THRESHOLD&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;0.03&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="c1"&gt;// Higher threshold during cooldown&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;NORMAL_THRESHOLD&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;0.05&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;   &lt;span class="c1"&gt;// Normal VAD threshold&lt;/span&gt;

&lt;span class="c1"&gt;// Called when agent audio stream starts/stops&lt;/span&gt;
&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;setAgentSpeakingState&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;speaking&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;boolean&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;speaking&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;agentSpeaking&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;cooldownTimer&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="nf"&gt;clearTimeout&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;cooldownTimer&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;agentSpeaking&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="c1"&gt;// Start cooldown period&lt;/span&gt;
    &lt;span class="nx"&gt;cooldownTimer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;setTimeout&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;cooldownTimer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="nx"&gt;COOLDOWN_MS&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;shouldSendAudioChunk&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;rms&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;number&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="nx"&gt;boolean&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;agentSpeaking&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="c1"&gt;// Hard suppress&lt;/span&gt;

  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;cooldownTimer&lt;/span&gt; &lt;span class="o"&gt;!==&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// In cooldown: use higher threshold&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;rms&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;COOLDOWN_THRESHOLD&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;rms&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;NORMAL_THRESHOLD&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Tier 2: The 1.5-second cooldown&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is the part that took me longest to figure out. When the agent &lt;em&gt;stops&lt;/em&gt; talking, there's still speaker resonance in the room. The RMS of captured audio doesn't drop to zero immediately — it decays. The background noise in a typical home office sits at 0.01–0.02 RMS. But for 1-2 seconds after playback stops, you're seeing 0.025–0.04 RMS — above the normal VAD threshold.&lt;/p&gt;

&lt;p&gt;The cooldown period uses a higher threshold (0.05, versus the normal 0.03) for 1.5 seconds after agent speech ends. This catches the decay without cutting off a founder who immediately starts talking back.&lt;/p&gt;

&lt;p&gt;Was this threshold tuned empirically? Absolutely. I spent days listening to session replays measuring exactly how fast room resonance decays in different mic setups.&lt;/p&gt;




&lt;h2&gt;
  
  
  Session Resumption: The Other Half of the Problem
&lt;/h2&gt;

&lt;p&gt;Echo cancellation solved the &lt;em&gt;quality&lt;/em&gt; problem. Session resumption solved the &lt;em&gt;reliability&lt;/em&gt; problem.&lt;/p&gt;

&lt;p&gt;Gemini Live sessions drop. Network hiccups, mobile handoffs, Chrome deciding to do something aggressive with memory — connections fail. Early on, a dropped connection meant starting the entire 30-minute interview over. Founders would ragequit. I would understand completely.&lt;/p&gt;

&lt;p&gt;The fix: store session handles in Firestore and resume on reconnect.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# FastAPI backend — session management
&lt;/span&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;google.genai.live&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;AsyncSession&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;firebase_admin&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;firestore&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;get_or_create_session&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;project_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; 
    &lt;span class="n"&gt;user_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;tuple&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;AsyncSession&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;bool&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt;
    &lt;span class="n"&gt;db&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;firestore&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;client&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;session_ref&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;collection&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;sessions&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;document&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;user_id&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;_&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;project_id&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;session_doc&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;session_ref&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;session_doc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;exists&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;session_data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;session_doc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;to_dict&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="n"&gt;handle&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;session_data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;resumption_handle&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;handle&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
                &lt;span class="c1"&gt;# Attempt resume — Gemini picks up exactly where it left off
&lt;/span&gt;                &lt;span class="n"&gt;session&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;resume_gemini_session&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;handle&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
                &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;  &lt;span class="c1"&gt;# resumed=True
&lt;/span&gt;            &lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="nb"&gt;Exception&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
                &lt;span class="k"&gt;pass&lt;/span&gt;  &lt;span class="c1"&gt;# Fall through to new session
&lt;/span&gt;
    &lt;span class="c1"&gt;# Create new session
&lt;/span&gt;    &lt;span class="n"&gt;session&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;create_gemini_session&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;project_id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;session_ref&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
        &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;created_at&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;firestore&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;SERVER_TIMESTAMP&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;project_id&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;project_id&lt;/span&gt;
    &lt;span class="p"&gt;})&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="bp"&gt;False&lt;/span&gt;  &lt;span class="c1"&gt;# resumed=False
&lt;/span&gt;
&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;store_resumption_handle&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;user_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;project_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;handle&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;db&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;firestore&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;client&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;session_ref&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;collection&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;sessions&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;document&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;user_id&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;_&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;project_id&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;session_ref&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;update&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;resumption_handle&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;handle&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When a session resumes, Gemini restores full context — every tool call result, every piece of market research, every persona in the synthetic focus group. The founder reconnects and the agent says "Sorry about that, where were we?" and genuinely knows where you were.&lt;/p&gt;
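
&lt;p&gt;The missing piece in the snippet above is where the handle comes from: Gemini Live sends updated resumption handles during the session. A sketch of the receive loop that persists them; the field names follow my reading of the Live API and may differ in your SDK version:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Sketch: persist fresh resumption handles as Gemini sends them.
# Field names (session_resumption_update, new_handle, resumable) are my
# reading of the Live API; verify against the SDK version you use.
async def receive_loop(session, user_id: str, project_id: str):
    async for message in session.receive():
        update = getattr(message, "session_resumption_update", None)
        if update and update.resumable and update.new_handle:
            await store_resumption_handle(user_id, project_id, update.new_handle)
        # ... forward audio chunks / tool responses to the client as usual ...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;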




&lt;h2&gt;
  
  
  The Filler Audio Problem
&lt;/h2&gt;

&lt;p&gt;One more thing nobody talks about: what do you play while the AI is thinking?&lt;/p&gt;

&lt;p&gt;Gemini 2.5 Flash is fast. 300–500ms end-to-end is genuinely fast. But when the agent is executing a tool call — crawling a competitor site with Playwright, running Reddit scraping, calculating unit economics — you can have 3–8 second gaps.&lt;/p&gt;

&lt;p&gt;Silence in a voice conversation feels broken. Users assume the connection dropped.&lt;/p&gt;

&lt;p&gt;Solution: pre-computed filler audio. Short phrases like "one moment please" or "let me look that up" in 17 languages, stored as PCM chunks, played when tool execution exceeds ~800ms. The filler is triggered via a text signal rather than &lt;code&gt;proactive_audio&lt;/code&gt;, which had a regression that caused double playback, so we disabled it entirely and use text triggers instead.&lt;/p&gt;

&lt;p&gt;This sounds trivial. It removed about 40% of "the app is broken" support messages.&lt;/p&gt;
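
&lt;p&gt;Server-side, the trigger is essentially a timer raced against the tool call. A sketch, assuming an async backend like ours; &lt;code&gt;send_filler&lt;/code&gt; is a stand-in for whatever streams the pre-computed PCM chunk:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import asyncio

FILLER_DELAY_S = 0.8   # play filler once a tool call exceeds ~800ms

async def run_tool_with_filler(tool_coro, send_filler, language: str):
    """Race the tool call against a timer. If the tool is still running
    after FILLER_DELAY_S, stream the pre-computed filler phrase for the
    session language, then return the real result."""
    task = asyncio.ensure_future(tool_coro)
    try:
        return await asyncio.wait_for(asyncio.shield(task), timeout=FILLER_DELAY_S)
    except asyncio.TimeoutError:
        await send_filler(language)   # pre-computed chunk, one of 17 languages
        return await task
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;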




&lt;h2&gt;
  
  
  What I'd Do Differently
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Start with the echo gate, not the AI logic.&lt;/strong&gt; I spent weeks building beautiful multi-agent orchestration before I could demo it reliably. Wrong order.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Instrument RMS values from day one.&lt;/strong&gt; Log them. Every session. You can't tune what you can't see.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Test on bad hardware.&lt;/strong&gt; My dev setup has a good mic with physical distance from speakers. Most users have laptop mics 30cm from laptop speakers. Build for that.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Mobile is a different planet.&lt;/strong&gt; iOS Safari handles AudioContext lifecycle in ways that will make you question your career choices. But that's an article for another day.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  The Result
&lt;/h2&gt;

&lt;p&gt;After solving these problems — the two-tier RMS gate, the 1.5s cooldown, the session resumption, the filler audio — GoNoGo runs 15–45 minute voice sessions with real founders, across 21 languages, with 3 AI agents handing off to each other mid-conversation. The 1011 disconnects essentially disappeared.&lt;/p&gt;

&lt;p&gt;The voice infrastructure became invisible, which is exactly what it should be.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;If you're building anything with browser mic + real-time AI audio:&lt;/strong&gt; what's been your biggest challenge? I'm genuinely curious whether the echo problem is universal or whether I was doing something particularly wrong early on. Drop it in the comments.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>ai</category>
      <category>typescript</category>
      <category>python</category>
    </item>
    <item>
      <title>How I Built a Real-Time Voice AI Interview System with Gemini Live API and WebSockets (and What Almost Broke Me)</title>
      <dc:creator>Konstantin</dc:creator>
      <pubDate>Sat, 04 Apr 2026 15:45:52 +0000</pubDate>
      <link>https://dev.to/remi_etien/how-i-built-a-real-time-voice-ai-interview-system-with-gemini-live-api-and-websockets-and-what-28m2</link>
      <guid>https://dev.to/remi_etien/how-i-built-a-real-time-voice-ai-interview-system-with-gemini-live-api-and-websockets-and-what-28m2</guid>
      <description>&lt;p&gt;When I started building &lt;a href="https://gonogo.team" rel="noopener noreferrer"&gt;GoNoGo.team&lt;/a&gt; -- a platform that uses AI to validate startup ideas through voice interviews -- I thought the hardest part would be the business logic. Turns out, the hardest part was keeping a duplex audio stream alive across three layers of abstraction without everything falling apart.&lt;/p&gt;

&lt;p&gt;This is a technical post-mortem of the voice AI system I built solo. I'll cover the architecture, the ugly edge cases, and the specific patterns that finally made it stable enough to run 500+ validation interviews.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Core Problem: Bidirectional Audio at Low Latency
&lt;/h2&gt;

&lt;p&gt;The concept: a founder speaks, Gemini listens and responds with follow-up questions, in real time. No STT/TTS pipeline -- direct speech-to-speech using Gemini Live API (native audio). The pipeline looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Browser Mic -&amp;gt; ScriptProcessor (16kHz PCM) -&amp;gt; WebSocket (base64) -&amp;gt; Python FastAPI -&amp;gt; Gemini Live API
                                                                                      v
Browser Speaker &amp;lt;- AudioContext (24kHz PCM) &amp;lt;- WebSocket (base64) &amp;lt;- Audio Chunks &amp;lt;------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Every arrow in that diagram is a potential failure point. And in production, every single one of them failed at least once.&lt;/p&gt;




&lt;h2&gt;
  
  
  Step 1: Capturing Audio in the Browser
&lt;/h2&gt;

&lt;p&gt;The browser side uses a &lt;strong&gt;ScriptProcessorNode&lt;/strong&gt; (yes, it's deprecated -- but AudioWorklet adds latency we can't afford for real-time conversation). We capture 16kHz mono PCM in 512-sample chunks -- roughly 32ms per chunk.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Audio capture setup (simplified from useAudioInput.ts)&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;stream&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nb"&gt;navigator&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;mediaDevices&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getUserMedia&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;audio&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;echoCancellation&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;noiseSuppression&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;autoGainControl&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;sampleRate&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;16000&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;audioContext&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;AudioContext&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;sampleRate&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;16000&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;source&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;audioContext&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;createMediaStreamSource&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;stream&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;processor&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;audioContext&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;createScriptProcessor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;512&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;// Analyser for RMS-based voice activity detection&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;analyser&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;audioContext&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;createAnalyser&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="nx"&gt;source&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;connect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;analyser&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="nx"&gt;analyser&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;connect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;processor&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="nx"&gt;processor&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;onaudioprocess&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;pcmData&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;inputBuffer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getChannelData&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;rms&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;Math&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sqrt&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="nx"&gt;pcmData&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;reduce&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;sum&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;x&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;sum&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;x&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;x&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="nx"&gt;pcmData&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="c1"&gt;// VAD gate: only send if voice detected (RMS &amp;gt; 0.05)&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;rms&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mf"&gt;0.05&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nx"&gt;ws&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;readyState&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="nx"&gt;WebSocket&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;OPEN&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// Convert Float32 to Int16 PCM, then base64 encode&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;int16&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Int16Array&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;pcmData&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="nx"&gt;pcmData&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="o"&gt;++&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;int16&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;Math&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;max&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;32768&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;Math&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;min&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;32767&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;pcmData&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;32768&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="nx"&gt;ws&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;audio&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;btoa&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;fromCharCode&lt;/span&gt;&lt;span class="p"&gt;(...&lt;/span&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Uint8Array&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;int16&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;buffer&lt;/span&gt;&lt;span class="p"&gt;)))&lt;/span&gt;
    &lt;span class="p"&gt;}));&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;strong&gt;32ms chunk interval&lt;/strong&gt; was a hard-won choice. It gives Gemini enough data per packet to process efficiently while keeping perceived latency under 300ms end-to-end. The VAD threshold of 0.05 RMS filters out background noise without clipping soft speech.&lt;/p&gt;
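&lt;p&gt;The back-of-the-envelope math behind that choice, using only the numbers above:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;512 samples / 16,000 Hz       = 32 ms per chunk
512 samples * 2 bytes (Int16) = 1,024 bytes raw
1,024 bytes * 4/3 (base64)    ~ 1.4 KB per WebSocket message
~31 chunks/s while speaking   ~ 43 KB/s upstream
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;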




&lt;h2&gt;
  
  
  Step 2: The Python Backend (FastAPI + WebSockets)
&lt;/h2&gt;

&lt;p&gt;The backend is &lt;strong&gt;Python FastAPI&lt;/strong&gt;, deployed on &lt;strong&gt;Google Cloud Run&lt;/strong&gt;. Python was the right call because Gemini's client libraries are Python-first, and the entire analysis pipeline (market research, competitor scraping with Playwright, report generation) lives in the same codebase.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# WebSocket handler (simplified from server.py)
&lt;/span&gt;&lt;span class="nd"&gt;@app.websocket&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/ws_live&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;websocket_live&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ws&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;WebSocket&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;ws&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;accept&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;session&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;GeminiLiveSession&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gemini-2.5-flash-exp&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;forward_to_gemini&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
        &lt;span class="c1"&gt;# Browser audio -&amp;gt; Gemini
&lt;/span&gt;        &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;msg&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;ws&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;iter_json&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
            &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;msg&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;type&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;audio&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
                &lt;span class="n"&gt;pcm_bytes&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;base64&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;b64decode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;msg&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;data&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
                &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send_audio&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;pcm_bytes&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="c1"&gt;# 16kHz PCM
&lt;/span&gt;
    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;forward_to_browser&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
        &lt;span class="c1"&gt;# Gemini audio -&amp;gt; Browser
&lt;/span&gt;        &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;event&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;events&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
            &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nb"&gt;type&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;audio&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
                &lt;span class="c1"&gt;# Gemini returns 24kHz PCM
&lt;/span&gt;                &lt;span class="n"&gt;chunk_b64&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;base64&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;b64encode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;decode&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
                &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;ws&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send_json&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
                    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;type&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;audio&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;data&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;chunk_b64&lt;/span&gt;
                &lt;span class="p"&gt;})&lt;/span&gt;
            &lt;span class="k"&gt;elif&lt;/span&gt; &lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nb"&gt;type&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;tool_call&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
                &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;execute_tool&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
                &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send_tool_response&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;asyncio&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;gather&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;forward_to_gemini&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="nf"&gt;forward_to_browser&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The asymmetric sample rates (16kHz in, 24kHz out) aren't a mistake -- Gemini natively outputs at 24kHz, and downsampling would lose audio quality. On the browser side we simply run two separate AudioContexts -- one at 16kHz for capture, one at 24kHz for playback -- so no resampling is needed.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Echo Cancellation Problem
&lt;/h3&gt;

&lt;p&gt;Gemini hears its own output through the user's speakers and tries to respond to itself. Browser-level echo cancellation (&lt;code&gt;echoCancellation: true&lt;/code&gt;) handles most cases, but not all -- especially on laptops with poor speaker-mic isolation.&lt;/p&gt;

&lt;p&gt;My solution: a &lt;strong&gt;speaking-state gate&lt;/strong&gt;. When Gemini is outputting audio, we suppress inbound audio at the application level:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Echo gate in the session handler
&lt;/span&gt;&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;SessionState&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;__init__&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;agent_speaking&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="bp"&gt;False&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;last_agent_audio_time&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;0.0&lt;/span&gt;

    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;should_forward_audio&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;rms&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;float&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;bool&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="c1"&gt;# Suppress during agent speech + 1.5s cooldown after
&lt;/span&gt;        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;agent_speaking&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="bp"&gt;False&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;time&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;last_agent_audio_time&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="mf"&gt;1.5&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;rms&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mf"&gt;0.03&lt;/span&gt;  &lt;span class="c1"&gt;# Higher threshold during cooldown
&lt;/span&gt;        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;rms&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mf"&gt;0.01&lt;/span&gt;  &lt;span class="c1"&gt;# Normal threshold
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This two-tier threshold was the key insight: background noise sits at RMS 0.01-0.02, so during the cooldown period after the agent stops speaking, we only forward audio that's clearly human speech (&amp;gt; 0.03).&lt;/p&gt;
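&lt;p&gt;For context, here's a minimal sketch of how a gate like this plugs into the relay loop. The &lt;code&gt;compute_rms&lt;/code&gt; helper and the exact points where &lt;code&gt;agent_speaking&lt;/code&gt; flips are my assumptions based on the description above, not the production code:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Sketch only -- compute_rms and the state flips are assumed, not from server.py
import base64
import math
import struct

def compute_rms(pcm16: bytes) -&amp;gt; float:
    # Normalize to 0..1 so thresholds match the browser-side RMS scale
    samples = struct.unpack(f"&amp;lt;{len(pcm16) // 2}h", pcm16)
    if not samples:
        return 0.0
    return math.sqrt(sum(s * s for s in samples) / len(samples)) / 32768.0

state = SessionState()

async def forward_to_gemini():
    async for msg in ws.iter_json():
        if msg["type"] == "audio":
            pcm_bytes = base64.b64decode(msg["data"])
            if state.should_forward_audio(compute_rms(pcm_bytes)):
                await session.send_audio(pcm_bytes)

# In forward_to_browser(), flip the gate around agent turns:
#   first audio chunk of a turn -&amp;gt; state.agent_speaking = True
#   turn complete               -&amp;gt; state.agent_speaking = False and
#                                  state.last_agent_audio_time = time.time()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;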




&lt;h3&gt;
  
  
  Failure Mode: The 1011 Disconnects
&lt;/h3&gt;

&lt;p&gt;For weeks, Gemini Live API would randomly close connections with status code 1011 (Internal Server Error). No pattern, no warning. Sessions would die mid-sentence.&lt;/p&gt;

&lt;p&gt;The fix was layered:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Reconnection with session resumption
&lt;/span&gt;&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;handle_disconnect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ws&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;attempt&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="c1"&gt;# Gemini supports session resumption via handle
&lt;/span&gt;            &lt;span class="n"&gt;new_session&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;GeminiLiveSession&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;resume&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
                &lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;resumption_handle&lt;/span&gt;
            &lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="c1"&gt;# Re-send last audio chunk as context
&lt;/span&gt;            &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;new_session&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send_audio&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;last_chunk&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;new_session&lt;/span&gt;
        &lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="nb"&gt;Exception&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;asyncio&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sleep&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mf"&gt;0.5&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;attempt&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
    &lt;span class="c1"&gt;# After 3 fails, notify user with audio message
&lt;/span&gt;    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;ws&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send_json&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;type&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;system&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;message&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;reconnecting&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Session resumption handles (persisted to &lt;strong&gt;Firestore&lt;/strong&gt;) were a game-changer. Instead of starting a new conversation, Gemini picks up exactly where it left off. Users barely notice the blip.&lt;/p&gt;
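&lt;p&gt;Persisting the handle is only a few lines with the Firestore client -- the collection and field names here are illustrative, not the actual schema:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Sketch: persist/restore the resumption handle (names are placeholders)
from google.cloud import firestore

db = firestore.Client()

def save_handle(session_id: str, handle: str) -&amp;gt; None:
    db.collection("live_sessions").document(session_id).set(
        {"resumption_handle": handle}, merge=True
    )

def load_handle(session_id: str):
    doc = db.collection("live_sessions").document(session_id).get()
    return (doc.to_dict() or {}).get("resumption_handle") if doc.exists else None
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;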




&lt;h2&gt;
  
  
  Step 3: Playing Audio Back in the Browser
&lt;/h2&gt;

&lt;p&gt;Gemini returns 24kHz PCM chunks. Playing them without glitches requires the Web Audio API with a buffer scheduler:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Audio playback (simplified from useAudioOutput.ts)&lt;/span&gt;
&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;AudioPlayer&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="nx"&gt;context&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;AudioContext&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="nx"&gt;nextStartTime&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="nx"&gt;gainNode&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;GainNode&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="nf"&gt;constructor&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;context&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;AudioContext&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="na"&gt;sampleRate&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;24000&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;latencyHint&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sr"&gt;/Mobi/&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;test&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;navigator&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;userAgent&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;playback&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;interactive&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;gainNode&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;context&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;createGain&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;gainNode&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;connect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;context&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;destination&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nf"&gt;playChunk&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;pcmBase64&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;bytes&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;Uint8Array&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;from&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;atob&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;pcmBase64&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="nx"&gt;c&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;c&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;charCodeAt&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;int16&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Int16Array&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;bytes&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;buffer&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;float32&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Float32Array&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;int16&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="nx"&gt;int16&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="o"&gt;++&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;float32&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;int16&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="mi"&gt;32768&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;buffer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;context&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;createBuffer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;float32&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;24000&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nx"&gt;buffer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getChannelData&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;set&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;float32&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;source&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;context&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;createBufferSource&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="nx"&gt;source&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;buffer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;buffer&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nx"&gt;source&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;connect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;gainNode&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;startTime&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;Math&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;max&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
      &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;context&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;currentTime&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;nextStartTime&lt;/span&gt;
    &lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nx"&gt;source&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;start&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;startTime&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;nextStartTime&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;startTime&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;buffer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;duration&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;nextStartTime&lt;/code&gt; scheduler ensures seamless playback regardless of network jitter. The &lt;code&gt;latencyHint&lt;/code&gt; switch between mobile ("playback") and desktop ("interactive") was a subtle but important optimization -- mobile browsers handle audio buffers differently.&lt;/p&gt;
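&lt;p&gt;Wiring the player up is then a single branch in the socket's message handler (the message shape matches the backend protocol above; variable names are assumed):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;// Hooking the player into the WebSocket (names assumed)
const player = new AudioPlayer();

ws.onmessage = (event: MessageEvent) =&amp;gt; {
  const msg = JSON.parse(event.data);
  if (msg.type === "audio") {
    player.playChunk(msg.data); // base64-encoded 24kHz PCM from the backend
  }
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;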




&lt;h2&gt;
  
  
  What I Learned Building This Solo
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Build the unhappy path first.&lt;/strong&gt; I spent week one on the happy path. Weeks two through four were entirely edge cases -- reconnection, echo suppression, barge-in handling. If I could redo it, I'd build error recovery before a single feature.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Voice is a different UX paradigm.&lt;/strong&gt; Users don't read error messages mid-conversation. Every failure needs an audio fallback. We pre-compute "filler" audio chunks ("one moment please...") as 24kHz PCM, ready to stream instantly when Gemini is slow or reconnecting.&lt;/p&gt;
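&lt;p&gt;"Ready to stream instantly" looks roughly like this -- the file path is a placeholder, but the envelope matches the audio protocol above:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Sketch: pre-loaded filler clip, streamed while Gemini is slow (path assumed)
import base64
from pathlib import Path

FILLER_PCM = Path("audio/filler_one_moment_24k.pcm").read_bytes()  # raw 24kHz PCM

async def play_filler(ws):
    await ws.send_json({
        "type": "audio",
        "data": base64.b64encode(FILLER_PCM).decode(),
    })
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;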

&lt;p&gt;&lt;strong&gt;3. Speech-to-speech beats STT+TTS.&lt;/strong&gt; We initially considered a Whisper -&amp;gt; Claude -&amp;gt; ElevenLabs pipeline. Gemini Live API's native audio mode is faster (sub-500ms round-trip), cheaper, and handles interruptions naturally. The trade-off: less control over the voice, but the latency gain is massive.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Cloud Run works for WebSockets, with caveats.&lt;/strong&gt; We deploy on &lt;strong&gt;Google Cloud Run&lt;/strong&gt; (me-west1 region). WebSocket connections survive container restarts thanks to session resumption handles saved in Firestore. The key setting: request timeout of 3600s (1 hour) for long interview sessions.&lt;/p&gt;
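&lt;p&gt;In deploy terms that boils down to a couple of flags -- the service name is a placeholder, and session affinity is an optional extra that keeps reconnects landing on the same instance:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud run deploy voice-backend \
  --region=me-west1 \
  --timeout=3600 \
  --session-affinity
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;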




&lt;h2&gt;
  
  
  The Result
&lt;/h2&gt;

&lt;p&gt;The system now runs validation interviews averaging 12 minutes of continuous voice conversation. Across 500+ sessions, hard failures dropped to under 1% after implementing the echo gate and session resumption. Each interview includes 3 AI agents (Alex for discovery, Sam for architecture, Maya for design) that use ~12 function-calling tools to research markets, analyze competitors, and generate reports -- all while maintaining a natural conversation.&lt;/p&gt;

&lt;p&gt;Building this solo meant every failure landed directly in my Telegram inbox (via a monitoring bot). Which, honestly, is the fastest feedback loop possible.&lt;/p&gt;
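&lt;p&gt;The bot itself is nothing fancy. A sketch of the alert path, with the token and chat id read from environment variables (names assumed):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Sketch: forward unhandled errors to Telegram (env var names assumed)
import os
import httpx

async def alert(text: str) -&amp;gt; None:
    url = f"https://api.telegram.org/bot{os.environ['TG_BOT_TOKEN']}/sendMessage"
    async with httpx.AsyncClient() as client:
        await client.post(url, json={"chat_id": os.environ["TG_CHAT_ID"], "text": text})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;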




&lt;h2&gt;
  
  
  What I'm Curious About
&lt;/h2&gt;

&lt;p&gt;The biggest remaining challenge is &lt;strong&gt;audio quality on mobile browsers&lt;/strong&gt;. iOS Safari handles AudioContext differently from Chrome, and some Android devices have aggressive echo cancellation that clips the AI's speech. We're currently using device-specific settings, but it feels like a hack.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Has anyone found a robust cross-browser audio playback strategy for real-time AI voice?&lt;/strong&gt; Especially interested in experiences with AudioWorklet vs ScriptProcessorNode for this use case. Drop your thoughts in the comments.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>ai</category>
      <category>startup</category>
      <category>python</category>
    </item>
  </channel>
</rss>
