<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: ANBMZ L</title>
    <description>The latest articles on DEV Community by ANBMZ L (@anbmz_llc_1253b3cd322ff8e).</description>
    <link>https://dev.to/anbmz_llc_1253b3cd322ff8e</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3843860%2F98162cac-6b06-44b5-a855-8a5bf90b508b.png</url>
      <title>DEV Community: ANBMZ L</title>
      <link>https://dev.to/anbmz_llc_1253b3cd322ff8e</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/anbmz_llc_1253b3cd322ff8e"/>
    <language>en</language>
    <item>
      <title>I Analyzed Real Mock Coding Interviews. Here's What Separates Hires from No Hires</title>
      <dc:creator>ANBMZ L</dc:creator>
      <pubDate>Thu, 09 Apr 2026 20:08:59 +0000</pubDate>
      <link>https://dev.to/anbmz_llc_1253b3cd322ff8e/i-analyzed-real-mock-coding-interviews-heres-what-separates-hires-from-no-hires-33d1</link>
      <guid>https://dev.to/anbmz_llc_1253b3cd322ff8e/i-analyzed-real-mock-coding-interviews-heres-what-separates-hires-from-no-hires-33d1</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe9rw3yii4xw7f742kqpn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe9rw3yii4xw7f742kqpn.png" alt=" " width="800" height="447"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We run AI mock coding interviews on &lt;a href="https://intervu.dev" rel="noopener noreferrer"&gt;intervu.dev&lt;/a&gt;. Each session is scored across five pillars (algorithms, coding, problem solving, verification, and communication) on a calibrated 1-10 rubric.&lt;/p&gt;

&lt;p&gt;We looked at the anonymized, aggregate patterns across completed interviews: how candidates spend their time, how much code they write, and where they lose points.&lt;/p&gt;

&lt;p&gt;Only &lt;strong&gt;52% received a Hire or Strong Hire signal.&lt;/strong&gt; The reason isn't what most candidates expect.&lt;/p&gt;




&lt;h2&gt;
  
  
  Verification Is the Silent Killer
&lt;/h2&gt;

&lt;p&gt;Here are the average scores across all five pillars:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Pillar&lt;/th&gt;
&lt;th&gt;Avg Score&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Coding&lt;/td&gt;
&lt;td&gt;6.94&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Algorithms&lt;/td&gt;
&lt;td&gt;6.77&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Problem Solving&lt;/td&gt;
&lt;td&gt;6.26&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Communication&lt;/td&gt;
&lt;td&gt;5.87&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Verification&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;5.65&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Verification (testing your own code, walking through examples, catching edge cases) is the weakest pillar across the board.&lt;/p&gt;

&lt;p&gt;Most candidates write code until it seems to work, then stop. They don't trace through examples. They don't check boundary conditions. They don't walk through their logic before hitting "Run."&lt;/p&gt;

&lt;p&gt;The good news? &lt;strong&gt;Verification is the easiest pillar to improve.&lt;/strong&gt; You don't need to learn a new algorithm. Just build the habit of saying &lt;em&gt;"Let me trace through this with a concrete example"&lt;/em&gt; before you call it done.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Biggest Gap Between Hire and No Hire Isn't Coding
&lt;/h2&gt;

&lt;p&gt;When we split the data by outcome, verification isn't just the weakest pillar. It has the &lt;strong&gt;widest gap&lt;/strong&gt;:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Pillar&lt;/th&gt;
&lt;th&gt;Hire Avg&lt;/th&gt;
&lt;th&gt;No Hire Avg&lt;/th&gt;
&lt;th&gt;Gap&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Verification&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;7.06&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;4.36&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;2.70&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Algorithms&lt;/td&gt;
&lt;td&gt;8.12&lt;/td&gt;
&lt;td&gt;5.82&lt;/td&gt;
&lt;td&gt;2.30&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Coding&lt;/td&gt;
&lt;td&gt;8.12&lt;/td&gt;
&lt;td&gt;6.27&lt;/td&gt;
&lt;td&gt;1.85&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Communication&lt;/td&gt;
&lt;td&gt;6.94&lt;/td&gt;
&lt;td&gt;5.18&lt;/td&gt;
&lt;td&gt;1.76&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Problem Solving&lt;/td&gt;
&lt;td&gt;7.25&lt;/td&gt;
&lt;td&gt;5.73&lt;/td&gt;
&lt;td&gt;1.52&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;A 2.7-point gap on a 10-point scale. Candidates who get the Hire signal score about &lt;strong&gt;60% higher&lt;/strong&gt; on verification (7.06 vs 4.36).&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;The single biggest differentiator between a Hire and a No Hire is whether the candidate tests their own work.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Writing correct code is table stakes. Everyone studies LeetCode. Proactively tracing through your solution, spotting edge cases, and catching your own bugs before being asked? That's what most people skip.&lt;/p&gt;




&lt;h2&gt;
  
  
  No Hires Spend More Time Coding, Less Time Testing
&lt;/h2&gt;

&lt;p&gt;We looked at how candidates actually spend their time during the interview:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;Hire vs No Hire&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Time spent coding&lt;/td&gt;
&lt;td&gt;No Hires spend &lt;strong&gt;2.4x more&lt;/strong&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Time spent testing&lt;/td&gt;
&lt;td&gt;Hires spend &lt;strong&gt;68% more&lt;/strong&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;No Hire candidates spend 2.4x more time coding, and 40% less time testing.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The pattern is consistent: No Hire candidates jump into code, get stuck, iterate, get stuck again, and run out of time before testing. Hire candidates arrive at working code faster and spend that saved time on verification.&lt;/p&gt;

&lt;p&gt;They're not faster coders. They plan better.&lt;/p&gt;




&lt;h2&gt;
  
  
  More Code ≠ Better Code
&lt;/h2&gt;

&lt;p&gt;No Hire candidates consistently write &lt;strong&gt;~15% more code&lt;/strong&gt; than Hires. Longer solutions are messier, have more edge cases to miss, and are harder to trace through.&lt;/p&gt;

&lt;p&gt;Clean, concise code is itself a form of verification. Less surface area, fewer bugs.&lt;/p&gt;




&lt;h2&gt;
  
  
  Finishing Is a Skill
&lt;/h2&gt;

&lt;p&gt;A lot of candidates abandon interviews partway through. They get stuck, or the time pressure gets uncomfortable, and they quit.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Finishing a full interview is a skill that requires practice.&lt;/strong&gt; If you regularly quit mock interviews midway, the interview itself, not just the algorithm, is what you need to work on.&lt;/p&gt;




&lt;h2&gt;
  
  
  Three Data-Backed Changes to Make
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Practice the Dry Run
&lt;/h3&gt;

&lt;p&gt;After writing your solution, trace through it with a specific input. Say the variable values out loud. Check the boundaries. This one habit is the biggest differentiator we found.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;"Let me walk through this with &lt;code&gt;[2, 7, 11, 15]&lt;/code&gt; and target &lt;code&gt;9&lt;/code&gt;..."&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Design Before You Code
&lt;/h3&gt;

&lt;p&gt;Hire candidates get to coding faster, not because they skip design, but because they're efficient about it. State your approach in 2-3 sentences, confirm the complexity, and start writing. Don't wait for the perfect plan.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Write Less, Not More
&lt;/h3&gt;

&lt;p&gt;If your solution is getting long, step back. Long solutions have more bugs, take longer to debug, and are harder to verify. A clean 30-line Python solution is almost always better than a tangled 50-line one.&lt;/p&gt;




&lt;h2&gt;
  
  
  Methodology
&lt;/h2&gt;

&lt;p&gt;All data is fully anonymized. No individual sessions, users, or identifying information were used. Analysis was performed on aggregate metrics only.&lt;/p&gt;




&lt;p&gt;The thing that separates Hires from No Hires isn't algorithm knowledge or coding speed. It's whether you test your own work. And that's the easiest thing to practice.&lt;/p&gt;

&lt;p&gt;If you want to try it yourself, &lt;a href="https://intervu.dev" rel="noopener noreferrer"&gt;intervu.dev&lt;/a&gt; runs AI mock interviews with signal-based feedback across all five pillars. You can start from the &lt;a href="https://intervu.dev/blog/grind-75-practice-pathway/" rel="noopener noreferrer"&gt;Grind 75 pathway&lt;/a&gt; or practice any LeetCode problem as a &lt;a href="https://intervu.dev/blog/practice-any-leetcode-problem/" rel="noopener noreferrer"&gt;full mock interview&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>career</category>
      <category>interview</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Killing Voice AI Lag: The Pre-Warming Trick</title>
      <dc:creator>ANBMZ L</dc:creator>
      <pubDate>Thu, 26 Mar 2026 00:03:24 +0000</pubDate>
      <link>https://dev.to/anbmz_llc_1253b3cd322ff8e/killing-voice-ai-lag-the-pre-warming-trick-13j9</link>
      <guid>https://dev.to/anbmz_llc_1253b3cd322ff8e/killing-voice-ai-lag-the-pre-warming-trick-13j9</guid>
      <description>&lt;p&gt;When I was building &lt;a href="https://intervu.dev" rel="noopener noreferrer"&gt;intervu.dev&lt;/a&gt;, an AI mock coding interviewer that conducts full voice interviews in the browser, one of the most annoying problems I ran into was a noticeable gap between when the AI finished speaking and when the microphone actually went live for the candidate to respond.&lt;/p&gt;

&lt;p&gt;The AI would finish its sentence, and then there'd be this dead pause of around 850ms before the mic activated. In a real interview, that kind of delay feels broken. It kills the conversational flow and makes the whole thing feel like a chatbot, not an interviewer.&lt;/p&gt;

&lt;p&gt;Here's what was causing it and how pre-warming the WebSocket connection during TTS playback got it down to under 400ms.&lt;/p&gt;

&lt;h2&gt;
  
  
  The problem
&lt;/h2&gt;

&lt;p&gt;The turn-taking loop in intervu.dev works like this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The AI generates a response and sends it to TTS&lt;/li&gt;
&lt;li&gt;TTS audio streams back and plays in the browser&lt;/li&gt;
&lt;li&gt;Once TTS finishes, the mic WebSocket connection is opened and the candidate can speak&lt;/li&gt;
&lt;li&gt;Audio is streamed to the backend over that WebSocket, transcribed in real-time, and the turn continues&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The issue was in step 3. Opening a fresh WebSocket connection after TTS finishes takes time. DNS resolution, TCP handshake, the WebSocket upgrade handshake. On a typical connection that overhead was sitting around 800-900ms. Most of the time it was fine in testing, but on slower connections or under load, it was noticeably bad.&lt;/p&gt;
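&lt;p&gt;If you want to see that overhead on your own connection, a minimal measurement sketch (the helper name and the injectable &lt;code&gt;WebSocketImpl&lt;/code&gt; parameter are illustrative, not intervu.dev's actual code):&lt;/p&gt;

```javascript
// Hypothetical helper: time a fresh WebSocket handshake. The
// WebSocketImpl parameter defaults to the browser global and exists
// mainly so the timing logic can be exercised with a fake in tests.
function timeHandshake(url, WebSocketImpl) {
  return new Promise((resolve, reject) => {
    const t0 = Date.now();
    const ws = new WebSocketImpl(url);
    ws.onopen = () => resolve(Date.now() - t0); // ms until ready
    ws.onerror = (err) => reject(err);
  });
}

// Browser usage sketch (substitute your own endpoint):
// timeHandshake("wss://your-backend/mic", WebSocket)
//   .then((ms) => console.log(ms + "ms to open"));
```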

&lt;h2&gt;
  
  
  The fix: pre-warm the connection during TTS playback
&lt;/h2&gt;

&lt;p&gt;The insight is simple: TTS playback gives you a natural window of time where the user is listening and not yet expected to speak. That window is completely idle from a WebSocket perspective. Instead of waiting until TTS finishes to open the mic connection, you can open it during playback and have it ready to go the moment the AI stops speaking.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// When TTS playback starts, immediately begin opening the mic WebSocket&lt;/span&gt;
&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;onTTSPlaybackStart&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nf"&gt;prewarmMicConnection&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;prewarmMicConnection&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="c1"&gt;// Open the connection early but don't activate the mic yet&lt;/span&gt;
  &lt;span class="nx"&gt;micSocket&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;WebSocket&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;MIC_WEBSOCKET_URL&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="nx"&gt;micSocket&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;onopen&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;micConnectionReady&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;

  &lt;span class="nx"&gt;micSocket&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;onerror&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// Fall back to opening it on-demand if pre-warm fails&lt;/span&gt;
    &lt;span class="nx"&gt;micConnectionReady&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;// When TTS finishes playing&lt;/span&gt;
&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;onTTSPlaybackEnd&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;micConnectionReady&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nx"&gt;micSocket&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="nx"&gt;readyState&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="nx"&gt;WebSocket&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;OPEN&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// Connection already open, activate mic immediately&lt;/span&gt;
    &lt;span class="nf"&gt;activateMicrophone&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;micSocket&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// Pre-warm didn't complete in time, open on-demand&lt;/span&gt;
    &lt;span class="nf"&gt;openMicConnectionAndActivate&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The pre-warm happens in the background while audio is playing. By the time TTS finishes, even if the AI only speaks for a second or two, the WebSocket handshake is almost always done.&lt;/p&gt;

&lt;h2&gt;
  
  
  The numbers
&lt;/h2&gt;

&lt;p&gt;Before: ~850ms gap between TTS end and mic activation, measured as time between the TTS &lt;code&gt;ended&lt;/code&gt; event firing and the first audio chunk arriving at the backend.&lt;/p&gt;

&lt;p&gt;After: under 400ms consistently, and closer to 50-80ms on good connections where the pre-warm had plenty of time to complete.&lt;/p&gt;

&lt;p&gt;The 400ms worst case comes from the fallback path. If the AI says something very short (under about 1 second), TTS can finish before the pre-warm completes and we fall back to opening on-demand. For anything longer than a second of speech, the connection is reliably ready.&lt;/p&gt;

&lt;h2&gt;
  
  
  A few things worth knowing
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Close the connection if the turn gets interrupted.&lt;/strong&gt; If the user clicks stop or the interview ends mid-TTS, you need to close the pre-warmed socket cleanly. Leaving stale open connections is the obvious failure mode here.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;One connection per turn.&lt;/strong&gt; Don't reuse the pre-warmed connection across multiple turns. Open a fresh one each time TTS starts. This avoids state leakage and keeps the server-side logic simple.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Server-side idle timeout.&lt;/strong&gt; Your backend will see a WebSocket connection open and then sit idle for a few seconds before the client sends any audio. Make sure your idle timeout is long enough to survive that window. I set mine to 30 seconds, which is comfortably longer than any realistic TTS utterance.&lt;/p&gt;
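&lt;p&gt;A framework-agnostic sketch of that idle timeout (the &lt;code&gt;IdleTimer&lt;/code&gt; class is my own illustration, not intervu.dev's implementation): arm it when the pre-warmed connection opens, reset it on every incoming audio chunk, and wire the expiry callback to terminating the socket:&lt;/p&gt;

```javascript
// Illustrative idle timer: fires onExpire if reset() isn't called
// again within timeoutMs. In a WebSocket server, call reset() on
// connection open and on every audio chunk, and stop() on close.
class IdleTimer {
  constructor(timeoutMs, onExpire) {
    this.timeoutMs = timeoutMs;
    this.onExpire = onExpire;
    this.timer = null;
  }
  reset() {
    clearTimeout(this.timer);
    this.timer = setTimeout(this.onExpire, this.timeoutMs);
  }
  stop() {
    clearTimeout(this.timer);
  }
}

// Usage sketch: a 30s window comfortably outlasts the pre-warm gap.
// const idle = new IdleTimer(30_000, () => socket.terminate());
// idle.reset(); // on open, and again on each audio message
```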

&lt;p&gt;&lt;strong&gt;It degrades gracefully.&lt;/strong&gt; The fallback to on-demand opening means users on very slow connections still get a working experience, just with slightly more latency. The pre-warm is purely an optimization, not a hard dependency.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why this matters for voice AI apps
&lt;/h2&gt;

&lt;p&gt;Any application that alternates between AI speech output and user speech input has this same dead-time window during playback. The pattern generalizes well: use the playback window to pre-warm whatever connection or resource you need for the next user action. In intervu.dev's case it's a WebSocket for audio streaming, but the same idea applies to pre-fetching context, warming up a transcription session, or pre-loading the next state in a conversational flow.&lt;/p&gt;

&lt;p&gt;If you're building anything with voice turn-taking and you're seeing a gap at the handoff point, this is almost certainly part of what's causing it.&lt;/p&gt;




&lt;p&gt;I'm building intervu.dev as a solo project, an AI that conducts real FAANG-style mock coding interviews in the browser, voice and all. If you're curious about the broader architecture (Docker-in-Docker for code sandboxing, LLM prompt state machines, real-time STT), I wrote about the full build &lt;a href="https://intervu.dev/blog/building-an-ai-mock-interviewer/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>ai</category>
      <category>interview</category>
    </item>
    <item>
      <title>How I cut mic activation latency in half by pre-warming WebSocket connections during TTS playback</title>
      <dc:creator>ANBMZ L</dc:creator>
      <pubDate>Wed, 25 Mar 2026 23:55:43 +0000</pubDate>
      <link>https://dev.to/anbmz_llc_1253b3cd322ff8e/how-i-cut-mic-activation-latency-in-half-by-pre-warming-websocket-connections-during-tts-playback-3k84</link>
      <guid>https://dev.to/anbmz_llc_1253b3cd322ff8e/how-i-cut-mic-activation-latency-in-half-by-pre-warming-websocket-connections-during-tts-playback-3k84</guid>
      <description>&lt;p&gt;When I was building intervu.dev - an AI interviewing app - I found that the latency between the AI finishing speaking (TTS) and the mic activating to capture the user’s response was too high.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;This was because the TTS and STT WebSocket connections were opened sequentially.

I cut this latency in half by opening the STT WebSocket connection as soon as the TTS WebSocket connection was established and the first audio chunk was received.

This “pre-warming” means the STT connection is ready to go the instant the TTS finishes.

I wrote about the full build here.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

</description>
      <category>webdev</category>
      <category>javascript</category>
      <category>ai</category>
      <category>websockets</category>
    </item>
  </channel>
</rss>
