<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Arden Talbot</title>
    <description>The latest articles on DEV Community by Arden Talbot (@arden_talbot_446643672a6f).</description>
    <link>https://dev.to/arden_talbot_446643672a6f</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3775797%2F36904149-7ced-482d-86c0-321179d7d226.png</url>
      <title>DEV Community: Arden Talbot</title>
      <link>https://dev.to/arden_talbot_446643672a6f</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/arden_talbot_446643672a6f"/>
    <language>en</language>
    <item>
      <title>How I hit 375ms Voice-to-Voice latency by ditching OpenAI for Bare Metal NVIDIA Blackwells</title>
      <dc:creator>Arden Talbot</dc:creator>
      <pubDate>Mon, 16 Feb 2026 13:36:06 +0000</pubDate>
      <link>https://dev.to/arden_talbot_446643672a6f/how-i-hit-375ms-voice-to-voice-latency-by-ditching-openai-for-bare-metal-nvidia-blackwells-47d3</link>
      <guid>https://dev.to/arden_talbot_446643672a6f/how-i-hit-375ms-voice-to-voice-latency-by-ditching-openai-for-bare-metal-nvidia-blackwells-47d3</guid>
      <description>&lt;p&gt;The "Wrapper" Trap: I run an AI automation agency for healthcare clients. For the last year, I've been fighting a losing battle against latency.&lt;/p&gt;

&lt;p&gt;Every time I built a Voice Agent using the standard stack (Twilio → Vapi/Retell → GPT-4o → ElevenLabs), I hit a hard wall:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Latency:&lt;/strong&gt; 800–1200ms round trip.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;The "Feel":&lt;/strong&gt; It felt like a walkie-talkie. Users would interrupt the bot, and it would keep talking for a second before realizing it.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;The Cost:&lt;/strong&gt; $0.10/min plus $1,000/mo for a HIPAA BAA.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I realized that network hops were the killer. Every time the audio left the server to go to OpenAI (intelligence) or ElevenLabs (TTS), I was losing 200ms. So I did the "irrational" thing: I bought my own hardware.&lt;/p&gt;

&lt;p&gt;The Bare Metal Stack&lt;br&gt;
I moved everything to a dedicated NVIDIA Blackwell cluster.&lt;/p&gt;

&lt;p&gt;The goal was simple: zero network hops. The audio, the brain (LLM), and the mouth (TTS) all live in the same GPU VRAM. Here is the architecture that got us to 375ms:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Ingestion:&lt;/strong&gt; Twilio Stream → custom Rust WebSocket server (Tokio-based); a minimal sketch follows this list.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;ASR:&lt;/strong&gt; Nemotron (running locally).&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;The Brain:&lt;/strong&gt; Nemotron-4 (4-bit quantized). Why Nemotron? It follows conversational instructions better than Llama-3 and fits nicely in memory.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;The Mouth:&lt;/strong&gt; Kokoro-82M. Why Kokoro? It’s tiny (82M params) but sounds better than models 10x its size. Because it's so small, we can keep it "hot" in VRAM right next to the LLM.&lt;/li&gt;
&lt;/ul&gt;
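&lt;p&gt;Here is roughly what that ingestion hop can look like. This is a minimal sketch, not our production server: it assumes the tokio, tokio-tungstenite, futures-util, serde_json, and base64 crates, and push_to_asr is a hypothetical placeholder for the handoff into the local ASR.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;use futures_util::StreamExt;
use tokio::net::TcpListener;
use tokio_tungstenite::{accept_async, tungstenite::Message};

#[tokio::main]
async fn main() {
    let listener = TcpListener::bind("0.0.0.0:8080").await.unwrap();
    while let Ok((stream, _)) = listener.accept().await {
        tokio::spawn(async move {
            let mut ws = accept_async(stream).await.expect("handshake failed");
            // Twilio Media Streams deliver JSON text frames; the audio sits
            // in media.payload as base64-encoded 8 kHz mu-law bytes.
            while let Some(Ok(Message::Text(txt))) = ws.next().await {
                let frame: serde_json::Value =
                    serde_json::from_str(&amp;amp;txt).unwrap_or_default();
                if frame["event"] == "media" {
                    if let Some(b64) = frame["media"]["payload"].as_str() {
                        // base64 0.13-style API; newer versions use an Engine
                        let mulaw = base64::decode(b64).unwrap_or_default();
                        push_to_asr(mulaw); // hand off to the on-GPU ASR queue
                    }
                }
            }
        });
    }
}

// Hypothetical placeholder for the handoff into the local ASR pipeline.
fn push_to_asr(_mulaw: Vec&amp;lt;u8&amp;gt;) {}
&lt;/code&gt;&lt;/pre&gt;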

&lt;p&gt;The "Zero Retention" Architecture (HIPAA)&lt;br&gt;
Since I own the metal, I could also solve the compliance issue at the kernel level.&lt;/p&gt;

&lt;p&gt;Healthcare clients require HIPAA compliance, which usually means expensive encrypted storage and audit logs. I decided to go the other way: Don't store anything.&lt;/p&gt;

&lt;p&gt;We configured the Linux kernel to run in a "volatile-only" mode:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Disable swap to prevent RAM from touching the disk
sudo swapoff -a
sudo sysctl vm.swappiness=0

# Mount logs to RAM
mount -t tmpfs -o size=512m tmpfs /var/log/voquii
&lt;/code&gt;&lt;/pre&gt;
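&lt;p&gt;Since the whole design assumes swap stays off, it is cheap to enforce that at process start. This is a defensive sketch, not from the production codebase: it just counts lines in /proc/swaps, where a header-only file means no active swap device.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;use std::fs;

// Refuse to start if any swap device is active, so call audio can
// never be paged out to disk behind our back.
fn assert_no_swap() {
    let swaps = fs::read_to_string("/proc/swaps").expect("cannot read /proc/swaps");
    // /proc/swaps always prints a header line; any extra line is a live swap.
    if swaps.lines().count() &amp;gt; 1 {
        panic!("swap is active; refusing to handle call audio");
    }
}

fn main() {
    assert_no_swap();
    // ... start the WebSocket server
}
&lt;/code&gt;&lt;/pre&gt;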

&lt;p&gt;By processing the entire call in RAM and flushing it immediately after the WebSocket closes, we achieve "Zero Data Retention."&lt;/p&gt;
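&lt;p&gt;One way to make that flush automatic, sketched here assuming the zeroize crate (the type and field names are illustrative, not our actual ones): tie each call's audio and transcript to a struct whose Drop impl wipes the memory when the socket handler returns.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;use zeroize::Zeroize;

// Everything we know about a call lives in this struct, in RAM only.
struct CallBuffer {
    audio: Vec&amp;lt;u8&amp;gt;, // raw mu-law frames
    transcript: String,    // rolling ASR output
}

impl Drop for CallBuffer {
    // Runs when the WebSocket handler returns: overwrite, then free.
    fn drop(&amp;amp;mut self) {
        self.audio.zeroize();
        self.transcript.zeroize();
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Because the wipe rides on Drop, there is no explicit cleanup path to forget, even when a call ends on an error.&lt;/p&gt;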

&lt;p&gt;No recordings on disk.&lt;/p&gt;

&lt;p&gt;No transcripts in DB.&lt;/p&gt;

&lt;p&gt;Result: I can sign BAAs for free because my liability surface area is effectively zero.&lt;/p&gt;

&lt;p&gt;The Results&lt;br&gt;
Time-to-Speech: ~375ms, consistently.&lt;/p&gt;

&lt;p&gt;Cost: Flat rate (GPU cost), no per-minute tokens.&lt;/p&gt;

&lt;p&gt;Experience: It feels like talking to a human on a slightly bad cell connection, rather than a robot.&lt;/p&gt;

&lt;p&gt;Try the Demo&lt;br&gt;
We just launched the beta on Product Hunt today to stress-test the cluster.&lt;/p&gt;

&lt;p&gt;If you want to experience the latency (or roast my Rust implementation), check it out here:&lt;/p&gt;

&lt;p&gt;👉 &lt;a href="https://www.producthunt.com/products/voquii?utm_source=other&amp;amp;utm_medium=social" rel="noopener noreferrer"&gt;https://www.producthunt.com/products/voquii?utm_source=other&amp;amp;utm_medium=social&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I’m hanging out in the comments all day answering questions about the bare-metal setup and how we handle the audio buffering!&lt;/p&gt;

</description>
      <category>performance</category>
      <category>ai</category>
      <category>rust</category>
      <category>showdev</category>
    </item>
  </channel>
</rss>
