<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Daniel Suhett</title>
    <description>The latest articles on DEV Community by Daniel Suhett (@danielsuhett).</description>
    <link>https://dev.to/danielsuhett</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3255473%2F76681152-558a-43bb-a983-b39d0c48b6bc.png</url>
      <title>DEV Community: Daniel Suhett</title>
      <link>https://dev.to/danielsuhett</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/danielsuhett"/>
    <language>en</language>
    <item>
      <title>Why UDP Beats TCP for Interactive Play</title>
      <dc:creator>Daniel Suhett</dc:creator>
      <pubDate>Sat, 21 Jun 2025 13:26:05 +0000</pubDate>
      <link>https://dev.to/danielsuhett/why-udp-beats-tcp-for-interactive-play-2a47</link>
      <guid>https://dev.to/danielsuhett/why-udp-beats-tcp-for-interactive-play-2a47</guid>
      <description>&lt;p&gt;Like many developers, I spend a lot of time at my desk, but I wanted more flexibility in where I could play my games. I began using Moonlight and Sunshine to stream my desktop gameplay to my TVs. When I first started, my games ran at 1080 p/60 fps, but I quickly learned how bitrate scales with resolution: a smooth 4K/120 fps stream can exceed 35 Mb/s, which may overwhelm a home network if it isn’t managed carefully. I’m trying to find the sweet spot between image quality and performance.&lt;/p&gt;

&lt;p&gt;This experience gave me a fresh perspective on why UDP matters and how it enables real-time, low-latency communication.&lt;/p&gt;




&lt;h2&gt;
  
  
  What UDP Is
&lt;/h2&gt;

&lt;p&gt;User Datagram Protocol (UDP) is a Layer-4 protocol that addresses individual processes on a host by using port numbers and provides a simple way to send and receive data.&lt;/p&gt;

&lt;h3&gt;
  
  
  Header
&lt;/h3&gt;

&lt;p&gt;The UDP header is only 8 bytes long: four 16-bit fields (source port, destination port, length, and checksum) that keep each datagram lightweight.&lt;/p&gt;

&lt;h3&gt;
  
  
  Communication
&lt;/h3&gt;

&lt;p&gt;UDP maintains no connection state on either host. There is no setup or teardown phase.&lt;/p&gt;

&lt;h3&gt;
  
  
  Multiplexing and Demultiplexing
&lt;/h3&gt;

&lt;p&gt;A sender can multiplex data from many applications into separate datagrams. The receiver then demultiplexes each datagram to the correct application by its destination port.&lt;/p&gt;

&lt;h3&gt;
  
  
  Low Latency
&lt;/h3&gt;

&lt;p&gt;Because UDP &lt;strong&gt;has no handshake, no ordering, and no retransmission&lt;/strong&gt;, it offers minimal latency between sender and receiver—though without delivery guarantees.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why Latency Matters
&lt;/h2&gt;

&lt;p&gt;In gaming, every frame is part of a live feedback loop. Dropped or late frames can ruin the user experience. For smooth 60 FPS gameplay, a new frame must arrive every 16.66 milliseconds; at 120 FPS, the budget drops to 8.33 milliseconds. Professional gamers react to visual stimuli in as little as 100–150 milliseconds, so every bit of latency counts.&lt;/p&gt;

&lt;p&gt;TCP’s congestion control mechanisms can introduce unacceptable delays for real-time gaming. UDP allows applications to prioritize timeliness over reliability, implement custom pacing algorithms, and use forward error correction or partial redundancy instead of full retransmissions.&lt;/p&gt;
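&lt;p&gt;Those frame budgets fall straight out of the refresh rate:&lt;/p&gt;

```javascript
// The per-frame deadline is simply 1000 ms divided by the frame rate.
const frameBudgetMs = (fps) => 1000 / fps;

console.log(frameBudgetMs(60).toFixed(2));  // prints 16.67
console.log(frameBudgetMs(120).toFixed(2)); // prints 8.33
```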

&lt;h2&gt;
  
  
  Making the Most of UDP
&lt;/h2&gt;

&lt;p&gt;To compensate for inevitable packet loss and jitter, client-side techniques like jitter buffers, frame interpolation, input prediction, and error concealment are used. These tricks help hide imperfections and keep the stream feeling responsive, even when the network isn’t perfect.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Technique&lt;/th&gt;
&lt;th&gt;Purpose&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Jitter Buffer&lt;/td&gt;
&lt;td&gt;Absorbs small arrival-time variations before decoding.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Frame Interpolation&lt;/td&gt;
&lt;td&gt;Reconstructs missing motion vectors, hiding dropped frames.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Input Prediction&lt;/td&gt;
&lt;td&gt;Locally simulates next state so lost control packets don’t freeze movement.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Error Concealment&lt;/td&gt;
&lt;td&gt;Uses the last good macro-blocks or spatial interpolation when part of a frame is corrupted.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
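&lt;p&gt;As a toy sketch of the first technique, here is a fixed-delay jitter buffer; the sequence numbers, timestamps, and 50 ms delay are invented for illustration (real players adapt the delay to measured jitter):&lt;/p&gt;

```javascript
// Fixed-delay jitter buffer: packets are held for `delayMs` and released
// in sequence order, absorbing small variations in arrival time.
class JitterBuffer {
  constructor(delayMs) {
    this.delayMs = delayMs;
    this.queue = new Map(); // seq -> { payload, arrivedAt }
    this.nextSeq = 0;
  }
  push(seq, payload, arrivedAt) {
    this.queue.set(seq, { payload, arrivedAt });
  }
  // Called by the decoder clock; returns packets ready to play, in order.
  pop(now) {
    const out = [];
    while (this.queue.has(this.nextSeq)) {
      const pkt = this.queue.get(this.nextSeq);
      if (pkt.arrivedAt + this.delayMs > now) break; // still buffering
      out.push(pkt.payload);
      this.queue.delete(this.nextSeq);
      this.nextSeq += 1;
    }
    return out;
  }
}

const jb = new JitterBuffer(50);   // hold each packet 50 ms before playout
jb.push(1, 'frame-B', 10);         // seq 1 arrives first (out of order)
jb.push(0, 'frame-A', 0);          // seq 0 arrives late
const playable = jb.pop(60);       // ['frame-A', 'frame-B'], back in order
```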

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;UDP Pros&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Simple and lightweight: Small headers, less bandwidth, and stateless operation.&lt;/li&gt;
&lt;li&gt;Low latency: Perfect for real-time applications like gaming and streaming.&lt;/li&gt;
&lt;li&gt;Fast: No handshake means data gets moving right away.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;UDP Cons&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No guarantees: No acknowledgments, delivery guarantees, or flow control.&lt;/li&gt;
&lt;li&gt;Connection-less: Anyone can send data, which can be a security risk (hello, DNS floods!).&lt;/li&gt;
&lt;li&gt;No order or congestion control: Packets might arrive out of order or get lost in the shuffle.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;UDP isn’t “worse TCP”; it’s a specialized tool designed for speed, not reliability. For modern game streaming, that’s exactly what we need to keep our actions and the on-screen world in sync, with sub-100 ms glass-to-glass latency.&lt;/p&gt;

</description>
      <category>networking</category>
      <category>udp</category>
      <category>streaming</category>
    </item>
    <item>
      <title>The Hidden Bias in AI Code Generation</title>
      <dc:creator>Daniel Suhett</dc:creator>
      <pubDate>Fri, 13 Jun 2025 14:38:19 +0000</pubDate>
      <link>https://dev.to/danielsuhett/the-hidden-bias-in-ai-code-generation-2688</link>
      <guid>https://dev.to/danielsuhett/the-hidden-bias-in-ai-code-generation-2688</guid>
      <description>&lt;p&gt;When we interact with developer communities, it’s striking how differently people experience AI. As with everything in life, it ultimately comes down to training: you soon learn about prompts, context, models, and so on. Two factors, however, shape that first impression more than anything else: &lt;strong&gt;the language you write in and the codebase you point the model at.&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;This is about using foundation models for code. I’ll touch on benchmarks, but it’s not a deep dive into machine learning.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Because code generation is essentially “predict the next token,” natural and programming languages with sparse training data fare worse. Outcomes deteriorate further when the prompt language and the target language mismatch: for instance, generating F# from a Chinese prompt performs far worse than generating Java from the same prompt, which itself trails Java generated from an English prompt [1].&lt;/p&gt;

&lt;p&gt;The fastest route to an AI-ready repo isn’t buying “the best” model or tool. It’s understanding &lt;strong&gt;how&lt;/strong&gt; the model will interact with your resources.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwh2odks116ii0yd6cmqn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwh2odks116ii0yd6cmqn.png" alt="https://openai.com/index/openai-codex/" width="800" height="267"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Training Data Dominance Effect
&lt;/h2&gt;

&lt;p&gt;Large language models have favorites. Their performance mirrors their training, which is heavily skewed toward a handful of paradigms, languages, and patterns mined from public repositories, predominantly code that is:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Written in English&lt;/li&gt;
&lt;li&gt;Imperative in style&lt;/li&gt;
&lt;li&gt;Object-oriented&lt;/li&gt;
&lt;li&gt;Using Python, JavaScript, or Java&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbiyr0rwoz431le8mrkvm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbiyr0rwoz431le8mrkvm.png" alt="SWE-PolyBench: A multi-language benchmark for repository-level evaluation of coding agents" width="800" height="418"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;SWE-PolyBench: A multi-language benchmark for repository-level evaluation of coding agents [6]&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Step outside and the model penalizes you. Studies show:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Switching the &lt;strong&gt;programming&lt;/strong&gt; language can cut performance by &lt;strong&gt;over 20%&lt;/strong&gt; under the same prompt.&lt;/li&gt;
&lt;li&gt;Switching the &lt;strong&gt;human&lt;/strong&gt; language (rewriting the prompt in Chinese instead of English) cuts performance by at least &lt;strong&gt;13%&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkyyv8yory806d8q0hyx8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkyyv8yory806d8q0hyx8.png" alt="Exploring Multi-Lingual Bias of Large Code Models in Code Generation" width="800" height="330"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Exploring Multi-Lingual Bias of Large Code Models in Code Generation [1]&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  The Benefits of Following the Crowd
&lt;/h2&gt;

&lt;p&gt;This bias extends to everyday frameworks and libraries. Agents perform better with mainstream tooling simply because they encounter it more often [1]. A model can emit sharp code for &lt;strong&gt;React, Express, or Django&lt;/strong&gt;. Ask for &lt;strong&gt;Ramda, Preact, or F#&lt;/strong&gt;, and the output, while syntactically correct, lags far behind a skilled human’s effort [2, 3].&lt;/p&gt;

&lt;h2&gt;
  
  
  Clean Code Isn’t Just for Humans
&lt;/h2&gt;

&lt;p&gt;The quality of the &lt;strong&gt;code you already have&lt;/strong&gt; directly affects AI output. Clear naming and solid docs hand the agent richer context every time it scans your repo.&lt;/p&gt;

&lt;h3&gt;
  
  
  Compare two function declarations
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nf"&gt;calculateTotalPrice&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;items&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;taxRate&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;calc&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;arr&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;x&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The first, with explicit intent and nouns, is far easier for a model to reason about.&lt;/p&gt;

&lt;h3&gt;
  
  
  Documentation and comments
&lt;/h3&gt;

&lt;p&gt;LLMs love comment blocks, and it’s no accident: they learned that well-commented code signals quality [4]. I’m no fan of comment clutter either, yet the data are clear: docstrings and meaningful names consistently raise output quality.&lt;/p&gt;
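&lt;p&gt;To make that concrete, here is the earlier &lt;code&gt;calculateTotalPrice&lt;/code&gt; with a JSDoc block; the body and type annotations are my own illustration, not from any particular codebase:&lt;/p&gt;

```javascript
/**
 * Computes the total price of a cart, tax included.
 * @param {{price: number, quantity: number}[]} items - cart line items
 * @param {number} taxRate - e.g. 0.5 for a 50% tax rate
 * @returns {number} subtotal plus tax
 */
function calculateTotalPrice(items, taxRate) {
  const subtotal = items.reduce(
    (sum, item) => sum + item.price * item.quantity,
    0,
  );
  return subtotal * (1 + taxRate);
}
```

&lt;p&gt;The types, units, and intent are now explicit context for the model every time the file is scanned.&lt;/p&gt;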

&lt;h2&gt;
  
  
  Reinforcement of Stereotypes
&lt;/h2&gt;

&lt;p&gt;Uneven training data introduces flavor-of-language “stereotypes.” A model saturated with JavaScript inevitably excels at UI logic; one steeped in Python gravitates to data-science snippets. Typical niches include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;JavaScript:&lt;/strong&gt; Front-end logic, UI components, Node.js tooling&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Python:&lt;/strong&gt; Data science, scripting, ML notebooks&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Java:&lt;/strong&gt; Enterprise-scale, strongly structured systems&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Leveling the Field with RAG
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Retrieval-Augmented Generation (RAG)&lt;/strong&gt; feeds fresh, domain-specific context at query time. For code, that means injecting docs, examples, and best practices &lt;strong&gt;you&lt;/strong&gt; choose regardless of mainstream popularity. Research shows RAG can boost success rates by &lt;strong&gt;13.5%&lt;/strong&gt;, especially for frameworks released after a model’s training cutoff [2].&lt;/p&gt;

&lt;p&gt;RAG isn’t a magic wand. Context windows are finite and expensive; feeding more context shifts the bottleneck from &lt;em&gt;technical&lt;/em&gt; to &lt;em&gt;financial&lt;/em&gt;.&lt;/p&gt;
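&lt;p&gt;A toy sketch of the retrieval step, with an invented two-snippet corpus and naive keyword scoring (real RAG pipelines use embedding search, but the shape is the same):&lt;/p&gt;

```javascript
// Retrieve the most relevant doc snippets for a query, then prepend
// them to the prompt. Corpus entries are made up for the demo.
const docs = [
  { id: 'ramda-map', text: 'R.map applies a function to each element of a functor' },
  { id: 'ramda-pipe', text: 'R.pipe performs left-to-right function composition' },
];

function retrieve(query, corpus, k) {
  const terms = query.toLowerCase().split(/\s+/);
  return corpus
    .map((doc) => ({
      doc,
      score: terms.filter((t) => doc.text.toLowerCase().includes(t)).length,
    }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map((entry) => entry.doc);
}

function buildPrompt(query, corpus) {
  const context = retrieve(query, corpus, 1)
    .map((doc) => `// ${doc.id}: ${doc.text}`)
    .join('\n');
  return `${context}\nTask: ${query}`;
}
```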

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Model performance varies more than most people discuss, and benchmarking remains contentious. If you’re having a rough ride, remember:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Different models have different language exposure: try another one.&lt;/li&gt;
&lt;li&gt;Stay close to market-standard patterns.&lt;/li&gt;
&lt;li&gt;Keep your repo clear and well documented so your AI agent can shine.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I’d love to hear your thoughts; drop any studies or experiences in the comments!&lt;/p&gt;




&lt;h3&gt;
  
  
  References
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Exploring Multi-Lingual Bias of Large Code Models in Code Generation&lt;/strong&gt; — &lt;a href="https://arxiv.org/abs/2404.19368" rel="noopener noreferrer"&gt;https://arxiv.org/abs/2404.19368&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CodeRAG-Bench: Can Retrieval Augment Code Generation?&lt;/strong&gt; — &lt;a href="https://arxiv.org/abs/2406.14497" rel="noopener noreferrer"&gt;https://arxiv.org/abs/2406.14497&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Investigating the Performance of Language Models for Completing Code in Functional Programming Languages: A Haskell Case Study&lt;/strong&gt; — &lt;a href="https://dl.acm.org/doi/pdf/10.1145/3650105.3652289" rel="noopener noreferrer"&gt;https://dl.acm.org/doi/pdf/10.1145/3650105.3652289&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Impact of AI-Generated Code Tools on Software Readability &amp;amp; Quality&lt;/strong&gt; — &lt;a href="https://arxiv.org/abs/2402.13280" rel="noopener noreferrer"&gt;https://arxiv.org/abs/2402.13280&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;MultiPL-E: A Scalable and Extensible Approach to Benchmarking Neural Code Generation&lt;/strong&gt; — &lt;a href="https://arxiv.org/pdf/2208.08227" rel="noopener noreferrer"&gt;https://arxiv.org/pdf/2208.08227&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SWE-PolyBench: A Multi-Language Benchmark for Repository-Level Evaluation of Coding Agents&lt;/strong&gt; — &lt;a href="https://arxiv.org/pdf/2504.08703" rel="noopener noreferrer"&gt;https://arxiv.org/pdf/2504.08703&lt;/a&gt;
&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>rag</category>
      <category>ai</category>
      <category>coding</category>
      <category>webdev</category>
    </item>
    <item>
      <title>How Google Reinvented TCP for Faster Video Streaming</title>
      <dc:creator>Daniel Suhett</dc:creator>
      <pubDate>Tue, 10 Jun 2025 12:50:52 +0000</pubDate>
      <link>https://dev.to/danielsuhett/how-google-reinvented-tcp-for-faster-video-streaming-1815</link>
      <guid>https://dev.to/danielsuhett/how-google-reinvented-tcp-for-faster-video-streaming-1815</guid>
      <description>&lt;h3&gt;
  
  
  Discover how Google replaced a 30-year-old internet rule to fix video buffering and change streaming forever with a congestion control algorithm called BBR.
&lt;/h3&gt;




&lt;p&gt;&lt;strong&gt;The Story: My Starting Point&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The other day, I was completely absorbed in a live stream, watching a gaming tournament unfold. Suddenly, the crystal-clear 1080p feed I was enjoying turned into a blurry, stuttering mess. My internet connection felt fine, and I knew I had plenty of bandwidth. "What gives?" I wondered. Why does this happen so often with livestreams? My curiosity was piqued, and I felt compelled to dig into how these things actually work.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Core Problem&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For decades, the internet has relied on a simple rule for managing traffic, embedded in a protocol called &lt;strong&gt;TCP (Transmission Control Protocol)&lt;/strong&gt;. The core problem, as I discovered, is its fundamental assumption: &lt;strong&gt;packet loss always means the network is congested&lt;/strong&gt;. When a server sends you data and a piece (a "packet") gets lost along the way, TCP assumes the pipes are full and immediately slows everything down to avoid making it worse.&lt;/p&gt;

&lt;p&gt;This worked fine in the '80s when networks were simple and slow. But today? It's a disaster for streaming. Our modern networks have huge memory buffers in their routers, which can lead to a phenomenon called &lt;strong&gt;bufferbloat&lt;/strong&gt;. Your data gets stuck in a massive queue, and even though there's no real congestion, the &lt;strong&gt;latency&lt;/strong&gt; skyrockets. On top of that, a flaky mobile connection can drop packets for dozens of reasons—none of which are actual congestion. Yet, traditional TCP sees that loss, panics, and throttles your video stream from crystal-clear HD to a pixelated mess. This old rule is the culprit behind a lot of frustrating livestreaming experiences.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Exploring the Solutions&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Google's engineers recognized this outdated assumption was killing YouTube's performance. So, they decided to challenge it and created a revolutionary new congestion control algorithm called &lt;strong&gt;BBR (Bottleneck Bandwidth and Round-trip time)&lt;/strong&gt;. Instead of reacting blindly to packet loss, BBR proactively models the network to find the perfect sending rate for real-time video.&lt;/p&gt;




&lt;h3&gt;
  
  
  Step 1: Measure, Don't Guess
&lt;/h3&gt;

&lt;p&gt;BBR operates on a beautifully simple principle: it constantly measures two key things to understand the network's true capacity, rather than just guessing based on packet loss.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;BtlBw (Bottleneck Bandwidth):&lt;/strong&gt; This is the maximum speed the connection can &lt;em&gt;actually&lt;/em&gt; handle right now. Think of it as the true width of the pipe.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;RTprop (Round-trip Propagation Time):&lt;/strong&gt; This measures the absolute minimum time it takes for a packet to travel from the server to you and back. It's the inherent delay of the journey itself.&lt;/li&gt;
&lt;/ul&gt;
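&lt;p&gt;In practice BBR maintains these as windowed filters over per-packet samples: a running maximum for BtlBw and a running minimum for RTprop. A toy sketch with invented samples (real BBR uses roughly the last 10 round trips for BtlBw and the last 10 seconds for RTprop):&lt;/p&gt;

```javascript
// BtlBw: maximum delivery rate seen over a recent window of samples.
// RTprop: minimum round-trip time seen over a recent window of samples.
function windowedMax(samples, windowSize) {
  return Math.max(...samples.slice(-windowSize));
}
function windowedMin(samples, windowSize) {
  return Math.min(...samples.slice(-windowSize));
}

const deliveryRates = [10, 12, 11, 9]; // Mb/s, per-ACK samples (invented)
const rttSamples = [42, 40, 45, 41];   // ms, per-ACK samples (invented)
const btlBw = windowedMax(deliveryRates, 4);  // 12 Mb/s
const rtProp = windowedMin(rttSamples, 4);    // 40 ms
```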

&lt;p&gt;By knowing these two values, BBR can precisely calculate the “Goldilocks” amount of data to send—just enough to keep the pipe full without overfilling it and creating a queue (bufferbloat).&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// A simplified look at BBR's core idea:&lt;/span&gt;
&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;calculateOptimalDataInFlight&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;bottleneckBandwidth&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;roundTripTime&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;optimalData&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;bottleneckBandwidth&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;roundTripTime&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;optimalData&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;// BBR knows the max speed (BtlBw) and the travel time (RTprop).&lt;/span&gt;
&lt;span class="c1"&gt;// It calculates the exact amount of data needed to fill the "pipe."&lt;/span&gt;
&lt;span class="c1"&gt;// Anything more than this would just create a queue (bufferbloat)&lt;/span&gt;
&lt;span class="c1"&gt;// and increase latency.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This model-based approach means BBR doesn't have to wait for packet loss to happen. It &lt;em&gt;knows&lt;/em&gt; the network's capacity and adapts in real-time, preventing those annoying quality drops before they even occur.&lt;/p&gt;




&lt;h3&gt;
  
  
  Step 2: Probing for More Speed
&lt;/h3&gt;

&lt;p&gt;BBR doesn't just find a good rate and stick with it; it's always gently probing to see if it can go faster. It cycles through a few states, but the most important is &lt;code&gt;ProbeBW&lt;/code&gt;, where it systematically sends a little more data to see if the network's capacity has increased.&lt;/p&gt;

&lt;p&gt;This is fundamentally different from older algorithms like &lt;strong&gt;CUBIC&lt;/strong&gt;, which would aggressively fill the buffer until packets started dropping. BBR uses a technique called &lt;strong&gt;pacing&lt;/strong&gt; to send data out in a smooth, consistent stream instead of clumpy bursts. The result? When YouTube switched to BBR, they saw throughput increase by up to &lt;strong&gt;25x&lt;/strong&gt; on some routes and cut latency by &lt;strong&gt;53%&lt;/strong&gt; globally!&lt;/p&gt;
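&lt;p&gt;The pacing idea can be sketched in a few lines; the packet size and rate here are invented for the example:&lt;/p&gt;

```javascript
// Pacing spreads sends evenly at the target rate instead of bursting a
// whole congestion window at once.
function paceSchedule(packetSizeBytes, numPackets, pacingRateBytesPerSec) {
  // Seconds between successive packet departures at the target rate.
  const interval = packetSizeBytes / pacingRateBytesPerSec;
  return Array.from({ length: numPackets }, (_, i) => i * interval);
}

// 1500-byte packets at 1.5 MB/s: one departure every millisecond.
const departures = paceSchedule(1500, 3, 1500000);
```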




&lt;h3&gt;
  
  
  Downsides and Evolution of BBR
&lt;/h3&gt;

&lt;p&gt;While BBR offers significant performance improvements for streaming, it's not a silver bullet. I learned that early versions (BBRv1) had some drawbacks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Higher Retransmissions:&lt;/strong&gt; BBR can sometimes cause more packet retransmissions than traditional algorithms, especially in networks with shallow buffers. This happens because it maintains more data in the network to keep the pipe full.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fairness Issues:&lt;/strong&gt; In certain shared network environments, BBRv1 could be unfair to other TCP flows using older, loss-based congestion control algorithms, sometimes grabbing a disproportionate share of bandwidth.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CPU Usage:&lt;/strong&gt; BBR's continuous measurement and probing phases can be more CPU-intensive than simpler algorithms, which is a consideration for high-speed server environments.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Google addressed many of these issues with &lt;strong&gt;BBRv2&lt;/strong&gt;. This updated version improves fairness with other TCP flows, supports &lt;strong&gt;ECN (Explicit Congestion Notification)&lt;/strong&gt; signals to react better to network congestion, and enhances stability on challenging networks like Wi-Fi.&lt;/p&gt;




&lt;h3&gt;
  
  
  Where BBR Shines: Use Cases
&lt;/h3&gt;

&lt;p&gt;BBR truly shines in scenarios where traditional congestion control struggles, which often includes exactly what we care about for livestreams:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Video Streaming and Real-Time Media:&lt;/strong&gt; It dramatically reduces latency and bufferbloat, leading to better video quality and fewer buffering events.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Mobile and Wireless Networks:&lt;/strong&gt; BBR can better handle the variable packet loss and fluctuating conditions common in cellular and Wi-Fi networks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Long-Haul and Satellite Links:&lt;/strong&gt; In these scenarios, packet loss is often &lt;em&gt;not&lt;/em&gt; due to congestion, so BBR's model-based approach avoids unnecessary throttling.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Final Thoughts: The Takeaway&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The journey of BBR is a powerful lesson: always &lt;strong&gt;challenge the underlying assumptions&lt;/strong&gt; of your technology stack. For 30 years, "packet loss equals congestion" was an unquestioned truth in networking. By creating a system based on live measurement rather than a fixed rule, Google's engineers solved a problem that no amount of front-end optimization could ever fix.&lt;/p&gt;

&lt;p&gt;It reminds me that the principles of performance—whether it's in networking, databases, or even how our UI renders—are often universal. Building systems that measure and adapt to real-world conditions will almost always outperform systems that operate on outdated heuristics. This fundamental shift is now baked into next-generation protocols like &lt;strong&gt;QUIC&lt;/strong&gt;, which will power the future of the web, making our livestreams smoother and more reliable.&lt;/p&gt;




&lt;h3&gt;
  
  
  References
&lt;/h3&gt;

&lt;p&gt;[1] &lt;a href="https://cloud.google.com/blog/products/networking/tcp-bbr-congestion-control-comes-to-gcp-your-internet-just-got-faster" rel="noopener noreferrer"&gt;https://cloud.google.com/blog/products/networking/tcp-bbr-congestion-control-comes-to-gcp-your-internet-just-got-faster&lt;/a&gt;&lt;br&gt;
[2] &lt;a href="https://blog.apnic.net/2020/01/10/when-to-use-and-not-use-bbr/" rel="noopener noreferrer"&gt;https://blog.apnic.net/2020/01/10/when-to-use-and-not-use-bbr/&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.reddit.com/r/firewalla/comments/12i9l97/any_downsides_to_enabling_bbr_tcp_congestion/" rel="noopener noreferrer"&gt;https://www.reddit.com/r/firewalla/comments/12i9l97/any_downsides_to_enabling_bbr_tcp_congestion/&lt;/a&gt;&lt;br&gt;
[3] &lt;a href="https://www3.cs.stonybrook.edu/%7Earunab/papers/imc19_bbr.pdf" rel="noopener noreferrer"&gt;https://www3.cs.stonybrook.edu/~arunab/papers/imc19_bbr.pdf&lt;/a&gt;&lt;br&gt;
[4] &lt;a href="https://www.thousandeyes.com/blog/path-quality-brr-future-congestion-avoidance" rel="noopener noreferrer"&gt;https://www.thousandeyes.com/blog/path-quality-brr-future-congestion-avoidance&lt;/a&gt;&lt;br&gt;
[5] &lt;a href="https://news.ycombinator.com/item?id=14298576" rel="noopener noreferrer"&gt;https://news.ycombinator.com/item?id=14298576&lt;/a&gt;&lt;br&gt;
[6] &lt;a href="https://dl.acm.org/doi/10.1145/3355369.3355579" rel="noopener noreferrer"&gt;https://dl.acm.org/doi/10.1145/3355369.3355579&lt;/a&gt;&lt;/p&gt;

</description>
      <category>networking</category>
      <category>tcp</category>
      <category>webdev</category>
      <category>streaming</category>
    </item>
  </channel>
</rss>
