Have you ever been in a crowded coffee shop, trying to load a website while someone else is on a video call? Your web page loads slowly, stutteringly, while their video seems to flow just fine. This isn't just bad luck—it's a fundamental characteristic of how different types of internet traffic behave. Setting all filters and priorities aside, what happens when TCP and UDP go head-to-head in a battle for bandwidth on a shared wireless link?
The Protagonists: TCP vs. UDP
Before we dive in, here's a quick introduction to the two protocols:
TCP (Transmission Control Protocol) : The careful, reliable courier of the internet, and the most influential protocol at the transport layer. Before any data is sent, TCP establishes a connection between two network endpoints via a three-way handshake. It's used for web browsing, email, and file downloads. TCP guarantees that every packet arrives, in order. If a packet gets lost, it stops, waits for an acknowledgment, and resends it. It's polite and plays well with others, constantly adjusting its sending rate to avoid congesting the network (a process called congestion control).
UDP (User Datagram Protocol) : The relentless, high-speed courier. Instead of TCP's handshake, UDP sends small, independent packets known as datagrams without establishing a connection first. It's used for live video streaming, online gaming, and VoIP calls. UDP fires packets into the network as fast as it can, with no regard for whether they arrive or in what order. There are no acknowledgments, no retries. It's a "fire-and-forget" protocol that prioritizes speed over reliability.
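This difference shows up right at the socket API. Below is a minimal Python sketch (standard `socket` module only) illustrating it: the UDP send succeeds even with nobody listening, while TCP refuses to proceed without a completed handshake. The loopback address and port 9 are arbitrary choices for illustration, assumed to have no listener.

```python
import socket

# UDP: fire-and-forget. sendto() succeeds even though nothing is
# listening on 127.0.0.1:9 -- no handshake, no acknowledgment, no error.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sent = udp.sendto(b"datagram away!", ("127.0.0.1", 9))
print(f"UDP happily sent {sent} bytes into the void")
udp.close()

# TCP: no data can flow until the three-way handshake completes.
# connect() to a closed port fails: there is no peer to answer the SYN.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    tcp.connect(("127.0.0.1", 9))
except ConnectionRefusedError:
    print("TCP refused: no listener completed the handshake")
finally:
    tcp.close()
```

UDP reported success without any evidence of delivery; TCP would not even pretend to send until a peer agreed to the connection.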
To be more precise, TCP and UDP are transport protocols that run on top of the Internet Protocol (IP). If IP is the road, then TCP is a careful driver who follows the rules and checks their mirrors, while UDP is a speedster who weaves through traffic with no brakes.
The Foundation: It All Runs on IP
Before we see them in action, it's crucial to understand that both TCP and UDP are not standalone entities. They are transport layer protocols, and they both rely on a common foundation: the Internet Protocol (IP).
IP (The Postal Network): IP is the fundamental, low-level protocol responsible for addressing and routing packets across the internet. It defines how to get a packet from one computer to another. However, IP is "best-effort" and unreliable. It will try its best to deliver your packet, but if a router is overloaded, the packet might be silently dropped with no notification. It doesn't care about the order of packets or their content.
TCP and UDP (The Courier Services): This is where our two protagonists come in. They operate on top of IP, acting as different types of courier services that use the postal network (IP) to deliver their payloads (your data).
In the context of our simulation, this means both the TCP data segments and UDP datagrams are packaged into IP packets. They travel the same physical network path from the senders to the Access Point and on to the receiver. The AP, acting as a router, makes forwarding decisions based on the IP headers, largely unaware of whether the packet contains a TCP segment pleading for reliability or a UDP datagram rushing through. This shared foundation is what makes their competition so direct and inevitable. They are not on different tracks; they are different types of vehicles fighting for space on the exact same road.
This experiment models coexistence of TCP and UDP traffic over a shared wireless medium to study how transport-layer protocols interact under contention. The goal is to analyze performance metrics such as throughput, latency, jitter, packet loss, and fairness when a best-effort TCP flow and a rate-controlled UDP flow compete for a single bottleneck: a Wi-Fi access point.
==== SIMULATION PARAMETERS ====
Simulation Duration: 25 seconds
TCP Congestion Control Algorithms:
- TcpNewReno
- TcpCubic
- TcpBbr
Bottleneck Bandwidth: 10 Mbps
Bottleneck Delay: 10 ms
Buffer Size: 1000 packets
UDP Rate: 6 Mbps
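A quick back-of-the-envelope check on these parameters hints at what follows. Assuming 1500-byte packets and an RTT of twice the one-way bottleneck delay (neither is stated explicitly above), the 1000-packet buffer is vastly larger than the bandwidth-delay product, a classic recipe for bufferbloat:

```python
# Sanity-check the simulation parameters.
# Assumptions (not stated in the scenario): 1500-byte packets,
# RTT = 2 x one-way bottleneck delay.
LINK_BPS = 10e6          # 10 Mbps bottleneck
RTT_S = 2 * 0.010        # 2 x 10 ms
PKT_BYTES = 1500
BUFFER_PKTS = 1000

bdp_bytes = LINK_BPS * RTT_S / 8                  # bandwidth-delay product
bdp_pkts = bdp_bytes / PKT_BYTES
drain_s = BUFFER_PKTS * PKT_BYTES * 8 / LINK_BPS  # time to empty a full queue

print(f"BDP: {bdp_bytes:.0f} bytes (~{bdp_pkts:.0f} packets)")   # 25000 bytes, ~17 packets
print(f"Buffer is ~{BUFFER_PKTS / bdp_pkts:.0f}x the BDP")       # ~60x
print(f"A full buffer adds {drain_s * 1000:.0f} ms of delay")    # 1200 ms
```

A buffer sixty times the BDP means a loss-based TCP can inject over a second of queueing delay before it ever sees the packet loss it uses as a congestion signal.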
Output and Comparison
How Did TCP Fare Head-On Against UDP?
- NewReno & Cubic (Loss-based TCP): They competed head-on with UDP by filling buffers and waiting for packet loss to signal congestion. This led to queue buildup, hurting latency-sensitive UDP traffic. TCP achieved decent throughput but at the cost of delay and packet loss for both flows.
- BBR (Model-based TCP): Did not rely on buffer filling to probe bandwidth. Maintained low queue occupancy, allowing UDP to get a clean, low-latency path. Result: No packet loss, much lower delay, and still near-maximal throughput — making it much friendlier for real-time UDP.
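The difference between the two philosophies can be caricatured in a few lines of Python. This is a deliberately toy model, not the real algorithms (NewReno, CUBIC, and BBR are all far more involved): loss-based control grows until the queue overflows and then halves, while BBR-style control paces at an estimate of the bottleneck bandwidth so the queue never needs to fill.

```python
def loss_based_update(cwnd_pkts: float, loss_seen: bool) -> float:
    """NewReno-style AIMD caricature: grow ~1 packet per RTT, halve on loss.
    The only congestion signal is a lost packet, which in a deep buffer
    only arrives after the queue is already full."""
    return cwnd_pkts / 2 if loss_seen else cwnd_pkts + 1

def bbr_style_pacing_rate(bw_est_bps: float, gain: float = 1.0) -> float:
    """BBR-style caricature: send at (gain x estimated bottleneck bandwidth),
    so the queue stays near-empty instead of being used as a signal."""
    return bw_est_bps * gain

print(loss_based_update(100, loss_seen=True))  # 50.0: halved after a loss
print(bbr_style_pacing_rate(4e6))              # 4000000.0: pace at the estimate
```

The key structural point: the loss-based controller needs the buffer to overflow to learn anything, while the model-based controller regulates itself from measurements and leaves the buffer, and the latency-sensitive UDP flow sharing it, alone.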
Let's analyze the TCP BBR vs. UDP run graphically, based on the simulation output:
- The graph shows that UDP throughput (red) quickly ramps up to ~6 Mbps and stays stable, indicating that UDP gets its share of the bandwidth almost immediately.
- TCP throughput (blue) ramps up more slowly (classic congestion window growth), then stabilizes around ~3.8–4.0 Mbps, meaning TCP backs off and settles into the capacity left over after UDP's fixed 6 Mbps share.
- In NewReno/Cubic, this stable point was reached after filling buffers, causing higher queueing delays (600+ ms).
- In BBR, the same balance was achieved without filling the buffer, resulting in much lower delay (~54 ms) and zero packet loss.
- The throughput split (~60% UDP / ~40% TCP) shows fair resource sharing: neither flow starved, and total link utilization was effectively 100% (short-term samples can even read slightly above the link rate as queued packets drain).
- This is good for real-world deployments where multiple traffic classes (bulk transfers + real-time) share the same bottleneck.
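One way to put a number on "fair resource sharing" is Jain's fairness index, a standard metric where 1.0 means perfectly equal shares. A quick sketch applied to the throughput split reported above:

```python
def jain_fairness(throughputs):
    """Jain's fairness index: (sum x)^2 / (n * sum x^2); 1.0 = equal shares."""
    n = len(throughputs)
    return sum(throughputs) ** 2 / (n * sum(x * x for x in throughputs))

flows_mbps = [6.0, 4.0]   # UDP and TCP shares from the BBR run above
link_mbps = 10.0
print(f"Fairness index: {jain_fairness(flows_mbps):.3f}")  # 0.962
print(f"Utilization: {sum(flows_mbps) / link_mbps:.0%}")   # 100%
```

An index of ~0.96 confirms that neither flow was starved, even though the split is not exactly 50/50.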
The results clearly show that BBR provides a much healthier coexistence between TCP and UDP by avoiding bufferbloat and keeping latency low. For networks carrying IPTV, VoIP, and bulk TCP traffic together, adopting BBR or active queue management (AQM) is a practical step toward ensuring fairness, efficiency, and end-user satisfaction.
Source Code : GitHub link
Top comments (2)
Now imagine open-air packet radio (such as SDR), where you're in a single collision domain with the entire universe ;).
CHAOS!