DEV Community

Alan Chen

Why buffer size matters in networks

Buffering is a mechanism computer networks use to manage the rate at which packets travel through the network. It is essential because a link can only hold a limited amount of data at a time and can only transfer data at a limited speed. If more data is pushed into a link than it can handle, performance degrades: packets are delivered more slowly, or fail to reach their destination at all (packet loss). Buffers are therefore used to absorb bursts and avoid network congestion. When packets from different sources want to enter a link, they first enter the buffer and wait there until the link is available to transfer them. There are many ways to decide which packet in the buffer enters the link first; one common policy is FCFS (first come, first served).
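To make this concrete, here is a minimal sketch of a fixed-capacity FCFS buffer with drop-tail behavior (the simplest policy: when the buffer is full, new arrivals are simply dropped). The class name, capacity, and packet labels are illustrative, not from the experiment:

```python
from collections import deque

class DropTailBuffer:
    """FCFS (first come, first served) buffer with a fixed capacity.

    Arrivals beyond the capacity are dropped ("drop-tail"), the
    simplest queue-management policy found in routers.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = deque()
        self.dropped = 0

    def enqueue(self, packet):
        if len(self.queue) >= self.capacity:
            self.dropped += 1      # buffer full: packet is lost
            return False
        self.queue.append(packet)  # packet waits for the link to free up
        return True

    def dequeue(self):
        # The link serves the oldest waiting packet first (FCFS).
        return self.queue.popleft() if self.queue else None

buf = DropTailBuffer(capacity=3)
for pkt in ["p1", "p2", "p3", "p4"]:
    buf.enqueue(pkt)

print(buf.dropped)    # 1 -- "p4" was dropped, the buffer held only 3 packets
print(buf.dequeue())  # p1 -- the first arrival leaves first
```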

Bufferbloat occurs when an excessive number of packets accumulate in the buffer. Because of the large backlog, each packet waits longer and the buffer has to manage a higher load. Traffic then tends to be released in bursts rather than a steady stream, producing an unstable flow with high latency that is prone to packet loss.
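The latency cost of a backlog is easy to estimate: every byte already queued must be serialized onto the link before a new arrival gets its turn. A quick back-of-the-envelope calculation (the 1 Mbit/s link rate and 1500-byte packets are assumed for illustration, not taken from the experiment):

```python
# Queueing delay seen by a packet that arrives behind a backlog:
# everything ahead of it must be serialized onto the link first.
LINK_RATE = 1_000_000   # assumed link speed: 1 Mbit/s
PACKET_BITS = 1500 * 8  # assumed 1500-byte packets

def queueing_delay(packets_ahead, link_rate=LINK_RATE):
    """Seconds a new arrival waits behind `packets_ahead` queued packets."""
    return packets_ahead * PACKET_BITS / link_rate

for backlog in (5, 20, 100):
    print(f"{backlog:>3} packets ahead -> {queueing_delay(backlog) * 1000:.0f} ms")
```

On these assumed numbers, a 100-packet backlog adds over a second of delay before loss even enters the picture, which is exactly the extra latency bufferbloat introduces.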

Increased latency and bursty flows create a poor experience for any user who needs a steady network stream. Bufferbloat can cause choppy voice calls, stuttering video streams, and delayed or dropped animations in online video games.

To find out whether buffer size can mitigate the bufferbloat problem, a virtual network simulation was constructed using Mininet. The topology consists of two end hosts connected by a router. The two hosts communicate with one another in three ways:

  • a long-lived TCP connection
  • one host pinging the other host repeatedly
  • one host downloading a webpage at regular intervals from the other host, which acts as a web server

These events happen simultaneously over a 60-second timeframe, and the simulation was run three times with router buffer sizes of 5, 20, and 100 packets. Here are the average webpage download times in the simulation:

Buffer size: 5

  • Average: 1.557167
  • Standard Deviation: 1.481145

Buffer size: 20

  • Average: 0.909333
  • Standard Deviation: 0.459083

Buffer size: 100

  • Average: 1.255200
  • Standard Deviation: 0.705861

Among the three queue sizes, queue size 5 has the highest average and standard deviation. Too small a queue fills up easily, so packets are dropped when it reaches capacity, or the sender has to delay sending them. For TCP connections, a dropped packet forces the sender to retransmit it, which adds further congestion. It may seem surprising that the standard deviation is so high despite the small buffer. The reason is that some downloads hit a full buffer and need extra time for retransmissions, while downloads whose packets do make it into the buffer wait there only briefly (precisely because the buffer is small) and complete very quickly.

For queue size 100, download times improve compared to queue size 5; however, the average and standard deviation are still quite high. Although packets are less likely to be dropped with a larger buffer, they can sit in it for a long time during busy flows: each packet must wait for everything ahead of it to be processed, so the average waiting time grows for every packet in the queue. This suggests that some packet loss is actually needed to maintain a healthy flow, because loss is the signal TCP uses to slow down, and without it the remaining packets get stuck waiting in the buffer for too long.
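The "loss as a signal" point can be sketched with TCP's additive-increase / multiplicative-decrease rule (a simplified toy model of congestion control, counting the window in whole segments rather than modeling any real TCP stack):

```python
def aimd(cwnd, loss):
    """One round of TCP-style additive-increase / multiplicative-decrease.

    A loss signals congestion, so the sender halves its window;
    otherwise it probes for more bandwidth by adding one segment.
    Occasional drops are what keep senders from filling the queue.
    """
    return max(1, cwnd // 2) if loss else cwnd + 1

cwnd = 10
history = []
for loss in [False, False, True, False]:
    cwnd = aimd(cwnd, loss)
    history.append(cwnd)

print(history)  # [11, 12, 6, 7] -- the window halves at the loss, then regrows
```

When a huge buffer hides losses, this feedback never fires, senders keep their windows large, and the queue (and latency) stays inflated.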

Queue size 20 performs best of the three buffer sizes. The buffer is large enough to avoid excessive packet loss, yet small enough to avoid long queues of waiting packets during high load. It strikes a balance between the drawbacks of a queue that is too small and one that is too large, as discussed above.

In conclusion, bufferbloat can be mitigated by tuning buffer size to balance waiting time against packet loss. We do not want heavy packet loss, but letting too many packets queue up increases the overall delay for every packet passing through that queue under high load. It is also important to choose an effective queue-management algorithm and to decide which packets should be prioritized during busy flows. Simple policies like FCFS are not appropriate for every traffic pattern, and smarter queue-management schemes can produce smoother flows. Bufferbloat affects not just computer networks but any system that uses buffers to control the flow of data, so adjusting buffer size is a major factor when optimizing a network's flow.
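As one example of a smarter scheme than plain drop-tail FCFS, here is a sketch of the drop decision in Random Early Detection (RED), an active queue management algorithm that starts dropping probabilistically *before* the buffer fills, signaling congestion early. The thresholds and probability here are illustrative defaults, not tuned values:

```python
import random

def red_drop(avg_qlen, min_th=5, max_th=15, max_p=0.1):
    """Random Early Detection (RED) drop decision, simplified.

    Below min_th: never drop. At or above max_th: always drop.
    In between: drop with probability rising linearly toward max_p,
    so senders back off before the queue overflows.
    """
    if avg_qlen < min_th:
        return False
    if avg_qlen >= max_th:
        return True
    p = max_p * (avg_qlen - min_th) / (max_th - min_th)
    return random.random() < p

print(red_drop(3))   # False -- short queue, never drop
print(red_drop(20))  # True  -- queue past the max threshold, always drop
```

Schemes in this family (RED, and more recently CoDel and FQ-CoDel) address bufferbloat by managing queueing delay directly rather than relying on picking one "right" buffer size.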
