Muhammed Shafin P

NDM-TCP vs Cubic vs Reno vs BBR: Extreme Network Stress Test (40-Second Duration)

GitHub Repository: https://github.com/hejhdiss/lkm-ndm-tcp

Previous BBR Comparison: NDM-TCP vs BBR: Performance Comparison

Introduction

This article presents an extended-duration comparison test of four TCP congestion control algorithms under extreme network conditions. Unlike previous 20-second tests, this evaluation ran for 40 seconds to observe longer-term behavior and stability patterns. The test was conducted in a simulated environment using localhost with artificially created network constraints designed to stress-test each algorithm's adaptive capabilities.

BBR Version Note: As stated in the previous article, the BBR implementation used is assumed to be BBR v1 (the source code does not contain explicit version information).

Test Environment

System Configuration:

  • OS: Xubuntu 24.04
  • Virtualization: VMware 17
  • Kernel: Linux 6.11.0
  • Interface: localhost (127.0.0.1)
  • Test Tool: iperf3
  • Test Duration: 40 seconds (extended from previous 20-second tests)
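
The per-interval tables below were collected from iperf3's human-readable output. For scripted runs, iperf3's `-J` flag emits a JSON report; a minimal parsing sketch is shown here (the field names follow iperf3's JSON schema; the sample values are illustrative, not output from these tests):

```python
import json

def summarize(report: dict) -> dict:
    """Extract sender-side totals from an iperf3 -J JSON report."""
    sent = report["end"]["sum_sent"]
    return {
        "transfer_mb": sent["bytes"] / 1e6,
        "bitrate_mbps": sent["bits_per_second"] / 1e6,
        "retransmits": sent["retransmits"],
    }

# Illustrative report fragment (not real test output).
sample = {"end": {"sum_sent": {
    "bytes": 93_600_000, "bits_per_second": 19_600_000, "retransmits": 63}}}
print(summarize(sample))
```

In practice the full report would be loaded with `json.load()` from `iperf3 -c localhost -t 40 -J`, which also contains the per-interval records used to build the tables below.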

Network Conditions: Extreme Stress Configuration

This test used significantly more challenging network conditions than previous tests to evaluate algorithm behavior under stress:

sudo tc qdisc add dev lo root handle 1: htb default 1 r2q 50
sudo tc class add dev lo parent 1: classid 1:1 htb rate 20mbit ceil 20mbit
sudo tc qdisc add dev lo parent 1:1 handle 10: netem delay 20ms 50ms distribution normal loss 0.5% 0.2% duplicate 0.1% reorder 0.1% 50%

Network Parameters:

  • Bandwidth Limit: 20 Mbps (using HTB - Hierarchical Token Bucket)
  • Base Delay: 20ms
  • Delay Variation: ±50ms (normal distribution) - very high jitter
  • Packet Loss: 0.5% (with 0.2% correlation)
  • Packet Duplication: 0.1%
  • Packet Reordering: 0.1% (50% correlation)

Why These Conditions Are Extreme:

  • High jitter (±50ms): Simulates highly unstable network conditions
  • Correlated loss: Mimics burst loss patterns
  • Packet duplication and reordering: Adds additional chaos to simulate real-world network issues
  • 20 Mbps limit: Lower than previous 50 Mbps tests, creating more congestion pressure
  • 40-second duration: Longer observation period reveals sustained behavior patterns
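
To build intuition for how harsh ±50ms of jitter is on a 20ms base delay, one can sample the same normal distribution that netem approximates. This is a rough sketch of the idea, not netem's actual implementation (netem uses a precomputed distribution table and never applies a negative delay, modeled here by clamping at zero):

```python
import random

random.seed(1)  # deterministic for reproducibility

def sample_delays_ms(base=20.0, jitter=50.0, n=10_000):
    """Sample one-way delays ~ Normal(base, jitter), clamped at zero."""
    return [max(0.0, random.gauss(base, jitter)) for _ in range(n)]

delays = sample_delays_ms()
clamped = sum(d == 0.0 for d in delays) / len(delays)
print(f"mean delay = {sum(delays) / len(delays):.1f} ms, "
      f"share clamped to zero = {clamped:.0%}")
```

In this sketch roughly a third of packets get near-zero delay while others wait well over 100 ms; that spread is exactly what reorders packets in flight and makes RTT-based congestion inference difficult.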

Complete Test Results

NDM-TCP Standard v1.0 Results

Test Command: iperf3 -c localhost -t 40 -C ndm_tcp
Interval (sec) Transfer (MB) Bitrate (Mbps) Retransmissions Cwnd
0.00-1.00 5.12 42.9 5 512 KB
1.00-2.00 1.50 12.6 7 128 KB
2.00-3.00 1.50 12.6 7 128 KB
3.00-4.00 1.50 12.6 2 128 KB
4.00-5.00 2.75 23.1 2 384 KB
5.00-6.00 3.00 25.2 2 192 KB
6.00-7.00 1.38 11.5 3 256 KB
7.00-8.00 2.75 23.1 2 256 KB
8.00-9.00 1.38 11.5 2 448 KB
9.00-10.00 2.75 23.1 4 512 KB
10.00-11.00 1.62 13.6 1 703 KB
11.00-12.00 3.00 25.2 1 831 KB
12.00-13.00 1.38 11.5 1 895 KB
13.00-14.00 1.62 13.6 0 1,023 KB
14.00-15.00 3.25 27.3 0 1.12 MB
15.00-16.00 1.50 12.6 0 1.19 MB
16.00-17.00 3.25 27.3 2 1.25 MB
17.00-18.00 1.62 13.6 2 1.31 MB
18.00-19.00 2.88 24.1 0 1.37 MB
19.00-20.00 3.25 27.3 0 1.44 MB
20.00-21.00 1.38 11.5 0 1.50 MB
21.00-22.00 3.12 26.2 1 1.56 MB
22.00-23.00 1.50 12.6 0 1.62 MB
23.00-24.00 1.62 13.6 2 1.69 MB
24.00-25.00 3.25 27.3 0 1.75 MB
25.00-26.00 2.00 16.8 1 1.75 MB
26.00-27.00 1.62 13.6 0 1.81 MB
27.00-28.00 3.50 29.4 1 1.81 MB
28.00-29.00 1.50 12.6 3 1.81 MB
29.00-30.00 3.12 26.2 1 1.81 MB
30.00-31.00 2.25 18.9 2 1.81 MB
31.00-32.00 1.62 13.6 2 1.44 MB
32.00-33.00 3.38 28.3 1 639 KB
33.00-34.00 1.88 15.7 2 384 KB
34.00-35.00 1.62 13.6 1 895 KB
35.00-36.00 3.00 25.2 1 959 KB
36.00-37.00 1.75 14.7 0 1.06 MB
37.00-38.00 3.38 28.3 0 1.12 MB
38.00-39.00 1.50 12.6 0 1.25 MB
39.00-40.00 3.62 30.4 2 448 KB

Summary:

  • Total Transfer (Sender): 93.6 MB
  • Average Bitrate (Sender): 19.6 Mbps
  • Total Retransmissions: 63
  • Total Transfer (Receiver): 90.1 MB
  • Average Bitrate (Receiver): 18.8 Mbps
  • Test Duration: 40.16 seconds

TCP Cubic Results

Test Command: iperf3 -c localhost -t 40 -C cubic
Interval (sec) Transfer (MB) Bitrate (Mbps) Retransmissions Cwnd
0.00-1.00 3.25 27.3 2 767 KB
1.00-2.00 3.50 29.3 8 576 KB
2.00-3.00 1.50 12.6 7 576 KB
3.00-4.00 1.62 13.6 1 639 KB
4.00-5.00 3.38 28.3 5 639 KB
5.00-6.00 1.50 12.6 1 767 KB
6.00-7.00 3.12 26.2 2 512 KB
7.00-8.00 1.38 11.5 1 576 KB
8.00-9.00 2.00 16.8 3 512 KB
9.00-10.00 2.88 24.1 1 639 KB
10.00-11.00 1.50 12.6 2 703 KB
11.00-12.00 3.00 25.2 0 767 KB
12.00-13.00 1.38 11.5 0 895 KB
13.00-14.00 3.25 27.3 0 1.31 MB
14.00-15.00 2.75 23.1 3 1.31 MB
15.00-16.00 1.38 11.5 0 1.37 MB
16.00-17.00 3.00 25.2 0 1.44 MB
17.00-18.00 1.38 11.5 0 1.87 MB
18.00-19.00 3.50 29.4 1 1.87 MB
19.00-20.00 1.75 14.7 0 1.94 MB
20.00-21.00 1.88 15.7 0 2.19 MB
21.00-22.00 2.25 18.9 0 2.62 MB
22.00-23.00 3.12 26.2 1 3.68 MB
23.00-24.00 0.00 0.00 3 3.68 MB
24.00-25.00 3.62 30.4 0 3.68 MB
25.00-26.00 1.88 15.7 38 1.31 MB
26.00-27.00 2.00 16.8 5 1,023 KB
27.00-28.00 1.62 13.6 24 2.50 MB
28.00-29.00 0.00 0.00 0 2.75 MB
29.00-30.00 2.62 22.0 2 2.06 MB
30.00-31.00 3.25 27.3 1 2.12 MB
31.00-32.00 2.12 17.8 0 2.62 MB
32.00-33.00 0.00 0.00 1 2.69 MB
33.00-34.00 3.75 31.5 0 2.69 MB
34.00-35.00 1.88 15.7 0 2.81 MB
35.00-36.00 3.25 27.3 0 3.00 MB
36.00-37.00 2.00 16.8 0 3.68 MB
37.00-38.00 4.00 33.6 2 2.75 MB
38.00-39.00 0.00 0.00 0 831 KB
39.00-40.00 2.75 23.0 30 1.31 MB

Summary:

  • Total Transfer (Sender): 89.0 MB
  • Average Bitrate (Sender): 18.7 Mbps
  • Total Retransmissions: 144
  • Total Transfer (Receiver): 86.6 MB
  • Average Bitrate (Receiver): 17.9 Mbps
  • Test Duration: 40.61 seconds

TCP Reno Results

Test Command: iperf3 -c localhost -t 40 -C reno
Interval (sec) Transfer (MB) Bitrate (Mbps) Retransmissions Cwnd
0.00-1.00 3.25 27.2 5 256 KB
1.00-2.00 2.00 16.8 3 256 KB
2.00-3.00 2.25 18.9 4 192 KB
3.00-4.00 2.00 16.8 4 320 KB
4.00-5.00 1.88 15.7 6 384 KB
5.00-6.00 2.25 18.9 6 256 KB
6.00-7.00 2.12 17.8 5 256 KB
7.00-8.00 2.12 17.8 1 576 KB
8.00-9.00 1.88 15.7 2 703 KB
9.00-10.00 2.50 21.0 7 639 KB
10.00-11.00 2.88 24.1 0 1,023 KB
11.00-12.00 3.00 25.2 2 1.06 MB
12.00-13.00 1.38 11.5 2 1.12 MB
13.00-14.00 2.75 23.1 0 1.25 MB
14.00-15.00 1.62 13.6 2 1.31 MB
15.00-16.00 2.00 16.8 0 1.44 MB
16.00-17.00 2.88 24.1 0 1.50 MB
17.00-18.00 3.25 27.3 1 1.62 MB
18.00-19.00 1.75 14.7 1 1.69 MB
19.00-20.00 1.38 11.5 1 1.75 MB
20.00-21.00 3.62 30.4 25 639 KB
21.00-22.00 0.00 0.00 16 448 KB
22.00-23.00 1.50 12.6 1 959 KB
23.00-24.00 2.75 23.1 0 1.06 MB
24.00-25.00 1.75 14.7 1 1.19 MB
25.00-26.00 2.88 24.1 0 1.31 MB
26.00-27.00 1.38 11.5 2 1.06 MB
27.00-28.00 3.25 27.3 0 1.44 MB
28.00-29.00 3.25 27.3 2 1.50 MB
29.00-30.00 1.50 12.6 0 1.56 MB
30.00-31.00 1.50 12.6 5 1,023 KB
31.00-32.00 1.50 12.6 12 1.62 MB
32.00-33.00 2.25 18.9 15 703 KB
33.00-34.00 1.75 14.7 12 384 KB
34.00-35.00 1.62 13.6 0 639 KB
35.00-36.00 3.00 25.2 0 831 KB
36.00-37.00 1.62 13.6 0 1,023 KB
37.00-38.00 3.12 26.2 1 1.12 MB
38.00-39.00 1.62 13.6 0 1.19 MB
39.00-40.00 3.00 25.1 1 1.31 MB

Summary:

  • Total Transfer (Sender): 88.0 MB
  • Average Bitrate (Sender): 18.5 Mbps
  • Total Retransmissions: 145
  • Total Transfer (Receiver): 85.4 MB
  • Average Bitrate (Receiver): 17.8 Mbps
  • Test Duration: 40.27 seconds

BBR (Assumed v1) Results

Test Command: iperf3 -c localhost -t 40 -C bbr
Interval (sec) Transfer (MB) Bitrate (Mbps) Retransmissions Cwnd
0.00-1.00 5.25 44.0 8 767 KB
1.00-2.00 1.75 14.7 12 1,023 KB
2.00-3.00 1.50 12.6 8 1,023 KB
3.00-4.00 1.50 12.6 16 703 KB
4.00-5.00 1.38 11.5 6 767 KB
5.00-6.00 3.00 25.2 4 1,023 KB
6.00-7.00 1.38 11.5 7 1.12 MB
7.00-8.00 1.62 13.6 17 895 KB
8.00-9.00 1.38 11.5 3 895 KB
9.00-10.00 1.62 13.6 7 1.19 MB
10.00-11.00 3.12 26.2 4 1.50 MB
11.00-12.00 1.62 13.6 8 1.62 MB
12.00-13.00 1.50 12.6 8 1.37 MB
13.00-14.00 1.62 13.6 24 1.25 MB
14.00-15.00 1.50 12.6 4 2.00 MB
15.00-16.00 1.75 14.7 2 2.12 MB
16.00-17.00 2.75 23.1 13 2.75 MB
17.00-18.00 0.00 0.00 0 3.50 MB
18.00-19.00 3.88 32.5 12 3.56 MB
19.00-20.00 0.00 0.00 0 3.50 MB
20.00-21.00 2.62 22.0 7 256 KB
21.00-22.00 3.25 27.3 2 4.56 MB
22.00-23.00 1.38 11.5 1 1.50 MB
23.00-24.00 4.38 36.7 5 895 KB
24.00-25.00 0.00 0.00 8 1.37 MB
25.00-26.00 2.62 22.0 20 1.25 MB
26.00-27.00 1.75 14.7 1 1.37 MB
27.00-28.00 1.50 12.6 1 1.25 MB
28.00-29.00 3.12 26.2 2 1.25 MB
29.00-30.00 2.88 24.1 2 1.25 MB
30.00-31.00 1.38 11.5 3 1.25 MB
31.00-32.00 3.00 25.2 1 256 KB
32.00-33.00 1.75 14.7 6 1,023 KB
33.00-34.00 1.50 12.6 0 1,023 KB
34.00-35.00 2.00 16.8 12 639 KB
35.00-36.00 3.12 26.2 4 1,023 KB
36.00-37.00 1.50 12.6 0 1,023 KB
37.00-38.00 2.88 24.1 0 1,023 KB
38.00-39.00 1.38 11.5 4 1.12 MB
39.00-40.00 3.00 25.1 1 1.12 MB

Summary:

  • Total Transfer (Sender): 83.1 MB
  • Average Bitrate (Sender): 17.4 Mbps
  • Total Retransmissions: 243
  • Total Transfer (Receiver): 79.5 MB
  • Average Bitrate (Receiver): 16.6 Mbps
  • Test Duration: 40.24 seconds

Comparative Analysis

Overall Performance Summary

| Metric | NDM-TCP v1.0 | TCP Cubic | TCP Reno | BBR (Assumed v1) |
| --- | --- | --- | --- | --- |
| Total Transfer (Sender) | 93.6 MB | 89.0 MB | 88.0 MB | 83.1 MB |
| Average Bitrate (Sender) | 19.6 Mbps | 18.7 Mbps | 18.5 Mbps | 17.4 Mbps |
| Total Retransmissions | 63 | 144 | 145 | 243 |
| Total Transfer (Receiver) | 90.1 MB | 86.6 MB | 85.4 MB | 79.5 MB |
| Average Bitrate (Receiver) | 18.8 Mbps | 17.9 Mbps | 17.8 Mbps | 16.6 Mbps |
| Test Duration | 40.16 s | 40.61 s | 40.27 s | 40.24 s |

Key Findings Based on This Test

1. Retransmission Performance (Most Critical):

  • NDM-TCP: 63 retransmissions ✅ Best
  • TCP Cubic: 144 retransmissions (128% more than NDM-TCP)
  • TCP Reno: 145 retransmissions (130% more than NDM-TCP)
  • BBR: 243 retransmissions ❌ Worst (286% more than NDM-TCP)

Based on this 40-second test, NDM-TCP achieved the lowest retransmission count by a significant margin; BBR's count was nearly four times NDM-TCP's.
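
The percentages above are relative increases over NDM-TCP's count; as a quick arithmetic check (counts taken from the summaries above):

```python
def pct_increase(value: int, baseline: int) -> float:
    """Relative increase of `value` over `baseline`, in percent."""
    return (value - baseline) / baseline * 100

baseline = 63  # NDM-TCP total retransmissions
for name, retrans in [("Cubic", 144), ("Reno", 145), ("BBR", 243)]:
    print(f"{name}: {pct_increase(retrans, baseline):.1f}% more than NDM-TCP")
```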

2. Throughput Performance:

  • NDM-TCP: 19.6 Mbps (best throughput)
  • Cubic: 18.7 Mbps
  • Reno: 18.5 Mbps
  • BBR: 17.4 Mbps (lowest throughput)

Based on this test, NDM-TCP achieved the highest average throughput while simultaneously having the fewest retransmissions.

3. Stability Over 40 Seconds:

NDM-TCP:

  • Consistent performance throughout the test
  • Cwnd growth steady from 128 KB to 1.81 MB plateau
  • No intervals with zero throughput (the only algorithm in this test without a stalled interval)
  • Adaptive behavior visible in varied but controlled throughput

Cubic:

  • Multiple intervals with 0.00 Mbps (intervals 23, 28, 32, 38)
  • Large cwnd fluctuations (up to 3.68 MB)
  • Burst retransmissions (38 in interval 25-26, 30 in interval 39-40)
  • Less stable overall

Reno:

  • One interval with 0.00 Mbps (interval 21-22)
  • Multiple burst retransmissions (25 and 16 in consecutive intervals at 20-22 seconds; 12, 15, and 12 between 31 and 34 seconds)
  • More conservative cwnd management
  • Moderate stability

BBR:

  • Multiple intervals with 0.00 Mbps (intervals 17, 19, 24)
  • Extreme cwnd fluctuations (up to 4.56 MB)
  • Consistent high retransmission rates throughout test
  • Least stable performance

Based on this test, NDM-TCP demonstrated the most stable connection over the 40-second duration.
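
"Stability" above is a visual judgment from the tables. One way to make it quantitative would be the coefficient of variation (standard deviation divided by mean) of the per-interval bitrates, where a lower value indicates steadier throughput. A sketch with illustrative values (not the full 40-interval series):

```python
import statistics

def cov(bitrates: list[float]) -> float:
    """Coefficient of variation: population std dev / mean."""
    return statistics.pstdev(bitrates) / statistics.fmean(bitrates)

steady = [19.0, 20.0, 19.5, 20.5, 19.0]  # hypothetical steady flow (Mbps)
bursty = [0.0, 33.6, 0.0, 30.4, 16.8]    # hypothetical bursty flow (Mbps)
print(f"steady CoV = {cov(steady):.2f}, bursty CoV = {cov(bursty):.2f}")
```

Running this over each algorithm's full interval series would turn the stability comparison into a single comparable number per algorithm.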

Analysis: Why Extended Duration Matters

The 40-second test duration (vs previous 20-second tests) reveals important behavioral patterns:

1. Long-term Stability:

  • NDM-TCP maintained consistent retransmission rates throughout
  • Cubic and Reno showed increasing instability after 20 seconds
  • BBR never stabilized, with high retransmissions throughout

2. Adaptive Learning:

  • NDM-TCP's cwnd growth pattern shows learning (steady climb to plateau)
  • Traditional algorithms (Cubic/Reno) showed more reactive patterns
  • BBR's aggressive probing continued causing retransmissions

3. Network Stress Response:

  • Under extreme conditions (±50ms jitter, correlated loss, reordering), NDM-TCP's entropy-based approach helped distinguish noise from congestion
  • BBR's model-based approach struggled with chaotic conditions
  • Loss-based algorithms (Cubic/Reno) performed moderately
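
The general idea behind an entropy-based noise/congestion signal can be sketched independently of NDM-TCP's actual implementation: high entropy in recent RTT changes suggests random jitter (noise), while low entropy with a consistent drift suggests queue buildup (congestion). A hypothetical illustration, not the module's code:

```python
import math
from collections import Counter

def shannon_entropy(samples: list[float], bins: int = 8) -> float:
    """Shannon entropy (bits) of samples bucketed into equal-width bins."""
    lo, hi = min(samples), max(samples)
    width = (hi - lo) / bins or 1.0  # guard against all-equal samples
    counts = Counter(min(int((s - lo) / width), bins - 1) for s in samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def deltas(samples: list[float]) -> list[float]:
    return [b - a for a, b in zip(samples, samples[1:])]

jittery = [20, 95, 5, 70, 33, 110, 12, 88, 41, 66]   # chaotic RTTs (ms)
queueing = [20, 22, 24, 26, 28, 30, 32, 34, 36, 38]  # steady RTT climb (ms)
print(f"jittery delta entropy  = {shannon_entropy(deltas(jittery)):.2f} bits")
print(f"queueing delta entropy = {shannon_entropy(deltas(queueing)):.2f} bits")
```

The chaotic trace yields high entropy in its RTT deltas (back off cautiously, but don't treat every loss as congestion), while the steady climb yields near-zero entropy (a consistent queue-buildup signal worth reacting to).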

Important Disclaimers

1. Test Environment Limitations:

  • Localhost testing on VMware 17
  • Artificial network constraints using tc
  • Not representative of real network hardware
  • Network conditions specifically designed to be extremely challenging

2. Algorithm Suitability:

  • These extreme conditions may not favor BBR's design (which is optimized to avoid bufferbloat, not to cope with heavy random jitter and reordering)
  • Test environment may specifically advantage entropy-based approaches
  • Real-world performance may differ significantly

3. Statistical Significance:

  • Single 40-second test run (not multiple trials)
  • No statistical variance analysis
  • Results may vary with different test runs
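
The single-run caveat could be addressed by repeating each test several times and reporting mean ± standard deviation per algorithm. A small helper for that (the throughput lists here are hypothetical placeholders, not additional measurements):

```python
import statistics

def mean_std(runs: list[float]) -> tuple[float, float]:
    """Mean and sample standard deviation over repeated trial results."""
    return statistics.fmean(runs), statistics.stdev(runs)

# Hypothetical repeated-trial throughputs (Mbps) -- NOT measured data.
trials = {"ndm_tcp": [19.6, 19.1, 20.0], "bbr": [17.4, 16.9, 18.0]}
for algo, runs in trials.items():
    m, s = mean_std(runs)
    print(f"{algo}: {m:.2f} +/- {s:.2f} Mbps over {len(runs)} runs")
```

With overlapping error bars, a throughput difference between two algorithms could not be claimed as significant; non-overlapping bars across, say, 10 runs would make the comparison far more convincing.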

4. Scope of These Results:
All performance claims in this article are based specifically on these 40-second test results under these extreme network conditions. These are localhost simulation results, not real hardware testing.

Conclusion

Based on this specific 40-second test under extreme network conditions (±50ms jitter, correlated loss, packet duplication and reordering), NDM-TCP demonstrated:

  • Lowest retransmissions: 63 (vs Cubic's 144, Reno's 145, BBR's 243)

  • Highest throughput: 19.6 Mbps

  • Most stable performance: Consistent behavior over 40 seconds

  • Best overall performance: In this specific test scenario

However, these results are from localhost simulation with artificially created extreme conditions. Real-world validation on actual hardware with diverse network conditions is essential before making broader claims about algorithm effectiveness.

Community collaboration needed for testing on real hardware and production environments.


Disclaimer: All results are from localhost simulations with tc-created network constraints. Performance on real hardware may differ significantly. BBR is assumed to be v1 based on source code analysis.
