Muhammed Shafin P
NDM-TCP vs Cubic vs Reno vs BBR: Pure Localhost Performance Test (No Artificial Constraints)

GitHub Repository: https://github.com/hejhdiss/lkm-ndm-tcp

Introduction

This article presents a performance comparison of four TCP congestion control algorithms tested on pure localhost loopback with no artificial network constraints. Unlike previous tests that used tc (traffic control) to simulate packet loss, delay, jitter, and bandwidth limits, this test evaluates each algorithm's raw performance capability on an unconstrained loopback interface.

Test Duration: 40 seconds (extended observation period)

BBR Version Note: The BBR implementation used is assumed to be BBR v1, as the source code does not contain explicit version information (as stated in previous articles).

Test Environment

System Configuration:

  • OS: Xubuntu 24.04
  • Virtualization: VMware 17
  • Kernel: Linux 6.11.0
  • Interface: localhost (127.0.0.1) - pure loopback
  • Test Tool: iperf3
  • Test Duration: 40 seconds
  • Network Constraints: NONE - No tc rules applied

Key Difference from Previous Tests:

  • No bandwidth limiting
  • No artificial delay or jitter
  • No packet loss simulation
  • No packet duplication or reordering
  • Pure localhost performance - testing maximum throughput capability

This test reveals each algorithm's behavior when not constrained by simulated network conditions.

Complete Test Results

NDM-TCP Standard v1.0 Results

Test Command:

```shell
iperf3 -c localhost -t 40 -C ndm_tcp
```
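For context, iperf3's `-C` flag selects the congestion control algorithm per connection through the Linux `TCP_CONGESTION` socket option; a non-default algorithm such as `ndm_tcp` is only selectable once its kernel module is loaded. A minimal sketch of that mechanism (Linux-only; the helper names are my own):

```python
import socket

def set_congestion_control(sock: socket.socket, algo: str) -> None:
    """Attach a congestion control algorithm to one socket (Linux-only),
    the same mechanism iperf3's -C flag uses."""
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, algo.encode())

def get_congestion_control(sock: socket.socket) -> str:
    """Read back the algorithm currently attached to the socket."""
    raw = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16)
    return raw.split(b"\x00", 1)[0].decode()

if __name__ == "__main__":
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    print(get_congestion_control(s))  # the system default, e.g. "cubic"
    # set_congestion_control(s, "ndm_tcp")  # only works if the ndm_tcp module is loaded
    s.close()
```

Setting an algorithm that is not loaded (or not in `net.ipv4.tcp_allowed_congestion_control` for unprivileged users) raises `OSError`, which is also why iperf3 fails fast when `-C` names an unavailable algorithm.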
| Interval (sec) | Transfer (GB) | Bitrate (Gbps) | Retransmissions | Cwnd (GB) |
|---|---|---|---|---|
| 0.00-1.00 | 6.86 | 58.9 | 0 | 3.51 |
| 1.00-2.00 | 6.80 | 58.5 | 0 | 6.97 |
| 2.00-3.00 | 6.58 | 56.5 | 0 | 10.3 |
| 3.00-4.00 | 6.84 | 58.8 | 0 | 13.8 |
| 4.00-5.00 | 7.00 | 60.2 | 1 | 15.1 |
| 5.00-6.00 | 6.92 | 59.5 | 0 | 15.1 |
| 6.00-7.00 | 6.71 | 57.6 | 0 | 15.1 |
| 7.00-8.00 | 6.90 | 59.3 | 0 | 15.1 |
| 8.00-9.00 | 6.84 | 58.8 | 0 | 15.1 |
| 9.00-10.00 | 6.98 | 59.9 | 0 | 15.1 |
| 10.00-11.00 | 7.02 | 60.3 | 0 | 15.1 |
| 11.00-12.00 | 6.68 | 57.4 | 0 | 15.1 |
| 12.00-13.00 | 6.96 | 59.8 | 0 | 15.1 |
| 13.00-14.00 | 7.05 | 60.5 | 0 | 15.1 |
| 14.00-15.00 | 6.99 | 60.0 | 0 | 15.1 |
| 15.00-16.00 | 7.07 | 60.7 | 0 | 15.1 |
| 16.00-17.00 | 6.67 | 57.3 | 0 | 15.1 |
| 17.00-18.00 | 6.86 | 58.9 | 0 | 15.1 |
| 18.00-19.00 | 6.89 | 59.2 | 0 | 15.1 |
| 19.00-20.00 | 6.92 | 59.5 | 0 | 15.1 |
| 20.00-21.00 | 6.81 | 58.5 | 0 | 15.1 |
| 21.00-22.00 | 6.74 | 57.9 | 1 | 15.1 |
| 22.00-23.00 | 6.94 | 59.6 | 0 | 15.1 |
| 23.00-24.00 | 6.94 | 59.6 | 0 | 15.1 |
| 24.00-25.00 | 6.97 | 59.9 | 0 | 15.1 |
| 25.00-26.00 | 6.72 | 57.8 | 0 | 15.1 |
| 26.00-27.00 | 6.90 | 59.3 | 0 | 15.1 |
| 27.00-28.00 | 6.99 | 60.0 | 0 | 15.1 |
| 28.00-29.00 | 7.04 | 60.4 | 0 | 15.1 |
| 29.00-30.00 | 6.81 | 58.5 | 0 | 15.1 |
| 30.00-31.00 | 6.93 | 59.5 | 0 | 15.1 |
| 31.00-32.00 | 7.17 | 61.6 | 0 | 15.1 |
| 32.00-33.00 | 7.10 | 60.9 | 1 | 15.1 |
| 33.00-34.00 | 6.99 | 60.1 | 0 | 15.1 |
| 34.00-35.00 | 7.18 | 61.6 | 0 | 15.1 |
| 35.00-36.00 | 7.03 | 60.4 | 0 | 15.1 |
| 36.00-37.00 | 7.01 | 60.2 | 0 | 15.1 |
| 37.00-38.00 | 7.03 | 60.4 | 0 | 15.1 |
| 38.00-39.00 | 6.56 | 56.3 | 0 | 15.1 |
| 39.00-40.00 | 6.80 | 58.5 | 0 | 15.1 |

Summary:

  • Total Transfer (Sender): 277 GB
  • Average Bitrate (Sender): 59.4 Gbps
  • Total Retransmissions: 3
  • Total Transfer (Receiver): 277 GB
  • Average Bitrate (Receiver): 59.4 Gbps
  • Test Duration: 40.00 seconds
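The average bitrate follows directly from the totals: iperf3 reports transfer in GiB, so 277 GB over 40 seconds works out to roughly 59.5 Gbps, matching the reported 59.4 Gbps within rounding (the 277 figure is itself rounded). A quick sanity check:

```python
def avg_bitrate_gbps(transfer_gib: float, seconds: float) -> float:
    """Convert an iperf3 total transfer (GiB) and duration into Gbps."""
    bits = transfer_gib * 1024**3 * 8  # GiB -> bytes -> bits
    return bits / seconds / 1e9        # bits/s -> Gbps

print(round(avg_bitrate_gbps(277, 40), 1))  # 59.5
```

The same conversion reproduces the other three summaries (282 GB, 278 GB, and 270 GB) to within the same rounding error.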

TCP Cubic Results

Test Command:

```shell
iperf3 -c localhost -t 40 -C cubic
```
| Interval (sec) | Transfer (GB) | Bitrate (Gbps) | Retransmissions | Cwnd (MB) |
|---|---|---|---|---|
| 0.00-1.00 | 7.07 | 60.6 | 0 | 1.06 |
| 1.00-2.00 | 6.84 | 58.7 | 0 | 1.06 |
| 2.00-3.00 | 7.31 | 62.8 | 0 | 1.06 |
| 3.00-4.00 | 6.87 | 59.0 | 0 | 1.06 |
| 4.00-5.00 | 7.21 | 61.9 | 0 | 1.06 |
| 5.00-6.00 | 7.31 | 62.8 | 1 | 1.06 |
| 6.00-7.00 | 7.16 | 61.5 | 0 | 1.06 |
| 7.00-8.00 | 6.97 | 59.8 | 3 | 1.06 |
| 8.00-9.00 | 6.96 | 59.8 | 0 | 1.06 |
| 9.00-10.00 | 6.86 | 59.0 | 0 | 1.06 |
| 10.00-11.00 | 6.89 | 59.1 | 0 | 1.06 |
| 11.00-12.00 | 6.86 | 59.0 | 0 | 1.06 |
| 12.00-13.00 | 6.91 | 59.4 | 1 | 1.06 |
| 13.00-14.00 | 6.99 | 60.1 | 1 | 1.06 |
| 14.00-15.00 | 7.14 | 61.3 | 1 | 1.06 |
| 15.00-16.00 | 6.92 | 59.5 | 4 | 1.06 |
| 16.00-17.00 | 6.93 | 59.6 | 5 | 1.06 |
| 17.00-18.00 | 6.87 | 58.9 | 0 | 1.06 |
| 18.00-19.00 | 7.01 | 60.2 | 0 | 1.06 |
| 19.00-20.00 | 7.05 | 60.6 | 0 | 1.06 |
| 20.00-21.00 | 7.04 | 60.5 | 0 | 1.06 |
| 21.00-22.00 | 7.10 | 61.0 | 0 | 1.06 |
| 22.00-23.00 | 7.01 | 60.2 | 2 | 1.06 |
| 23.00-24.00 | 6.87 | 59.0 | 0 | 1.06 |
| 24.00-25.00 | 7.00 | 60.1 | 0 | 1.06 |
| 25.00-26.00 | 6.95 | 59.7 | 0 | 1.06 |
| 26.00-27.00 | 7.07 | 60.7 | 0 | 1.06 |
| 27.00-28.00 | 6.93 | 59.6 | 0 | 1.06 |
| 28.00-29.00 | 7.38 | 63.4 | 0 | 1.06 |
| 29.00-30.00 | 7.56 | 64.9 | 0 | 1.06 |
| 30.00-31.00 | 7.57 | 65.1 | 0 | 1.06 |
| 31.00-32.00 | 7.05 | 60.5 | 0 | 1.06 |
| 32.00-33.00 | 6.66 | 57.3 | 1 | 1.06 |
| 33.00-34.00 | 7.04 | 60.4 | 0 | 1.06 |
| 34.00-35.00 | 6.81 | 58.6 | 0 | 1.06 |
| 35.00-36.00 | 7.17 | 61.5 | 1 | 1.06 |
| 36.00-37.00 | 7.68 | 66.0 | 0 | 1.06 |
| 37.00-38.00 | 7.26 | 62.4 | 0 | 1.06 |
| 38.00-39.00 | 6.80 | 58.4 | 0 | 1.06 |
| 39.00-40.00 | 6.88 | 59.1 | 0 | 1.06 |

Summary:

  • Total Transfer (Sender): 282 GB
  • Average Bitrate (Sender): 60.6 Gbps
  • Total Retransmissions: 20
  • Total Transfer (Receiver): 282 GB
  • Average Bitrate (Receiver): 60.6 Gbps
  • Test Duration: 40.00 seconds

TCP Reno Results

Test Command:

```shell
iperf3 -c localhost -t 40 -C reno
```
| Interval (sec) | Transfer (GB) | Bitrate (Gbps) | Retransmissions | Cwnd (MB) |
|---|---|---|---|---|
| 0.00-1.00 | 6.99 | 60.0 | 0 | 1.75 |
| 1.00-2.00 | 7.27 | 62.5 | 0 | 1.75 |
| 2.00-3.00 | 7.29 | 62.6 | 0 | 1.75 |
| 3.00-4.00 | 7.54 | 64.8 | 0 | 1.75 |
| 4.00-5.00 | 7.07 | 60.7 | 0 | 1.75 |
| 5.00-6.00 | 7.02 | 60.3 | 0 | 1.75 |
| 6.00-7.00 | 7.27 | 62.5 | 0 | 1.75 |
| 7.00-8.00 | 7.23 | 62.1 | 0 | 1.75 |
| 8.00-9.00 | 7.14 | 61.3 | 0 | 1.75 |
| 9.00-10.00 | 7.14 | 61.3 | 1 | 1.87 |
| 10.00-11.00 | 6.84 | 58.8 | 0 | 1.87 |
| 11.00-12.00 | 6.91 | 59.3 | 0 | 1.87 |
| 12.00-13.00 | 6.79 | 58.4 | 1 | 1.87 |
| 13.00-14.00 | 6.88 | 59.1 | 0 | 1.87 |
| 14.00-15.00 | 6.67 | 57.3 | 0 | 1.87 |
| 15.00-16.00 | 6.58 | 56.5 | 0 | 1.87 |
| 16.00-17.00 | 6.62 | 56.8 | 1 | 2.19 |
| 17.00-18.00 | 6.34 | 54.5 | 0 | 2.19 |
| 18.00-19.00 | 6.84 | 58.8 | 0 | 2.19 |
| 19.00-20.00 | 6.92 | 59.4 | 0 | 2.19 |
| 20.00-21.00 | 6.90 | 59.3 | 0 | 2.19 |
| 21.00-22.00 | 7.07 | 60.7 | 0 | 2.19 |
| 22.00-23.00 | 7.16 | 61.5 | 0 | 2.19 |
| 23.00-24.00 | 6.75 | 58.0 | 1 | 2.19 |
| 24.00-25.00 | 6.89 | 59.2 | 0 | 2.19 |
| 25.00-26.00 | 6.78 | 58.3 | 0 | 2.19 |
| 26.00-27.00 | 6.76 | 58.0 | 0 | 2.19 |
| 27.00-28.00 | 6.86 | 58.9 | 1 | 2.19 |
| 28.00-29.00 | 6.83 | 58.7 | 0 | 2.19 |
| 29.00-30.00 | 6.63 | 57.0 | 0 | 2.19 |
| 30.00-31.00 | 6.88 | 59.1 | 0 | 2.19 |
| 31.00-32.00 | 7.01 | 60.2 | 0 | 2.19 |
| 32.00-33.00 | 6.90 | 59.3 | 0 | 2.19 |
| 33.00-34.00 | 6.97 | 59.9 | 0 | 2.19 |
| 34.00-35.00 | 6.88 | 59.1 | 0 | 2.19 |
| 35.00-36.00 | 6.95 | 59.7 | 0 | 2.19 |
| 36.00-37.00 | 7.21 | 62.0 | 0 | 2.19 |
| 37.00-38.00 | 7.04 | 60.5 | 2 | 2.19 |
| 38.00-39.00 | 6.69 | 57.5 | 0 | 2.19 |
| 39.00-40.00 | 6.34 | 54.4 | 1 | 2.19 |

Summary:

  • Total Transfer (Sender): 278 GB
  • Average Bitrate (Sender): 59.6 Gbps
  • Total Retransmissions: 8
  • Total Transfer (Receiver): 278 GB
  • Average Bitrate (Receiver): 59.6 Gbps
  • Test Duration: 40.00 seconds

BBR (Assumed v1) Results

Test Command:

```shell
iperf3 -c localhost -t 40 -C bbr
```
| Interval (sec) | Transfer (GB) | Bitrate (Gbps) | Retransmissions | Cwnd |
|---|---|---|---|---|
| 0.00-1.00 | 6.81 | 58.4 | 0 | 1.37 MB |
| 1.00-2.00 | 6.82 | 58.6 | 0 | 767 KB |
| 2.00-3.00 | 6.93 | 59.5 | 0 | 895 KB |
| 3.00-4.00 | 7.00 | 60.1 | 0 | 895 KB |
| 4.00-5.00 | 6.95 | 59.7 | 0 | 895 KB |
| 5.00-6.00 | 6.70 | 57.6 | 0 | 767 KB |
| 6.00-7.00 | 6.83 | 58.7 | 0 | 895 KB |
| 7.00-8.00 | 6.64 | 57.0 | 0 | 1,023 KB |
| 8.00-9.00 | 6.63 | 56.9 | 0 | 895 KB |
| 9.00-10.00 | 6.90 | 59.2 | 0 | 895 KB |
| 10.00-11.00 | 6.86 | 59.0 | 1 | 767 KB |
| 11.00-12.00 | 6.75 | 57.9 | 7 | 1.12 MB |
| 12.00-13.00 | 6.65 | 57.1 | 1 | 1,023 KB |
| 13.00-14.00 | 6.96 | 59.8 | 0 | 895 KB |
| 14.00-15.00 | 6.86 | 58.9 | 0 | 1,023 KB |
| 15.00-16.00 | 6.67 | 57.3 | 0 | 895 KB |
| 16.00-17.00 | 6.93 | 59.5 | 0 | 895 KB |
| 17.00-18.00 | 6.85 | 58.9 | 0 | 895 KB |
| 18.00-19.00 | 6.65 | 57.1 | 0 | 767 KB |
| 19.00-20.00 | 6.70 | 57.6 | 0 | 895 KB |
| 20.00-21.00 | 6.50 | 55.8 | 0 | 767 KB |
| 21.00-22.00 | 6.87 | 59.0 | 4 | 1,023 KB |
| 22.00-23.00 | 6.96 | 59.8 | 0 | 1,023 KB |
| 23.00-24.00 | 6.97 | 59.9 | 0 | 767 KB |
| 24.00-25.00 | 6.83 | 58.7 | 0 | 895 KB |
| 25.00-26.00 | 6.58 | 56.5 | 2 | 895 KB |
| 26.00-27.00 | 6.67 | 57.3 | 1 | 1.50 MB |
| 27.00-28.00 | 6.54 | 56.2 | 9 | 767 KB |
| 28.00-29.00 | 6.73 | 57.8 | 4 | 895 KB |
| 29.00-30.00 | 6.67 | 57.3 | 0 | 1,023 KB |
| 30.00-31.00 | 6.36 | 54.6 | 2 | 895 KB |
| 31.00-32.00 | 6.47 | 55.6 | 0 | 256 KB |
| 32.00-33.00 | 6.48 | 55.6 | 2 | 895 KB |
| 33.00-34.00 | 6.93 | 59.5 | 2 | 895 KB |
| 34.00-35.00 | 6.77 | 58.2 | 0 | 895 KB |
| 35.00-36.00 | 6.77 | 58.2 | 0 | 767 KB |
| 36.00-37.00 | 6.72 | 57.7 | 1 | 1.12 MB |
| 37.00-38.00 | 6.56 | 56.3 | 0 | 767 KB |
| 38.00-39.00 | 6.57 | 56.5 | 1 | 895 KB |
| 39.00-40.00 | 6.43 | 55.2 | 0 | 895 KB |

Summary:

  • Total Transfer (Sender): 270 GB
  • Average Bitrate (Sender): 58.0 Gbps
  • Total Retransmissions: 37
  • Total Transfer (Receiver): 270 GB
  • Average Bitrate (Receiver): 58.0 Gbps
  • Test Duration: 40.00 seconds

Comparative Analysis

Overall Performance Summary

| Metric | NDM-TCP v1.0 | TCP Cubic | TCP Reno | BBR (Assumed v1) |
|---|---|---|---|---|
| Total Transfer | 277 GB | 282 GB | 278 GB | 270 GB |
| Average Bitrate | 59.4 Gbps | 60.6 Gbps | 59.6 Gbps | 58.0 Gbps |
| Total Retransmissions | 3 | 20 | 8 | 37 |
| Cwnd Behavior | Grows to 15.1 GB | Stable 1.06 MB | Stable 2.19 MB | Variable (256 KB-1.5 MB) |
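The headline comparisons in the rest of this article can be reproduced from the summary numbers above; a small script (values hard-coded from this specific test run) to derive them:

```python
# Summary totals from the four 40-second localhost runs
results = {
    "ndm_tcp": {"gbps": 59.4, "retrans": 3},
    "cubic":   {"gbps": 60.6, "retrans": 20},
    "reno":    {"gbps": 59.6, "retrans": 8},
    "bbr":     {"gbps": 58.0, "retrans": 37},
}

best_tput = max(results, key=lambda a: results[a]["gbps"])
fewest_rt = min(results, key=lambda a: results[a]["retrans"])
print(best_tput)  # cubic
print(fewest_rt)  # ndm_tcp

# BBR's retransmissions relative to NDM-TCP's
ratio = results["bbr"]["retrans"] / results["ndm_tcp"]["retrans"]
print(round(ratio, 1))  # 12.3

# Cubic's throughput edge over NDM-TCP, in percent
delta = (results["cubic"]["gbps"] - results["ndm_tcp"]["gbps"]) / results["cubic"]["gbps"] * 100
print(round(delta, 1))  # 2.0
```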

Key Findings Based on This Pure Localhost Test

1. Throughput Performance:

  • TCP Cubic: 60.6 Gbps ✅ Highest throughput
  • TCP Reno: 59.6 Gbps
  • NDM-TCP: 59.4 Gbps
  • BBR: 58.0 Gbps (lowest)

In pure localhost with no constraints, Cubic achieved the highest throughput, closely followed by Reno and NDM-TCP. BBR had the lowest throughput.

2. Retransmission Performance:

  • NDM-TCP: 3 retransmissions ✅ Best (lowest)
  • TCP Reno: 8 retransmissions
  • TCP Cubic: 20 retransmissions
  • BBR: 37 retransmissions (worst)

NDM-TCP achieved dramatically fewer retransmissions even in unconstrained conditions. BBR had 12.3x more retransmissions than NDM-TCP.

3. Congestion Window Behavior:

NDM-TCP:

  • Aggressive cwnd growth: 3.51 GB → 15.1 GB plateau
  • Reaches 15.1 GB cwnd and maintains it
  • Demonstrates ability to scale to very large windows
  • Only 3 retransmissions despite massive cwnd

Cubic:

  • Conservative cwnd: Stable at 1.06 MB throughout
  • No significant window growth
  • 20 retransmissions with small window

Reno:

  • Conservative cwnd: Stable at 1.75-2.19 MB
  • Slight growth over time
  • 8 retransmissions with moderate window

BBR:

  • Highly variable cwnd: 256 KB to 1.50 MB
  • Frequent fluctuations
  • 37 retransmissions despite smaller windows

4. Stability Over 40 Seconds:

NDM-TCP:

  • Very consistent throughput (56.3-61.6 Gbps range)
  • Extremely stable cwnd after reaching 15.1 GB plateau
  • Only 3 retransmissions total across 40 seconds

Cubic:

  • Consistent throughput (57.3-66.0 Gbps range)
  • Perfectly stable cwnd (1.06 MB throughout)
  • 20 retransmissions scattered across test

Reno:

  • Consistent throughput (54.4-64.8 Gbps range)
  • Stable cwnd with minor growth
  • 8 retransmissions well-distributed

BBR:

  • Most variable throughput (54.6-60.1 Gbps range)
  • Unstable cwnd with constant fluctuations
  • 37 retransmissions (highest)

Analysis: Pure Localhost vs Constrained Testing

Key Observations:

1. Different Winner in Different Conditions:

  • Constrained tests (with tc): NDM-TCP won on throughput
  • Pure localhost: Cubic won on throughput
  • Both conditions: NDM-TCP won on retransmissions

2. NDM-TCP's Cwnd Scaling:

  • In pure localhost, NDM-TCP scaled cwnd to 15.1 GB (gigabytes!)
  • This is roughly 14,000x larger than Cubic's 1.06 MB
  • Demonstrates NDM-TCP can be very aggressive when conditions allow
  • Still achieved lowest retransmissions despite massive window

3. Algorithm Behavior Without Constraints:

  • Cubic: Conservative, stable, good throughput
  • Reno: Conservative, predictable behavior
  • NDM-TCP: Aggressive scaling, low retransmissions
  • BBR: Struggled even without artificial constraints

4. Why Cubic Won on Throughput:

  • Cubic's conservative approach worked well in unconstrained localhost
  • Its cubic growth function well-suited to stable, high-bandwidth environments
  • NDM-TCP's adaptive behavior may add slight overhead
  • The difference is small (1.2 Gbps = 2% difference)

5. Why NDM-TCP Had Fewer Retransmissions:

  • Entropy-based detection works even without artificial noise
  • Neural network learned optimal window scaling strategy
  • Adaptive approach avoided triggering unnecessary retransmissions
  • Reached massive cwnd (15.1 GB) without causing congestion

Important Disclaimers

1. Localhost Testing Limitations:

  • Pure localhost loopback is not representative of real networks
  • No physical propagation delay
  • No real hardware packet processing
  • No actual network congestion (just buffer limits)
  • VMware virtualization adds its own characteristics

2. Cwnd Size Observations:

  • NDM-TCP's 15.1 GB cwnd is unrealistic for real networks
  • This only works on localhost with no constraints
  • Real networks would never allow such massive windows
  • Demonstrates scalability but not real-world behavior

3. Scope of These Results:
All performance claims in this article are based specifically on this pure localhost test with no artificial constraints. This is an unconstrained localhost test, not testing on real network hardware.

4. BBR's Performance:

  • BBR is designed for real WAN connections with bufferbloat
  • Localhost testing (constrained or unconstrained) may not suit BBR's design
  • BBR assumed to be v1 based on source code analysis

Conclusion

Based on this specific pure localhost test (no tc constraints, no artificial delays/loss):

Throughput Winner: TCP Cubic (60.6 Gbps)

Retransmission Winner: NDM-TCP (3 retransmissions)

Most Aggressive: NDM-TCP (15.1 GB cwnd)

Most Conservative: Cubic (1.06 MB cwnd)

Most Unstable: BBR (37 retransmissions)

Key Takeaway: Different algorithms excel in different conditions:

  • Cubic: Best for unconstrained, stable networks
  • NDM-TCP: Best for minimizing retransmissions, scales aggressively
  • Reno: Solid, predictable performance
  • BBR: Struggled in both constrained and unconstrained localhost tests

However, localhost testing (with or without tc) does not represent real-world network behavior. Real hardware validation with diverse network conditions is essential.

Community collaboration needed for testing on actual network hardware and production environments.


Disclaimer: All results are from pure localhost loopback without any tc constraints. Performance on real network hardware may differ significantly. BBR is assumed to be v1 based on source code analysis. NDM-TCP's 15.1 GB cwnd behavior is localhost-specific and not representative of real network conditions.
