In our latest round of network testing, we pushed NDM-TCP, TCP Reno, and TCP Cubic into a "near-failure" scenario. By simulating an extremely degraded network, we aimed to see which algorithm would maintain a stable connection and which would succumb to the chaos of high latency and packet loss.
## The "Torture Test" Configuration

To simulate a failing satellite link or a heavily congested wireless network, we used the following `tc` (traffic control) parameters. These values were chosen specifically to create an extreme network-degradation scenario:
- Network Latency: 250ms (Round Trip Delay)
- Packet Loss Rate: 31% (Nearly 1 in every 3 packets dropped)
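The post does not include the exact commands, but a minimal `netem` setup matching these parameters might look like the following sketch. The interface name `eth0` is an assumption — substitute your own device:

```shell
# Add 250ms of delay and 31% packet loss on the outgoing interface.
# Requires root; eth0 is a placeholder for your actual device.
sudo tc qdisc add dev eth0 root netem delay 250ms loss 31%

# Verify the impairment is active
tc qdisc show dev eth0

# Remove the impairment when the test is done
sudo tc qdisc del dev eth0 root
```

Note that `netem delay` adds one-way delay on that interface, so reaching a 250ms round trip may mean impairing one direction fully or splitting the delay across both endpoints.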
## Test Results Overview

The following data was captured during a 10-second `iperf3` stress test under the conditions mentioned above:
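The exact invocations are not shown above. Assuming a standard Linux setup, the stock-algorithm runs could be reproduced with something like the following (the server address is a placeholder; `-C` selects the congestion control algorithm, which must be available in the kernel — an NDM-TCP run would require its own module to be installed and named accordingly):

```shell
# Server side (placeholder address 192.168.1.10)
iperf3 -s

# Client side: 10-second runs, one per congestion control algorithm
iperf3 -c 192.168.1.10 -t 10 -C reno
iperf3 -c 192.168.1.10 -t 10 -C cubic
```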
| Metric | NDM-TCP (ML Model) | TCP Cubic | TCP Reno |
|---|---|---|---|
| Total Transfer (Sender) | 2.38 MBytes | 2.38 MBytes | 3.50 MBytes |
| Total Received (Receiver) | 64.0 KBytes | 640 KBytes | 1.12 MBytes |
| Average Bitrate (Sender) | 1.99 Mbits/sec | 1.99 Mbits/sec | 2.93 Mbits/sec |
| Receiver Bitrate | 49.9 Kbits/sec | 434 Kbits/sec | 898 Kbits/sec |
| Total Retransmissions | 5 | 6 | 10 |
| Final Window (Cwnd) | 46.1 KBytes | 63.9 KBytes | 192 KBytes |
| Test Duration (Receiver) | 10.50 sec | 12.09 sec | 10.50 sec |
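One way to read the overview table is delivery efficiency — the fraction of sent bytes that actually arrived at the receiver. A quick sketch using the figures above (`iperf3` reports binary units, hence the 1024 factor):

```python
# Delivery efficiency = receiver bytes / sender bytes,
# using the overview-table figures converted to KBytes.
results = {
    "NDM-TCP": (64.0, 2.38 * 1024),          # 64.0 KBytes received, 2.38 MBytes sent
    "TCP Cubic": (640.0, 2.38 * 1024),       # 640 KBytes received, 2.38 MBytes sent
    "TCP Reno": (1.12 * 1024, 3.50 * 1024),  # 1.12 MBytes received, 3.50 MBytes sent
}

for name, (received_kb, sent_kb) in results.items():
    efficiency = received_kb / sent_kb
    print(f"{name}: {efficiency:.1%} of sent bytes were delivered")
```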
## Detailed Interval Analysis

### NDM-TCP Performance Over Time
| Interval (sec) | Transfer | Bitrate | Retr | Cwnd |
|---|---|---|---|---|
| 0.00-1.41 | 0.00 Bytes | 0.00 bits/sec | 1 | 352 KBytes |
| 1.41-2.00 | 2.38 MBytes | 34.0 Mbits/sec | 0 | 320 KBytes |
| 2.00-3.00 | 0.00 Bytes | 0.00 bits/sec | 1 | 320 KBytes |
| 3.00-4.00 | 0.00 Bytes | 0.00 bits/sec | 1 | 32.0 KBytes |
| 4.00-5.00 | 0.00 Bytes | 0.00 bits/sec | 0 | 32.0 KBytes |
| 5.00-6.00 | 0.00 Bytes | 0.00 bits/sec | 0 | 32.0 KBytes |
| 6.00-7.00 | 0.00 Bytes | 0.00 bits/sec | 1 | 32.0 KBytes |
| 7.00-8.00 | 0.00 Bytes | 0.00 bits/sec | 0 | 46.1 KBytes |
| 8.00-9.00 | 0.00 Bytes | 0.00 bits/sec | 1 | 46.1 KBytes |
| 9.00-10.00 | 0.00 Bytes | 0.00 bits/sec | 0 | 46.1 KBytes |
### TCP Cubic Performance Over Time
| Interval (sec) | Transfer | Bitrate | Retr | Cwnd |
|---|---|---|---|---|
| 0.00-1.00 | 2.38 MBytes | 19.9 Mbits/sec | 0 | 320 KBytes |
| 1.00-2.00 | 0.00 Bytes | 0.00 bits/sec | 0 | 461 KBytes |
| 2.00-3.00 | 0.00 Bytes | 0.00 bits/sec | 1 | 512 KBytes |
| 3.00-4.00 | 0.00 Bytes | 0.00 bits/sec | 3 | 256 KBytes |
| 4.00-5.00 | 0.00 Bytes | 0.00 bits/sec | 1 | 256 KBytes |
| 5.00-6.00 | 0.00 Bytes | 0.00 bits/sec | 0 | 256 KBytes |
| 6.00-7.00 | 0.00 Bytes | 0.00 bits/sec | 1 | 63.9 KBytes |
| 7.00-8.00 | 0.00 Bytes | 0.00 bits/sec | 0 | 63.9 KBytes |
| 8.00-9.00 | 0.00 Bytes | 0.00 bits/sec | 0 | 63.9 KBytes |
| 9.00-10.00 | 0.00 Bytes | 0.00 bits/sec | 0 | 63.9 KBytes |
### TCP Reno Performance Over Time
| Interval (sec) | Transfer | Bitrate | Retr | Cwnd |
|---|---|---|---|---|
| 0.00-1.00 | 2.38 MBytes | 19.9 Mbits/sec | 1 | 461 KBytes |
| 1.00-2.00 | 0.00 Bytes | 0.00 bits/sec | 0 | 461 KBytes |
| 2.00-3.00 | 0.00 Bytes | 0.00 bits/sec | 1 | 639 KBytes |
| 3.00-4.00 | 0.00 Bytes | 0.00 bits/sec | 1 | 63.9 KBytes |
| 4.00-5.00 | 0.00 Bytes | 0.00 bits/sec | 2 | 192 KBytes |
| 5.00-6.00 | 0.00 Bytes | 0.00 bits/sec | 2 | 192 KBytes |
| 6.00-7.00 | 0.00 Bytes | 0.00 bits/sec | 1 | 128 KBytes |
| 7.00-8.00 | 0.00 Bytes | 0.00 bits/sec | 0 | 128 KBytes |
| 8.00-9.00 | 0.00 Bytes | 0.00 bits/sec | 1 | 128 KBytes |
| 9.00-10.00 | 1.12 MBytes | 9.42 Mbits/sec | 1 | 192 KBytes |
## Key Findings and Analysis

### 1. NDM-TCP: Intelligence through Restraint
The ML-driven NDM-TCP demonstrated a "Stability-First" philosophy. Upon recognizing the massive 31% loss, it cut its congestion window (Cwnd) down to a safe floor of 32 KBytes by the 3-second mark and held near that level for the rest of the run.
- Retransmission Mastery: It recorded the lowest number of retransmissions (5).
- Strategic Silence: Instead of flooding a broken link with data that would likely be lost, NDM-TCP effectively "paused" during high-interference intervals (3.00-6.00s), showing 0.00 bits/sec.
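The internals of NDM-TCP's model are not shown here, but the behavior in the interval table can be sketched as a simple loss-aware policy. All names and thresholds below are illustrative assumptions, not the actual NDM-TCP code:

```python
CWND_FLOOR_KB = 32.0    # safe floor observed in the interval table
LOSS_THRESHOLD = 0.20   # illustrative cutoff for "severe" loss

def ndm_style_cwnd(current_cwnd_kb: float, observed_loss_rate: float) -> float:
    """Stability-first policy sketch: under severe loss, drop straight
    to the floor instead of probing; otherwise grow cautiously."""
    if observed_loss_rate >= LOSS_THRESHOLD:
        return CWND_FLOOR_KB          # stop pushing data into a broken link
    return current_cwnd_kb + 1.0      # cautious additive growth otherwise

print(ndm_style_cwnd(320.0, 0.31))   # severe loss: collapse to 32.0
print(ndm_style_cwnd(32.0, 0.05))    # mild loss: grow to 33.0
```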
### 2. TCP Reno: The Brute-Force Approach
TCP Reno behaved aggressively, pushing more data than the other two algorithms.
- The Cost of Speed: While it transferred more data, it suffered double the retransmissions (10) compared to NDM-TCP.
- Instability: Reno maintained a much larger Cwnd (192 KBytes), refusing to acknowledge the severity of the 31% loss. This "greedy" behavior causes high jitter and network noise.
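Reno's congestion avoidance follows the classic AIMD rule: add one segment per round trip, halve the window on a loss event. A minimal sketch of that update, with an illustrative trace:

```python
# Classic Reno congestion avoidance (AIMD): additive increase of one
# segment per RTT, multiplicative decrease (halving) on a loss event.
def reno_cwnd_update(cwnd_segments: float, loss_event: bool) -> float:
    if loss_event:
        return max(cwnd_segments / 2.0, 2.0)  # halve, keep a 2-segment minimum
    return cwnd_segments + 1.0                # +1 segment per loss-free RTT

# Even repeated halvings leave Reno with a comparatively large window,
# consistent with the 128-192 KByte Cwnd values in the interval table.
cwnd = 64.0
for lost in [True, False, False, True]:
    cwnd = reno_cwnd_update(cwnd, lost)
print(cwnd)  # 64 -> 32 -> 33 -> 34 -> 17
```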
### 3. TCP Cubic: The Inefficient Middle
TCP Cubic struggled significantly in this environment. While its sender-side bitrate matched NDM-TCP (1.99 Mbits/sec), it was less efficient at managing the window under high loss, fluctuating between 512 KBytes and 63.9 KBytes without achieving the clean stability of the ML model.
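Cubic's fluctuation pattern comes from its defining window function (RFC 8312): after a loss at window W_max, the window follows a cubic curve in time, dipping first and returning to W_max at time K. A small sketch using the table's 512 KByte peak as W_max:

```python
# TCP Cubic window growth per RFC 8312:
#   W(t) = C * (t - K)^3 + W_max,  K = cbrt(W_max * (1 - beta) / C)
C = 0.4      # RFC 8312 scaling constant
BETA = 0.7   # RFC 8312 multiplicative decrease factor

def cubic_window(t: float, w_max: float) -> float:
    k = ((w_max * (1 - BETA)) / C) ** (1 / 3)
    return C * (t - k) ** 3 + w_max

w_max = 512.0  # KBytes, matching the peak in the interval table
print(round(cubic_window(0.0, w_max), 1))  # right after the loss: BETA * W_max = 358.4
```

Because the curve flattens near W_max and then accelerates past it, Cubic keeps re-probing a link that is dropping nearly a third of its packets — which is why its window swings between 512 and 63.9 KBytes here instead of settling.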
## Conclusion: Different Priorities, Different Results
This test demonstrates that NDM-TCP is designed for Stability over Raw Throughput. In an environment where standard algorithms like Reno "brute-force" the connection—resulting in high error rates—NDM-TCP recognizes the futility of sending data into a "black hole."
Each algorithm made different trade-offs:
- TCP Reno prioritized throughput, achieving higher data transfer but at the cost of doubled retransmissions
- TCP Cubic attempted to balance efficiency but struggled with window management under extreme loss
- NDM-TCP prioritized connection stability, achieving the lowest retransmission count by strategically reducing transmission during high-loss periods
For mission-critical applications where a clean, low-retransmission connection is more important than speed, the ML-based NDM-TCP provides a different approach to network awareness than traditional math-based protocols.