What the Results Actually Mean — Beyond "It Works"
Introduction
NDM-TCP has been tested in multiple conditions: tc-based simulations, varied network scenarios, and one real-world deployment test. The results showed something unexpected — a stable sawtooth pattern. Not chaotic oscillations. Not erratic behavior. A clean, predictable sawtooth wave in the congestion window evolution.
For anyone familiar with TCP congestion control, this should raise eyebrows. It is not a trivial outcome; it signals that something interesting is happening beneath the surface, something worth investigating properly, even if the current implementation is experimental and AI-assisted.
This article explains why that sawtooth matters, what it reveals about the system, and why NDM-TCP has genuine research potential — even though no one has tested it beyond my own self-conducted experiments yet.
What Is the "Entropy-Guided Sawtooth"?
Traditional TCP Sawtooth
In traditional TCP (like Reno or CUBIC), the sawtooth pattern is simple and mechanical:
- Increase phase: cwnd grows linearly (additive increase)
- Loss event: a packet is lost
- Hard reset: cwnd is cut in half (multiplicative decrease)
- Repeat
This creates a characteristic sawtooth wave. But it is "dumb" — it's a blind reaction to a binary signal (loss or no loss). There is no learning. No memory. No prediction. Just a hard-coded response.
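The AIMD loop above can be sketched as a toy userspace model (my own illustration, not kernel code): additive increase each round, multiplicative decrease on loss.

```c
#include <stdio.h>

/* Toy AIMD model, one step per RTT.
 * Returns the new cwnd (in segments) given the current cwnd
 * and whether a loss was observed this round. */
unsigned int aimd_step(unsigned int cwnd, int loss)
{
    if (loss)
        return cwnd > 1 ? cwnd / 2 : 1; /* multiplicative decrease */
    return cwnd + 1;                    /* additive increase */
}

/* Drive the model against a fixed capacity ceiling to show the
 * sawtooth: cwnd climbs linearly, hits the ceiling, halves, climbs
 * again. The printed trace is the classic sawtooth wave. */
void trace_sawtooth(unsigned int ceiling, int rounds)
{
    unsigned int cwnd = 1;
    for (int i = 0; i < rounds; i++) {
        cwnd = aimd_step(cwnd, cwnd >= ceiling);
        printf("round %2d  cwnd %u\n", i, cwnd);
    }
}
```

Note the blindness the text describes: `aimd_step` sees only a binary loss flag, with no memory of where the ceiling was on the previous cycle.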
NDM-TCP's Entropy-Guided Sawtooth
NDM-TCP showed a stable sawtooth pattern in testing — but the mechanism behind it is fundamentally different.
The sawtooth is not a hard reset triggered by packet loss. It is the result of:
- Entropy-based RTT analysis detecting congestion before loss
- A recurrent nonlinear controller inspired by neural architectures maintaining memory across time
- Adaptive plasticity adjusting sensitivity dynamically
- Heuristic congestion detection influencing cwnd decisions
Critically: the sawtooth is not purely emergent from the neural network — it is co-produced by the TCP framework. The Linux kernel's tcp_cong_avoid_ai() and tcp_slow_start() functions provide the underlying structure. The recurrent controller modulates those increments based on entropy feedback.
This can be more accurately described as: Entropy-Guided Congestion Detection with Recurrent Nonlinear Control.
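To make the modulation idea concrete, here is a hypothetical sketch (the function name and the 0..256 score scale are my own, not NDM-TCP's actual code): instead of a hard reset, an entropy-derived congestion score scales the increase credit that would be handed to the framework's additive-increase step.

```c
/* Hypothetical sketch of entropy-guided modulation, NOT the real
 * NDM-TCP code. Rather than resetting cwnd on loss, scale the count
 * of acked segments fed to the additive-increase machinery by a
 * congestion score in [0, 256] derived from RTT entropy:
 *   score 0   -> full increase credit (RTT distribution looks calm)
 *   score 256 -> no increase credit   (RTTs signal congestion)   */
unsigned int entropy_scaled_acked(unsigned int acked,
                                  unsigned int congestion_score)
{
    if (congestion_score > 256)
        congestion_score = 256;
    return (acked * (256 - congestion_score)) / 256;
}
```

The point of the sketch is the division of labor the article describes: the framework still performs the increments; the entropy signal only throttles how much credit it receives.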
Why This Pattern Is Impressive (And Unexpected)
1. Controlled Oscillation Through Framework Cooperation
Most adaptive congestion controllers suffer from "jittery" outputs. The nonlinear mappings (tanh, sigmoid, recurrent feedback) introduce unpredictability. Small changes in input can cause large swings in output. This leads to chaotic cwnd behavior — rapid spikes, sudden drops, erratic patterns.
NDM-TCP showed a clean sawtooth. That means:
- The tanh/sigmoid approximations are properly tuned for the Linux kernel's cwnd increment model
- The recurrent controller is not introducing runaway feedback when modulating TCP's native functions
- The TCP framework's structure (tcp_cong_avoid_ai, tcp_slow_start) is working in cooperation with the entropy-based modulation, not fighting against it
- The system is stable enough to produce predictable oscillations rather than noise
This cooperation between framework and controller is neither guaranteed nor trivial. Many adaptive algorithms fail exactly here: they try to override TCP's behavior entirely and lose stability.
2. Predictable Recovery
The sawtooth pattern repeats consistently across cycles. That suggests:
- The Heuristic Congestion Detection (entropy-based) is not getting "confused" by the recurrent hidden state
- The "memory" of the recurrent network is helping the system find the peak bandwidth faster each cycle
- The system is not just reacting — it is adapting and converging toward optimal behavior over time
In traditional TCP, recovery after congestion is slow. The system has no memory of where the ceiling was. NDM-TCP's stable recovery suggests it is remembering and learning.
3. Protocol Fairness Indication
A stable sawtooth is a good sign for fairness with other TCP flows (CUBIC, BBR, Reno). Here's why:
- Other TCP variants also produce rhythmic patterns
- If NDM-TCP follows a predictable sawtooth, it can interleave cleanly with other flows
- Chaotic or aggressive behavior would starve competing flows or cause instability
The fact that NDM-TCP's sawtooth is stable suggests it will "play nice" in mixed-traffic environments — though this needs formal testing with competing flows to confirm.
What the Test Results Actually Showed
Test Conditions
NDM-TCP was tested in:
- tc-based simulations with varying bandwidth, latency, and loss rates
- Multiple network scenarios (low latency, high latency, wireless-like noise, buffer-heavy links)
- One real-world deployment on an actual network connection
All tests were self-conducted. No third-party validation has been done yet.
Results Summary
In most cases: NDM-TCP showed the stable entropy-guided sawtooth pattern described above. Throughput was competitive. Latency behavior was reasonable. The system did not crash, hang, or behave erratically.
In one specific case (pure delay-only simulation): NDM-TCP showed a design limitation. In a scenario with delay variation but no actual congestion or loss, the entropy-based detection struggled, and the system did not outperform traditional algorithms in this edge case.
This is expected. No algorithm is optimal in all conditions. The important part is that the failure mode was predictable and understandable — not a mysterious crash or runaway behavior.
Why This Is Researchable Content
1. The Combination Works — And That Is Not Obvious
NDM-TCP combines:
- Shannon entropy as a congestion signal
- A recurrent nonlinear controller inspired by neural architectures with hidden state
- Adaptive plasticity with decay
- Heuristic decision logic mixing delay-based and loss-based signals
- Co-production with TCP framework functions rather than replacing them
There is no guarantee this combination should produce stable behavior. Each piece introduces nonlinearity. The interactions are complex. The fact that it produces a clean sawtooth — rather than chaos — suggests there is structure worth studying.
The key insight is: the controller modulates TCP's native increment functions (tcp_cong_avoid_ai, tcp_slow_start) based on entropy feedback, rather than trying to compute cwnd independently. This framework-aware design may be why stability emerges.
This is not "it worked by accident." This is "something interesting is happening that we do not fully understand yet."
2. It Crosses Disciplinary Boundaries
NDM-TCP sits at the intersection of:
- Networking theory (congestion control, queueing, RTT dynamics)
- Control theory (feedback systems, stability, oscillation)
- Machine learning (recurrent networks, nonlinear mappings, adaptation)
- Information theory (entropy as a signal, noise vs. information)
Research potential exists precisely because no single field fully explains it. A networking researcher sees heuristic congestion control. An ML researcher sees a recurrent model without training. A control theorist sees an unproven feedback system. All of them have questions.
3. The Failure Mode Is Informative
The pure delay-only case where NDM-TCP struggled is not a bug — it is a research direction.
It tells us:
- Entropy alone is not sufficient for all network conditions
- Delay variation without congestion confuses the heuristic
- The system needs additional signal sources or logic to handle this edge case
A production-grade algorithm would need to solve this. But for a research prototype, identifying the boundary condition is valuable. It shows where the approach works and where it breaks — which is exactly what early-stage research should do.
4. It Can Be Formally Analyzed — But Hasn't Been Yet
NDM-TCP currently lacks:
- Stability proof (Lyapunov analysis, eigenvalue study)
- Fairness proof (Nash equilibrium, rate convergence)
- Convergence analysis (does it reach optimal cwnd given enough time?)
- Formal model (state-space representation, transfer function)
All of these are possible to do. The system is well-defined. The code is open. The behavior is observable.
This is not "impossible to analyze." This is "nobody has done the formal analysis yet." That is exactly what makes it researchable.
5. The Implementation Exists — Which Is Rare
Most congestion control research stays at the simulation or theoretical level. NDM-TCP is:
- A working Linux kernel module
- Written in C
- Tested in real kernel environments
- Available as actual code (not just a mathematical model)
This is a huge advantage for research. You can test it. You can modify it. You can run experiments. You do not need to reimplement it from a paper. The artifact already exists.
Why Someone Should Care (From a Research Perspective)
For Networking Researchers
Question: Can entropy-based delay analysis improve congestion detection over pure RTT thresholding?
NDM-TCP provides a working testbed. You can compare it against BBRv2's delay gradient or CUBIC's RTT-based heuristics. The fact that it produces stable behavior suggests entropy might be a viable signal — but needs formal validation.
For Control Theory Researchers
Question: Can a recurrent nonlinear system with heuristic feedback achieve stability without formal tuning?
NDM-TCP is an accidental proof-of-concept. The tanh/sigmoid mappings and recurrent state were designed heuristically, not mathematically. Yet the system is stable. Understanding why would contribute to control theory knowledge — especially for systems that need to adapt without retraining.
For Machine Learning Researchers
Question: Can a recurrent nonlinear structure be useful without training, purely from heuristic design?
NDM-TCP does not train. There is no gradient descent. The weights are pseudo-random based on indices. Yet the recurrent structure seems to contribute to adaptive behavior (the predictable recovery pattern). This challenges assumptions about what "learning" means in practical systems — perhaps structured recurrence alone, without training, can provide useful memory effects when coupled with the right feedback signals (like entropy).
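A minimal sketch of what "untrained recurrence" can look like, assuming an index-derived weight scheme and Q8.8 fixed point (both hypothetical, chosen to mirror the description above, not taken from the module): the hidden state is a leaky integrator whose decayed value carries memory of past inputs with no training at all.

```c
#include <stdint.h>

#define NUNITS 4

/* Deterministic pseudo-random weight derived purely from indices
 * (no training): roughly [-0.5, 0.5) in Q8.8 fixed point, where
 * 256 represents 1.0. Hash constants are arbitrary. */
int16_t index_weight(int i, int j)
{
    uint32_t h = (uint32_t)(i * 2654435761u) ^ (uint32_t)(j * 40503u);
    return (int16_t)((h % 256) - 128);
}

/* One recurrent step: each unit's state is decayed self-feedback
 * plus an index-weighted share of the new input, saturated to
 * int16. Division (not >>) keeps behavior defined for negatives. */
void recurrent_step(int16_t state[NUNITS], int16_t input_q8)
{
    for (int i = 0; i < NUNITS; i++) {
        int32_t acc = ((int32_t)state[i] * 230) / 256; /* decay ~0.9 */
        acc += ((int32_t)index_weight(i, 0) * input_q8) / 256;
        if (acc > 32767)  acc = 32767;   /* saturate to 16 bits */
        if (acc < -32768) acc = -32768;
        state[i] = (int16_t)acc;
    }
}
```

Even this trivial structure exhibits the memory effect described: after the input goes back to zero, the state decays gradually rather than resetting, so past conditions keep influencing current output.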
For Systems Researchers
Question: How do you build adaptive algorithms in constrained environments (like the Linux kernel)?
NDM-TCP is implemented in kernel space with strict memory limits, no floating point, and hard real-time constraints. The design choices (16-bit storage, fixed-point arithmetic, simplified activations) are all compromises. Studying how those compromises affect behavior is valuable for any adaptive system in constrained environments.
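One plausible shape for such a compromise, assuming Q8.8 fixed point (256 represents 1.0): a piecewise-linear tanh approximation. This is my own illustration of the "simplified activations" trade-off, not the module's actual activation function.

```c
#include <stdint.h>

/* Piecewise-linear tanh approximation in Q8.8 fixed point
 * (no floating point, as required in kernel context):
 *   |x| <= 0.5          -> y = x          (tanh(x) ~ x near zero)
 *   0.5 < |x| < 2.0     -> shallow bridge segment
 *   |x| >= 2.0          -> y = +/-1.0     (saturation)
 * All three segments join continuously. */
int32_t tanh_q8(int32_t x)
{
    int32_t ax = x < 0 ? -x : x;
    int32_t y;

    if (ax >= 512)
        y = 256;                  /* saturated: |x| >= 2.0 */
    else if (ax <= 128)
        y = ax;                   /* near zero: identity */
    else
        y = 128 + (ax - 128) / 3; /* bridge: reaches 256 at ax=512 */

    return x < 0 ? -y : y;
}
```

The design question such compromises raise is exactly the one the article points at: how much of the system's observed stability depends on these coarse approximations versus the underlying structure.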
What Needs to Happen Next (If Anyone Cares)
If NDM-TCP has real research potential, what would proper investigation look like?
1. Third-Party Testing
Self-conducted tests are valuable but limited. Independent testing would:
- Verify the results are reproducible
- Test in conditions I did not think of
- Identify failure modes I missed
- Provide unbiased performance comparisons
2. Formal Stability Analysis
Someone with control theory background should:
- Model the system as a dynamical system
- Analyze eigenvalues of the recurrent feedback loop
- Prove or disprove stability under bounded inputs
- Identify conditions where oscillations grow unbounded
3. Fairness Analysis
Test NDM-TCP against competing flows:
- Does it starve CUBIC flows?
- Does it get starved by BBR?
- Does it converge to fair bandwidth sharing?
- How does it behave in multi-flow scenarios?
4. Theoretical Entropy Study
Prove (or disprove) that Shannon entropy on RTT history is a valid congestion signal:
- Under what network conditions does it work?
- When does it fail?
- Can it be combined with other signals (loss rate, queue delay) for better detection?
5. Comparison with State-of-the-Art
Benchmark against:
- CUBIC (current Linux default)
- BBRv2 (Google's delay-based approach)
- Reno (baseline)
- Vegas (another delay-based algorithm)
Run the same test scenarios. Measure throughput, latency, fairness, and stability. Publish results.
Honest Positioning: What This Is and Isn't
This is:
- A working prototype with interesting emergent behavior
- A research artifact that can be studied and extended
- Evidence that entropy + recurrence + plasticity can produce stable congestion control
- A starting point for formal investigation
This is not:
- Production-ready code
- Academically validated
- Proven stable under all conditions
- Better than existing algorithms (not claimed, not tested rigorously)
The research potential comes from the fact that something unexpected works — and we do not fully understand why yet.
Final Thought: Why This Matters
Most congestion control research is incremental. Take an existing algorithm, tweak one parameter, publish a paper showing 5% improvement in one metric. That is valuable. That is how fields advance.
But occasionally, something weird happens. Someone mixes ideas that should not obviously work together — entropy as a signal, recurrent nonlinear control, framework-aware modulation — and they do work. The result is not optimal. It is not proven. But it is interesting — in the sense that it raises new questions.
NDM-TCP is that kind of thing.
The stable entropy-guided sawtooth is not just "it works." It is "this combination of techniques produced stable behavior through co-production with the TCP framework, and we did not design that interaction formally." That is worth investigating properly — by someone with the mathematical tools and research resources to do it right.
I built a prototype. Someone else can turn it into science.
Written to clarify what the results actually mean, and why that matters for research — even if I am not the one to carry it forward.