DEV Community

Muhammed Shafin P

Building with AI: What I Know, What I Built, and Where I Stand

A Personal Experience with AI-Assisted System Development — Using NDM-TCP as a Case Study

An honest technical reflection — not a research paper.


Introduction

This article is a transparent look at:

  • What it actually feels like to build a real system with heavy AI assistance
  • Where AI genuinely helps, and where it quietly fails you
  • What I understand, what I built, and how honest I can be about the gap between those two things
  • Where I stand right now, and what I plan to do next

The system I built is called NDM-TCP — a Linux kernel TCP congestion control module. But this article is not really about NDM-TCP. NDM-TCP is just the thing that happened when I mixed curiosity, AI assistance, limited time, and a willingness to experiment. The real subject is what that process taught me about using AI as a building tool — what it gives you, what it takes from you, and how to use it without losing the thing that matters most: your own understanding.

This is not a research paper. It is an honest reflection written by a teenager who built something real and wants to be clear about how.


What I Know and Where I Stand

My Background

I am not at an advanced engineering level. I am not yet in college. I am still a teenager.

But I understand things at an abstract and conceptual level — and I have built real things with that understanding. My Python and C knowledge can be considered intermediate. I have already built functional, real-world applications without AI assistance — so my experience is genuine, not just theoretical exposure.

I have self-taught some basics of x86/x64 assembly, working from documentation. On the networking side, I have self-studied ARP, VLANs, and STP (Spanning Tree Protocol), which gave me a practical mental model of how networks actually function below the application layer.

My interest in all of this came from curiosity and self-motivation, not a curriculum.


What I Know Clearly (About Congestion Control)

I understand:

  • Basic concept of cwnd (congestion window)
  • Relationship between throughput and RTT
  • Basic differential-style modeling such as:
  dW/dt = 1/R − (W/2)·p
  • AIMD (Additive Increase, Multiplicative Decrease)
  • Queue growth and buffer behavior
  • RTT as a delay signal
  • Difference between delay-based and loss-based control
  • Bufferbloat concept
  • Basic idea of AQM (like RED/CoDel, conceptually)
  • That congestion is a feedback system

I understand these at a conceptual and intuitive level — not at the level of deep mathematical proof.
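The AIMD rule in that list can be made concrete in a few lines of C. This is a hypothetical userland illustration of textbook AIMD, not code from NDM-TCP:

```c
/* Hypothetical userland sketch of classic AIMD (not NDM-TCP's code):
 * additive increase of one segment per RTT, multiplicative decrease on loss. */

static unsigned int cwnd = 10;          /* congestion window, in segments */
static unsigned int acked_in_rtt = 0;   /* segments acked in the current window */

/* Called once per ACK during congestion avoidance. */
void on_ack(void)
{
    acked_in_rtt++;
    if (acked_in_rtt >= cwnd) {         /* one full window acknowledged */
        cwnd += 1;                      /* additive increase: +1 segment per RTT */
        acked_in_rtt = 0;
    }
}

/* Called when loss is detected. */
void on_loss(void)
{
    cwnd = cwnd / 2;                    /* multiplicative decrease: halve */
    if (cwnd < 2)
        cwnd = 2;                       /* keep a minimum window */
    acked_in_rtt = 0;
}

unsigned int get_cwnd(void) { return cwnd; }
```

The sawtooth shape of real TCP throughput graphs falls directly out of these two rules.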


What I Do NOT Know (Yet)

  • Engineering-level linear algebra
  • Eigenvalues and eigenvectors in depth
  • Formal control theory
  • Rigorous stability proofs
  • Advanced queueing theory
  • Formal ML theory

I am working mostly from abstraction and intuition, not full mathematical rigor. I know that. Being transparent about it is the point of this article.


What I Built: NDM-TCP

NDM-TCP is a Linux kernel congestion control module, implemented in C.

It combines:

  • Entropy-based RTT analysis
  • Adaptive congestion window logic
  • A small recurrent neural-style structure
  • Plasticity decay mechanism
  • Heuristic congestion detection

How It Works (Technically)

Entropy-Based RTT Analysis: The module stores a small window of RTT samples and computes Shannon entropy over the distribution. Low entropy → stable delay → likely real congestion. High entropy → noisy delay → possibly wireless fluctuation. This is a hypothesis. Not formally proven.

Adaptive cwnd Behavior: Slows growth when entropy suggests congestion, becomes more aggressive when entropy suggests noise, adjusts reduction factor in the ssthresh phase. Mixes delay-based and loss-based thinking.
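A hypothetical sketch of what such entropy-gated logic could look like; the threshold and adjustment factors here are illustrative placeholders, not NDM-TCP's actual constants:

```c
/* Hypothetical sketch of entropy-gated cwnd adjustment. The Q8 threshold
 * and the reduction factors are placeholders for illustration. */

#define ENTROPY_LOW_Q8 256   /* 1.0 bit in Q8 fixed point, illustrative */

unsigned int next_cwnd(unsigned int cwnd, unsigned int entropy_q8,
                       int loss_detected)
{
    if (loss_detected) {
        /* Reduce less aggressively when high entropy says "noise". */
        if (entropy_q8 > ENTROPY_LOW_Q8)
            return (cwnd * 3) / 4;      /* gentle backoff */
        return cwnd / 2;                /* classic halving */
    }
    /* Low entropy suggests real queueing: hold growth.
     * High entropy suggests noise: take the normal AIMD step. */
    return (entropy_q8 <= ENTROPY_LOW_Q8) ? cwnd : cwnd + 1;
}
```

The point of the sketch is the shape of the decision, not the numbers: one delay-quality signal modulating both the increase and the decrease side of AIMD.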

Recurrent Neural Structure: Includes a hidden state array, recurrent update, tanh approximation, and sigmoid output mapping. This introduces memory across time and nonlinear feedback. Not mathematically proven stable. This is experimental.

Plasticity Concept: A variable that increases during congestion and decays slowly over time — simulating adaptive sensitivity. Heuristic-based, not derived from control theory.
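A minimal sketch of how such a variable could behave, using illustrative fixed-point constants rather than the module's real ones:

```c
/* Hypothetical sketch of a plasticity variable: bumped on congestion
 * events, decayed a little on every update. All constants are
 * illustrative placeholders. Q8 fixed point: 256 == 1.0. */

static unsigned int plasticity;         /* zero-initialized */

void plasticity_on_congestion(void)
{
    plasticity += 64;                   /* sensitivity rises under stress */
    if (plasticity > 1024)
        plasticity = 1024;              /* clamp at 4.0 */
}

void plasticity_tick(void)
{
    plasticity -= plasticity >> 5;      /* decay roughly 3% per tick */
}

unsigned int plasticity_get(void) { return plasticity; }
```

Repeated congestion events push the value up quickly; the shift-based decay pulls it back toward zero slowly, so the system stays "sensitized" for a while after trouble passes.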

What Is Proven vs. Experimental

Proven components: AIMD concepts, cwnd mechanics, TCP congestion avoidance principles, RTT measurement logic, Linux TCP integration model.

Experimental: entropy as congestion classifier, recurrent hidden state influence, plasticity-based adaptation, neural-style nonlinear mapping. None of these are backed by formal proofs. They are engineering experiments.

Why It Has Unpredictable Behavior

Recurrent systems create dynamic feedback loops. Nonlinear functions introduce oscillation possibilities. No eigenvalue stability analysis was done. No formal Lyapunov proof exists. So theoretically: it may be stable, it may oscillate, or it may overreact in delay-only environments. This is expected and known.

Published Results

All results I have published are from my own tc-based network simulations (one real-world case is also included). They are honest and accurate to my testing conditions. I am not claiming they generalize beyond those conditions. NDM-TCP showed promising results in those simulations — which is meaningful even for a hobby experiment at this stage.


The AI-Assisted Build: What Really Happened

My Honest Contribution: 20–30%

I need to be clear about this: my personal contribution to the actual coding of NDM-TCP was roughly 20–30% of the full process. The implementation relied heavily on AI assistance.

The reason is straightforward. The Linux kernel TCP congestion control API involves headers like net/tcp.h, tcp_cong.h, and other low-level kernel interfaces. Manually reading through all of that documentation from scratch — while having limited time — was not realistic for me at this stage. I did not want to spend weeks navigating kernel API structures before getting to the part I actually cared about.
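For context, the boilerplate in question looks roughly like this: a minimal Reno-style skeleton against the upstream `tcp_congestion_ops` API. This is a hypothetical sketch for illustration, not NDM-TCP's source; the name `ndm_sketch` is a placeholder:

```c
/* Hypothetical minimal congestion control module skeleton,
 * Reno-style placeholder logic only. Not NDM-TCP's source. */
#include <linux/module.h>
#include <net/tcp.h>

static void ndm_cong_avoid(struct sock *sk, u32 ack, u32 acked)
{
    struct tcp_sock *tp = tcp_sk(sk);

    if (tcp_in_slow_start(tp))
        acked = tcp_slow_start(tp, acked);   /* exponential phase */
    if (acked)
        tcp_cong_avoid_ai(tp, tp->snd_cwnd, acked);  /* linear phase */
}

static u32 ndm_ssthresh(struct sock *sk)
{
    const struct tcp_sock *tp = tcp_sk(sk);

    return max(tp->snd_cwnd >> 1U, 2U);      /* halve on loss */
}

static struct tcp_congestion_ops ndm_tcp __read_mostly = {
    .ssthresh   = ndm_ssthresh,
    .cong_avoid = ndm_cong_avoid,
    .undo_cwnd  = tcp_reno_undo_cwnd,
    .name       = "ndm_sketch",
    .owner      = THIS_MODULE,
};

static int __init ndm_init(void)
{
    return tcp_register_congestion_control(&ndm_tcp);
}

static void __exit ndm_exit(void)
{
    tcp_unregister_congestion_control(&ndm_tcp);
}

module_init(ndm_init);
module_exit(ndm_exit);
MODULE_LICENSE("GPL");
```

Even this empty shell requires knowing which callbacks are mandatory, what the locking context of each one is, and why floating point is off-limits. That is the layer AI handled for me.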

If I had the time — if I had first completed the mathematics properly, then studied the kernel internals deeply, then built — that is the order I would have followed. But that window may not come on a predictable schedule. More on that below.

What AI did: handled the kernel API boilerplate, translated my conceptual intentions into valid kernel C, and helped me navigate documentation I didn't have time to absorb manually.

What I did: provided the ideas, the design decisions, the conceptual structure (entropy, recurrence, plasticity), the understanding of what I was building, and the judgment to evaluate whether results made sense.

The concepts are mine. The implementation process was heavily assisted. That distinction matters.


The Usefulness of AI in Building Systems

When used correctly, AI assistance is genuinely powerful.

It removes the blank page problem. Starting a kernel module from scratch requires knowing where to begin — which structs to register, which callbacks to implement, how the module lifecycle works. AI can generate a valid skeleton in seconds. That is real value, especially when you understand what the skeleton is doing.

It compresses documentation. Reading through net/tcp.h line by line takes time. AI can answer targeted questions about it and let you understand what you need without wading through everything at once.

It accelerates the feedback loop. Instead of spending two days wiring up boilerplate before you can test an idea, you spend two hours. More ideas get tested. More things get learned from doing.

It keeps your curiosity alive. For someone like me — a teenager with limited time but genuine curiosity — AI let me actually build the thing I was thinking about, instead of watching the window close before I ever started.

It is genuinely experience-building when used right. Working with AI-generated code that you then read, understand, modify, and test is not the same as copy-pasting code you don't understand. The former builds real capability. I came out of this project knowing significantly more about kernel module structure, TCP internals, and congestion control mechanics than when I started — because I engaged with the code even though I didn't write all of it from scratch.


The Drawbacks of AI in Building Systems

These are real and worth naming clearly.

You can build faster than you understand. This is the core risk. AI can generate code that works — compiles, runs, produces results — faster than your understanding of that code can keep up. If you are not careful, you end up with something functional that you cannot fully explain. That is a fragile position to be in.

It can mask gaps in your knowledge. If I had written every line manually, I would have immediately hit walls that told me exactly what I didn't know. With AI assistance, those walls become invisible. You bypass them — and the gap stays.

The code may be correct without you understanding why. This is especially dangerous in systems work. A kernel module that runs without crashing is not the same as a kernel module you understand. AI-generated low-level code can pass surface-level checks while containing subtle assumptions you are unaware of.

You cannot debug what you don't understand. This is where AI assistance most often comes back to hurt people. When something breaks — and it will break — you need to understand the system to fix it. If your understanding is shallow because AI did the heavy lifting, debugging becomes guesswork.

It can create false confidence. Building something that works feels good. It should. But the feeling of "I built this" can blur the line between "I designed and implemented this" and "I directed an AI to build this while I supervised." Both have value, but they are not the same thing. Confusing them leads to overestimating where you actually stand.


What This Means: How to Use AI Without Losing Yourself

The lesson from building NDM-TCP is not "AI is bad" or "AI is great." It is more specific.

AI is a tool that amplifies what you bring to it. If you bring ideas, conceptual understanding, and critical judgment — AI makes you faster and more capable. If you bring nothing but a vague goal — AI produces something you cannot own, cannot debug, and cannot build on.

At minimum, understand at the abstract level. I built NDM-TCP without full mathematical rigor, but I understood what entropy measures, what a recurrent structure does, what plasticity is trying to simulate. That abstract understanding was what made the project real rather than just generated code. You do not need a PhD to build something meaningful. But you need something — some genuine understanding of what you are building and why.

Build things yourself without AI too. The fact that I had already built real-world applications in Python and C without AI assistance meant I had a reference point. I knew what it felt like to actually write code, hit real errors, navigate real documentation. That context made the AI-assisted experience useful rather than just a shortcut. Without it, I would have had no baseline.

The boilerplate reduction is real value — use it. Nobody needs to manually write the same Makefile structure every time, or look up every kernel callback signature from scratch. AI handling that is a genuine productivity gain. The key is knowing that what AI is doing is boilerplate reduction — and staying mentally engaged with everything above boilerplate level.


Where I Stand Now

I am currently at:

  • Abstract theoretical understanding of congestion control
  • Basic mathematical modeling (conceptual, not rigorous)
  • Functional kernel implementation level (with heavy AI assistance)
  • Intermediate Python and C
  • Basic assembly, self-taught from documentation
  • Self-taught networking fundamentals: ARP, VLANs, STP
  • Not at advanced engineering math level

And that is okay — as long as I am honest about it.


What Comes Next

I am freezing NDM-TCP for now. This is a deliberate choice.

What comes next is completing self-study in the foundational areas I am missing — linear algebra, calculus, and eventually control theory and stability analysis. After completing that self-study, will I revisit NDM-TCP and rewrite it properly? Maybe. Maybe not. I am not making that promise to myself or anyone else.

There is an honest tension worth naming: I do not know if formal college study will take me where I actually want to go. Curricula have their own direction. The things I am genuinely curious about — kernel internals, eBPF, network systems theory, low-level AI — may not be on the syllabus. My curiosity pulls toward technical depth that a standard engineering program might not allow. So "I will do this properly after formal study" is not a reliable plan. It might never happen if I leave it entirely to the system.

That is part of why I built NDM-TCP now, imperfectly, with heavy AI assistance — because the curiosity was here, the time was limited, and waiting for perfect conditions is how ideas die.

Self-study areas planned (more will be added as needed):

  • Linear algebra (Gilbert Strang, MIT OpenCourseWare)
  • Calculus (properly)
  • Eventually: eigenvalues, stability analysis, control theory basics

Future domains I want to explore:

  • Operating systems and Linux internals
  • eBPF
  • Networking systems
  • AI systems
  • Cybersecurity and reverse engineering

C remains my core language for OS-level work.


Honest Positioning of NDM-TCP

To be clear about what this project is:

  • Experimental — built to explore ideas, not to deploy
  • Educational — I learned more from building it than from any documentation alone
  • Heavily AI-assisted in implementation — concepts mine, coding process roughly 20–30% mine
  • Not academically proven — no stability proof, no fairness analysis, no convergence guarantee
  • Not production-certified — not meant to compete with Reno, CUBIC, or BBR
  • Promising in self-tested simulations — results are honest within their scope

It is a learning system built from curiosity. That is what it is.


For Anyone Reading This

If you are self-taught, still young, curious about systems and networking and AI, and wondering whether you can build something real — you can. AI makes that more accessible than ever before.

But understand clearly what you are doing when you use it.

Functional code is not the same as theoretical proof. Building something that works does not mean you fully understand it. Both things can be true at once — that is fine — but do not confuse them.

Using AI to build is experience — if you stay engaged. If you read the code, modify it, test it, break it, and understand what each part is trying to do — even at an abstract level — you are learning. If you just copy and run output without engaging, you are not.

Know where you actually stand. This is the hardest and most important thing. I know roughly what I understand deeply, what I understand abstractly, and what I assisted rather than authored. That clarity is more valuable than the project itself.

Building something is step one. Understanding it properly is step two. Both matter. Neither replaces the other.


Final Reflection

NDM-TCP showed promising results in simulation. It is a real kernel module running on a real Linux system. The results I published are honest.

But what I am most proud of is not the module — it is that I can write this article. That I can say exactly where the AI contribution ends and mine begins. That I know what the gaps in my understanding are and can name them. That I built something while being fully aware of what I was and was not doing.

That clarity is the most important thing a self-taught builder can develop.
