Anyone with a bit of JavaScript and a library like TensorFlow.js can spin up a neural network in an afternoon. You feed it some data, call .train(), and watch the numbers fly. There's a moment of magic when .predict() gives you a result.
But making it produce the right result, consistently, across new and unexpected data? That's a different journey entirely.
This repository is a chronicle of that journey. It started as a simple sentiment analyzer and evolved into a complex, hierarchical, self-learning system I call the "AI Corporation." The final code is not the most important artifact here; the evolution is.
The Journey: From a Simple Script to a Flawed Mind
Our path wasn't a straight line. It was a cycle of building, testing, failing, and gaining a deeper understanding at each step.
Phase 1: The Black Box
It began with a single, "flat" neural network. It worked, sometimes. But when it failed, it was impossible to know why. It was a black box. Lesson: A working model isn't a smart model. If you can't debug its reasoning, you can't trust it.
Phase 2: The "AI Corporation" Architecture
To solve the black box problem, I broke the task down. I created a hierarchical system with distinct layers of responsibility:
- L1 Experts: Simple functions for raw text analysis.
- L2 Managers: Logic-based modules to interpret the experts' reports.
- L3 Director: The final neural network, making strategic decisions based on clean, structured data from the managers.
Lesson: A good architecture is more valuable than a complex algorithm. Interpretability is key.
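In sketch form, the three layers compose into a single pipeline. The function names and word lists below are illustrative stand-ins, not the repository's actual API, and the L3 Director is shown here as a simple rule where the real project uses a neural network:

```javascript
// L1 Experts: simple functions for raw text analysis, each returning a numeric signal.
const experts = {
  exclamationCount: text => (text.match(/!/g) || []).length,
  negativeWords: text =>
    ["bad", "broken", "refund"].filter(w => text.toLowerCase().includes(w)).length,
  positiveWords: text =>
    ["great", "love", "excellent"].filter(w => text.toLowerCase().includes(w)).length,
};

// L2 Manager: interprets the experts' raw reports into structured features.
function manager(text) {
  const report = {};
  for (const [name, expert] of Object.entries(experts)) report[name] = expert(text);
  return {
    negativity: report.negativeWords,
    positivity: report.positiveWords,
    intensity: report.exclamationCount,
  };
}

// L3 Director: final decision from clean, structured features
// (a transparent rule here; in the project this is where the neural network sits).
function director(features) {
  const score = features.positivity - features.negativity;
  if (score > 0) return "positive";
  if (score < 0) return "negative";
  return "neutral";
}

console.log(director(manager("I love this, excellent build!"))); // "positive"
```

The payoff of this shape is debuggability: when the final verdict is wrong, you can inspect the manager's feature report and each expert's raw signal to find exactly which layer misread the text.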
Phase 3: Building the Engine from Scratch
To truly understand what was happening, I threw away TensorFlow.js and built my own neural network engine from the ground up: DirectorNet.js. Implementing the matrix math, the feedforward algorithm, and the backpropagation of errors was the most enlightening part of this project.
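To give a flavor of what building it yourself involves: here is a minimal, self-contained sketch of the two core operations, a feedforward pass and a backpropagation update, reduced to a single sigmoid neuron rather than the full DirectorNet.js engine:

```javascript
const sigmoid = x => 1 / (1 + Math.exp(-x));

// Feedforward: weighted sum of inputs plus bias, squashed through sigmoid.
function feedforward(weights, bias, inputs) {
  const z = inputs.reduce((sum, x, i) => sum + x * weights[i], bias);
  return sigmoid(z);
}

// One backpropagation step for squared error:
// dE/dw_i = (out - target) * out * (1 - out) * x_i
function trainStep(weights, bias, inputs, target, lr = 0.5) {
  const out = feedforward(weights, bias, inputs);
  const delta = (out - target) * out * (1 - out);
  return {
    weights: weights.map((w, i) => w - lr * delta * inputs[i]),
    bias: bias - lr * delta,
  };
}

// Learn logical OR from four labeled examples.
let model = { weights: [0, 0], bias: 0 };
const data = [
  { x: [0, 0], y: 0 }, { x: [0, 1], y: 1 },
  { x: [1, 0], y: 1 }, { x: [1, 1], y: 1 },
];
for (let epoch = 0; epoch < 10000; epoch++) {
  for (const { x, y } of data) model = trainStep(model.weights, model.bias, x, y);
}
console.log(feedforward(model.weights, model.bias, [0, 1]) > 0.5); // true
```

A real engine generalizes this to weight matrices and multiple layers, but the chain-rule bookkeeping is the same idea repeated.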
Lesson: You don't truly understand a tool until you can build a simple version of it yourself.
Phase 4: The Paradox of Self-Learning
I gave the system a feedback loop. It could now process new, unlabeled data, assign a verdict, and use that new "experience" in its next training cycle.
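In sketch form, one cycle of that loop looks roughly like this (the function shape and stub train/predict are illustrative, not the project's actual code):

```javascript
// One cycle of self-learning: the model labels unlabeled data with its own
// verdicts, and those verdicts are merged into the next training set.
function selfLearningCycle(model, labeledData, unlabeledTexts, train, predict) {
  // 1. The current model assigns a verdict to each unlabeled example.
  const pseudoLabeled = unlabeledTexts.map(text => ({
    text,
    label: predict(model, text), // the model's own, possibly wrong, verdict
  }));
  // 2. Merge this new "experience" into the training set, errors included.
  const nextData = labeledData.concat(pseudoLabeled);
  // 3. Retrain on the combined set.
  return { model: train(nextData), data: nextData };
}

// Demo with stub train/predict: the model's pessimism propagates into the data.
const predict = (model, text) => model.defaultLabel;                      // stub
const train = data => ({ defaultLabel: data[data.length - 1].label });    // stub
const start = { defaultLabel: "negative" };
const { data } = selfLearningCycle(
  start, [{ text: "great!", label: "positive" }], ["meh", "ok"], train, predict
);
console.log(data.map(d => d.label)); // ["positive", "negative", "negative"]
```

Note step 2: nothing in the loop distinguishes a correct verdict from a wrong one before it becomes training data. That is the opening for the failure described next.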
And that's when it broke in the most fascinating way.
It started making small errors, which were then saved as "experience." In the next cycle, it trained on its own mistakes. This created a feedback loop of negativity. My AI developed a Model Bias. It became a paranoid pessimist, classifying almost every neutral review as a complaint because its "life experience" was tainted by its own past errors.
Lesson: Self-learning is a powerful but dangerous tool. An AI that learns from its own flawed conclusions will only become more confident in its mistakes.
Phase 5: The AI Trainer
The final stage wasn't about writing more complex code. It was about becoming an AI trainer. The solution was to "re-educate" the model by creating a "Cognitive Retraining" process, forcing it to give more weight to a "golden set" of human-verified data over its own flawed experience.
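One simple way to implement that idea is to oversample the golden set so it outweighs the model's accumulated experience during retraining. This is a minimal sketch of the approach, not the project's actual retraining code, and the 5x weight is an illustrative choice:

```javascript
// Build a retraining set where human-verified examples are repeated
// goldenWeight times, so they dominate the model's own (possibly tainted)
// self-generated experience.
function buildRetrainingSet(goldenSet, selfExperience, goldenWeight = 5) {
  const weighted = [];
  for (const example of goldenSet) {
    for (let i = 0; i < goldenWeight; i++) weighted.push(example);
  }
  return weighted.concat(selfExperience);
}

// 1 golden example at weight 5, plus 2 self-generated examples -> 7 total.
console.log(buildRetrainingSet([{ x: 1 }], [{ x: 2 }, { x: 3 }]).length); // 7
```

Equivalently, a per-example loss weight achieves the same effect without duplicating data; oversampling is just the easiest version to reason about.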
The Real Challenge Isn't the Code
This project proved to me that the code is the easy part. The real work of creating a useful AI is in:
- Architecture: Designing a system that is transparent and debuggable.
- Data Curation: Understanding that the model's "mind" is a direct reflection of the data you feed it. Garbage in, garbage out.
- Debugging Logic, Not Just Code: Finding the root cause of a bad decision not in a syntax error, but in a flawed "belief" the model formed during training.
- Understanding Limitations: Knowing when a neural network is the right tool, and when a simple if/else statement is more reliable.
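A concrete case of that last point: a hard business rule should be a deterministic, auditable check, not a learned weight that can drift. This sketch (names and the refund rule are illustrative) layers such a rule over the network's verdict:

```javascript
// A transparent rule takes priority over the model's verdict: any explicit
// refund or chargeback mention is always treated as a complaint.
function classifyReview(text, neuralVerdict) {
  if (/refund|chargeback/i.test(text)) return "complaint";
  return neuralVerdict;
}

console.log(classifyReview("I want a refund!", "neutral"));  // "complaint"
console.log(classifyReview("Great product", "positive"));    // "positive"
```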