Neural networks power modern AI — but for many developers, they still feel like magic.
Not because the math is impossible, but because most explanations are either:
- too theoretical, or
- hidden behind high-level libraries.
I built the Neural Network Lexicon to fix that.
What Is the Neural Network Lexicon?
It’s a concept-by-concept reference for neural networks, explained from first principles.
One concept per page.
Clear definitions.
No framework lock-in.
Each entry answers:
- What is this concept?
- Why does it matter?
- How does it work conceptually?
- What usually goes wrong?
And yes — every concept includes a minimal Python example to make the computation visible.
Why Python (and Why Minimal)?
The Python snippets are intentionally small.
Not to build full models, but to show one thing:
neural networks are just computations.
Seeing a neuron as a weighted sum or a loss function as a number you can print changes how you think about ML.
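For instance, a single neuron and its loss fit in a few lines. This is a minimal sketch in the spirit of the lexicon's examples (the input values, weights, and target here are made up for illustration):

```python
# A neuron is just a weighted sum passed through an activation.
inputs  = [0.5, -1.2, 3.0]   # made-up inputs
weights = [0.8,  0.1, -0.4]  # made-up weights
bias    = 0.2

# Weighted sum: w1*x1 + w2*x2 + w3*x3 + b
z = sum(w * x for w, x in zip(weights, inputs)) + bias

# ReLU activation: max(0, z)
activation = max(0.0, z)

# A loss is just a number you can print.
target = 1.0
loss = (activation - target) ** 2  # squared error

print("weighted sum:", z)
print("activation:  ", activation)
print("loss:        ", loss)
```

No tensors, no framework: a list, a sum, and a number you can inspect.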
Runnable Examples on GitHub
To keep the lexicon readable, the full runnable examples live on GitHub:
- One idea per file
- No frameworks
- Edit → run → observe
Read the concept, run the code, tweak a value, and learn faster.
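To give a feel for the format, a "one idea per file" example might look like the sketch below. This is not an actual file from the repository, just an illustration of the edit → run → observe loop: fit a single weight by gradient descent, then change the learning rate and rerun.

```python
# One idea: gradient descent on a single weight, no frameworks.
# Model: prediction = w * x. Data point consistent with y = 2 * x.
x, y = 3.0, 6.0
w = 0.0                 # initial guess
learning_rate = 0.05    # edit this, rerun, observe (try 0.2 and watch it diverge)

for step in range(10):
    prediction = w * x
    loss = (prediction - y) ** 2
    # dloss/dw = 2 * (prediction - y) * x
    gradient = 2 * (prediction - y) * x
    w -= learning_rate * gradient
    print(f"step {step}: w = {w:.4f}, loss = {loss:.4f}")
```

With a small learning rate, w glides toward 2.0; with a large one, each update overshoots and the loss explodes. That is exactly the kind of behavior the entries try to make visible.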
What Does It Cover?
The lexicon is complete, not just introductory:
- Core foundations (neurons, activations, loss)
- Training & optimization
- CNNs, RNNs, Transformers
- Generalization & robustness
- Explainability, uncertainty, fairness
- Deployment & model lifecycle
In total: 100 structured entries.
Who Is This For?
- Developers using ML libraries who want real understanding
- Students overwhelmed by fragmented explanations
- Engineers who want to debug models, not just train them
If you believe understanding comes before optimization, this is for you.
📘 Neural Network Lexicon (GitHub Wiki)
Built as part of SolveWithPython — learning by understanding, not memorizing.
Neural networks aren’t magic.
Once you understand what they compute, everything else follows.