## note:
i’m just a 14 yo newbie at this stuff (i haven’t watched any courses). i just tried to observe my own behavior and how people react, then turn that into code with no hard-coding (just raw logic). so don’t expect much, i just need feedback.
## 📝 details:
• built from scratch — no ML libraries or frameworks
• just pure logic + 3 built-in libraries
• it tries to mimic how a brain works: raw, flexible, always learning
• it’s optimized enough to run 10+ layers on a phone
• it currently predicts the next character from an input string (there’s a rough sketch of the idea right after this section)
• i paused the looping/thinking part until i confirm it’s actually learning properly.
• it’s super modular: you can tweak the number of neurons/layers in real time and it adapts on the fly, like a real brain (there’s a resizing sketch below too)
• it even handles corrupted or missing save files like a champ.
• ASCII input only for now, but i’m working on vision + hearing modules next.
• it learns live while running, so yeah… it might fry your potato laptop.
• you can even cross two brains by merging their save files (yes, brain children exist 💀; there’s a merge sketch below as well)
this is the second version, btw.
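to make the next-character thing concrete, here’s a minimal sketch of the general idea (not my exact code, just the shape of it): one-hot encode an ASCII character, push it through one dense layer, and take the highest-scoring output as the guess. only built-in libraries, to match the spirit of the post.

```python
import random

ASCII_SIZE = 128  # one slot per possible ASCII character

def one_hot(ch):
    """turn a character into a 128-long vector with a single 1 in it"""
    vec = [0.0] * ASCII_SIZE
    vec[ord(ch) % ASCII_SIZE] = 1.0
    return vec

# one dense layer: weights[output_neuron][input], random to start
weights = [[random.uniform(-0.1, 0.1) for _ in range(ASCII_SIZE)]
           for _ in range(ASCII_SIZE)]

def predict_next(ch):
    """forward pass: score every possible next character, pick the best one"""
    x = one_hot(ch)
    scores = [sum(w * xi for w, xi in zip(row, x)) for row in weights]
    return chr(scores.index(max(scores)))

print(repr(predict_next("h")))  # untrained, so basically a random guess
```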
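the real-time resizing from the list is less magic than it sounds if the weights are just nested lists. a minimal sketch, assuming the same weights[output][input] layout as above (the function name is made up):

```python
import random

def add_neuron(layer_weights, next_layer_weights, n_inputs):
    """grow a layer by one neuron while the network keeps running
    (assumes both layers are plain nested lists of weights)"""
    # the new neuron gets its own small random input weights...
    layer_weights.append([random.uniform(-0.1, 0.1) for _ in range(n_inputs)])
    # ...and every neuron in the next layer gets one extra incoming weight for it
    for row in next_layer_weights:
        row.append(random.uniform(-0.1, 0.1))
```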
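and the brain-children thing: a minimal sketch of one way to do the merge, assuming the save files are plain JSON full of nested weight lists and both parents have the same shape (the file names here are made up):

```python
import json
import random

def merge_brains(path_a, path_b, child_path):
    """make a 'child' save file by mixing two parents weight-for-weight"""
    with open(path_a) as f:
        a = json.load(f)
    with open(path_b) as f:
        b = json.load(f)

    def mix(x, y):
        # walk down the nested lists until we hit actual numbers
        if isinstance(x, list):
            return [mix(xi, yi) for xi, yi in zip(x, y)]
        return x if random.random() < 0.5 else y  # coin flip per weight

    with open(child_path, "w") as f:
        json.dump(mix(a, b), f)

# merge_brains("brain_a.json", "brain_b.json", "brain_child.json")
```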
## 🔗 links:
test it in your browser (my server + ngrok for forwarding)
watch a video where i train it on a phone (yes... a phone)
## 🔍 if you want proof it works:
check the video link above where i enter “hello” twice on a phone 📱
you’ll clearly see the total error drop massively in just one loop (like 634 to 62, or 500 to 2)
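(to be clear about that number: “total error” here is meant as the summed squared difference between the outputs and the one-hot targets over the whole input, roughly like the sketch below; the exact formula in the project may differ.)

```python
def total_error(outputs, targets):
    """sum of squared differences over every output neuron of every character"""
    err = 0.0
    for out_vec, target_vec in zip(outputs, targets):
        for o, t in zip(out_vec, target_vec):
            err += (o - t) ** 2
    return err

# the first pass over "hello" can give something huge, the next pass way less,
# because the weights have already moved toward the targets
```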
## ⌨️ commands:
/options
— enter setup process
/save
— saves current weights
/exit
— saves + exits
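a rough sketch of how a loop around these commands could look, assuming a brain object with setup / save / train_on methods (those names are made up, not the actual API):

```python
def run(brain):
    """read lines forever; slash-commands control the brain, anything else trains it"""
    while True:
        line = input("> ").strip()
        if line == "/options":
            brain.setup()         # re-enter the setup process
        elif line == "/save":
            brain.save()          # write the current weights to disk
        elif line == "/exit":
            brain.save()          # save, then quit
            break
        else:
            brain.train_on(line)  # normal text = live training input
```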
## ⚠️ note:
use a lower learning rate if the total error keeps going up (that’s probably exploding gradients; i think it can recover from it, though)
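why the lower learning rate helps: each weight basically moves by learning_rate * gradient per step, so a smaller rate means smaller steps and less chance of the error snowballing. a tiny sketch (the gradient clipping here is an extra safeguard, not necessarily something the project already does):

```python
def update_weight(weight, gradient, learning_rate, clip=5.0):
    """one gradient-descent step, with the gradient clamped so a single
    huge value can't launch the weight (and the total error) into orbit"""
    gradient = max(-clip, min(clip, gradient))
    return weight - learning_rate * gradient

# smaller learning_rate = smaller steps = slower but more stable learning
```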
any feedback?! just train it for some time and tell me what you observe: is it learning?