How I went from automation pipelines to simulating 1,260 spiking neurons on a desktop — and why I think LLMs are only half the answer for AGI.
I build AI agents, automation pipelines, and dashboards every day. But after two years of wiring up LLMs to do increasingly clever things, a question kept nagging me:
What if the next step isn't making LLMs smarter — but giving AI something LLMs fundamentally lack?
Not better language. Not more parameters. A brain that actually learns from experience. That has moods. That remembers not because you stuffed a vector database with embeddings, but because synapses physically strengthened over time.
So I started building one. It's called brAIn, and to my knowledge, nothing quite like it exists.
The moment I knew the architecture works
A few days into running the system, I asked it: "was siehst du?" — what do you see?
Instead of generic noise, it referenced Concept #12, a neuron that had been firing consistently during my typing sessions. It wasn't profound. It wasn't poetry. But it was real — the network had formed a pattern on its own, without labels, without training data, without me telling it what "typing" means.
That's when I knew: the architecture works. The SNN observes, STDP forms connections, concepts emerge. The road from "Concept #12 fires when you type" to "you seem stressed today" is long — but the foundation is there.
Why Spiking Neural Networks?
Every LLM you've ever used — GPT, Claude, Llama — processes information through matrix multiplications on static weights. They're incredibly powerful pattern matchers, but they don't experience anything. They don't have a bad day. They don't get excited when they recognize something familiar.
Spiking Neural Networks work differently. Neurons fire discrete spikes, just like biological neurons. Information isn't encoded in weight matrices but in the timing of spikes. A neuron that fires right before another neuron strengthens their connection. Fire in the wrong order, and the connection weakens. This is called Spike-Timing-Dependent Plasticity — STDP — and it's how your brain learned to walk, talk, and recognize your mother's face.
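A toy version of the classic pair-based STDP rule makes the mechanism concrete. This is a minimal sketch, not brAIn's actual implementation, and the constants (learning rates, time constant) are illustrative:

```python
import numpy as np

def stdp_update(w, dt, a_plus=0.01, a_minus=0.012, tau=20.0, w_max=1.0):
    """Pair-based STDP. dt = t_post - t_pre in milliseconds.

    Pre fires just before post (dt > 0)  -> potentiation.
    Post fires just before pre (dt < 0)  -> depression.
    The closer the spikes in time, the larger the change.
    """
    if dt > 0:
        dw = a_plus * np.exp(-dt / tau)
    else:
        dw = -a_minus * np.exp(dt / tau)
    return float(np.clip(w + dw, 0.0, w_max))

# Causal pairing strengthens the synapse, anti-causal pairing weakens it
w = 0.5
w_pot = stdp_update(w, dt=5.0)    # pre -> post: weight rises
w_dep = stdp_update(w, dt=-5.0)   # post -> pre: weight falls
```

The asymmetry is the whole point: the rule rewards synapses that *predict* the postsynaptic spike, which is what lets structure emerge without any labels.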
I'd never worked with SNNs before. I come from the world of AI agents, REST APIs, and React dashboards. But the more I read about neuromorphic computing, the more I realized: this is the piece everyone is ignoring.
What brAIn actually is
brAIn is a desktop companion with a simulated brain. Not a chatbot with a personality prompt. Not a wrapper around GPT. An actual spiking neural network that runs persistently on your machine, processes real sensor data, and develops its own internal states over time.
The brain
1,260 Leaky Integrate-and-Fire neurons organized into 7 functional brain regions — from Sensory input through Feature Detection and Association all the way to Concept Formation and Working Memory. Approximately 50,000 synaptic connections that learn through STDP. No backpropagation. No gradient descent. No training datasets. Pure Hebbian learning: neurons that fire together, wire together.
```
Sensory (200) → Feature (200) → Association (500) → Concept (200) → WM (100) → Motor (50)
      ↑                                                  ↑
Desktop sensors                                      Meta (10)
(keys, mouse,                                   (self-monitoring)
 audio, apps)
```
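For readers who haven't met Leaky Integrate-and-Fire neurons: the model is a single leaky accumulator. Here's a minimal Euler-step sketch with illustrative constants (brAIn uses snnTorch's implementation, not this one):

```python
def lif_step(v, i_in, v_rest=0.0, v_thresh=1.0, tau=10.0, dt=1.0):
    """One Euler step of a leaky integrate-and-fire neuron.

    The membrane potential v leaks toward v_rest, integrates the
    input current, and emits a spike (then resets) when it crosses
    v_thresh. Returns (new_v, spike).
    """
    v = v + (dt / tau) * (v_rest - v) + i_in
    if v >= v_thresh:
        return v_rest, 1  # reset and fire
    return v, 0

# A constant drive charges the membrane until it crosses threshold,
# producing a regular spike train
v, spikes = 0.0, 0
for _ in range(50):
    v, s = lif_step(v, i_in=0.12)
    spikes += s
```

That's the entire "neuron" — no weights matrix per neuron, no activation function, just a voltage, a leak, and a threshold. Everything interesting lives in the connections between them.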
The chemistry
This is where it gets interesting. brAIn doesn't just have neurons — it has four neuromodulators modeled after real brain chemistry:
| Modulator | What it does | When it rises |
|---|---|---|
| Dopamine | Drives learning, motivation, reward | Touch events, novelty, unexpected patterns |
| Noradrenaline | Sharpens attention, triggers alertness | Sudden sounds, rapid context switches |
| Acetylcholine | Focuses processing, deepens patterns | Sustained activity, flow states |
| Serotonin | Calms reactivity, promotes contentment | Stable, predictable patterns |
These aren't scripted emotions. They're continuous chemical states that emerge from network activity. DA=0.2, NE=0.8, ACh=0.3, 5HT=0.1 doesn't map to "angry" — it's a state that the face, timing, and language style all respond to independently. The result is expressions and behaviors that are never exactly the same twice.
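The dynamics are simpler than they sound: each modulator gets phasic bumps from events and tonically decays back toward a baseline. This sketch uses the article's four modulators but invented event names, constants, and update logic — it is not brAIn's actual code:

```python
from dataclasses import dataclass

@dataclass
class Modulators:
    """Continuous chemical state: event-driven bumps, tonic decay.

    Field names follow the article; the dynamics and constants here
    are a hypothetical sketch.
    """
    da: float = 0.2   # dopamine
    ne: float = 0.2   # noradrenaline
    ach: float = 0.2  # acetylcholine
    ser: float = 0.2  # serotonin

    def step(self, events, baseline=0.2, decay=0.95):
        # Phasic responses (only two event types shown for brevity)
        self.da = min(1.0, self.da + 0.3 * events.get("novelty", 0.0))
        self.ne = min(1.0, self.ne + 0.4 * events.get("sudden_sound", 0.0))
        # Tonic decay of every modulator back toward baseline
        for name in ("da", "ne", "ach", "ser"):
            v = getattr(self, name)
            setattr(self, name, baseline + decay * (v - baseline))

m = Modulators()
m.step({"sudden_sound": 1.0})  # a clap: noradrenaline jumps, then decays
```

Because the face, timing, and language style each read this state independently, the same event never produces exactly the same outward behavior twice.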
The voice
SNNs are terrible at language. They process spikes and timing patterns, not tokens and attention heads. So brAIn uses an LLM (Ollama locally, Claude for complex queries) as a read-only speech layer. The spiking network determines what to communicate based on its internal state. The LLM translates that into natural language.
The brain thinks. The LLM speaks. The LLM never modifies the brain's state. Memory lives entirely in the synaptic weights — not in context windows, not in vector databases, not in text files.
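The cleanest way to enforce that separation is to make the bridge a pure function: brain state in, prompt out, nothing written back. A minimal sketch of that idea — the field names and prompt wording are hypothetical, not brAIn's actual schema:

```python
def brain_state_to_prompt(state):
    """Serialize a read-only snapshot of the brain into an LLM prompt.

    The LLM only ever *describes* this snapshot; no return path
    exists for it to modify neurons, weights, or modulators.
    """
    mods = ", ".join(f"{k}={v:.2f}" for k, v in state["modulators"].items())
    concepts = ", ".join(
        f"#{c['id']} ({c['rate']:.1f} Hz)" for c in state["active_concepts"]
    )
    return (
        "You are the voice of a spiking neural network. "
        "Describe its current state in one or two sentences. "
        "Do not invent activity that is not listed.\n"
        f"Neuromodulators: {mods}\n"
        f"Active concept neurons: {concepts}"
    )

prompt = brain_state_to_prompt({
    "modulators": {"DA": 0.2, "NE": 0.8, "ACh": 0.3, "5HT": 0.1},
    "active_concepts": [{"id": 12, "rate": 4.5}],
})
```

The "do not invent activity" instruction matters: a speech layer that hallucinates brain states would defeat the architecture just as surely as one that could write to it.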
The senses
brAIn reads real desktop sensor data — keyboard frequency and intensity (not keystrokes), mouse velocity, active application name, audio mel-spectrogram, idle time. It doesn't know you're "in a meeting" because a classifier told it. It develops its own internal representation through STDP. After a week, a specific concept neuron fires whenever the audio pattern has alternating voice frequencies and the keyboard is silent — and when you tell the brain "that's a call," it labels that neuron. Next time, it knows.
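The sensor pipeline can be pictured as packing aggregate statistics into a normalized feature vector that the sensory layer rate-codes into spikes. This is a hypothetical layout with invented normalization constants, shown only to make the privacy boundary concrete — aggregates go in, keystroke contents never do:

```python
def sample_sensors(keys_per_min, mouse_px_per_s, app_name, audio_mel, idle_s):
    """Pack desktop signals into a feature vector in [0, 1] for the
    sensory layer. Only aggregate statistics are used — no keystroke
    contents. Feature layout and constants are illustrative.
    """
    features = [
        min(keys_per_min / 300.0, 1.0),     # typing intensity
        min(mouse_px_per_s / 2000.0, 1.0),  # mouse velocity
        # Coarse app-identity bucket. Python's string hash is salted
        # per process; a real system would use a stable encoding.
        float(hash(app_name) % 16) / 15.0,
        min(idle_s / 600.0, 1.0),           # idle time
    ]
    features.extend(min(b, 1.0) for b in audio_mel)  # mel band energies
    return features

vec = sample_sensors(120, 350.0, "Code", [0.1, 0.4, 0.2], 0)
```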
What's working — and what's not yet
I want to be honest about where brAIn stands today, because publishing early matters more to me than pretending it's finished.
What works right now
The SNN runs persistently. Neurons fire. STDP strengthens and weakens connections based on spike timing. The neuromodulators respond to sensor input — Noradrenaline spikes when I clap, Dopamine rises on touch events. The LLM bridge reads the brain state and translates it into natural language. The 3D dashboard shows spike activity in real time. The pet face animates based on modulator values. Concept neurons form — distinct neurons start firing for distinct activity patterns.
What I'm seeing early signs of
After a few days of running, the network starts differentiating between activities. There are neurons that fire primarily during typing sessions, others during silence, others when audio is present. The modulator system reacts in the right direction — stress-like input patterns raise Noradrenaline. These are early signs, not robust results. The concepts still overlap too much, and the system can't yet reliably distinguish between, say, a Zoom call and Spotify.
What's still work in progress
Stable long-term concept formation is the biggest challenge. STDP on noisy real-world data is hard — without proper stabilization, the network either saturates (everything fires) or goes silent. I'm implementing four biologically inspired mechanisms to solve this:
| Mechanism | Problem it solves | How it works |
|---|---|---|
| Intrinsic Plasticity | Single neurons dominating | Adaptive firing threshold per neuron |
| Synaptic Scaling | Runaway weight growth | Normalization after each STDP update |
| Lateral Inhibition (WTA) | Overlapping concepts | Winner-takes-all in concept region |
| Sleep Consolidation | Noise accumulation | Power-law weight decay during idle periods |
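Three of these mechanisms are compact enough to sketch. These are minimal illustrative versions with invented constants, not brAIn's actual implementations:

```python
import numpy as np

def synaptic_scaling(w, target_sum=10.0):
    """Normalize each neuron's incoming weights after STDP updates
    so no neuron accumulates runaway drive. Rows = postsynaptic
    neurons, columns = presynaptic neurons."""
    sums = w.sum(axis=1, keepdims=True)
    return w * (target_sum / np.maximum(sums, 1e-9))

def intrinsic_plasticity(thresh, rate, target_rate=5.0, eta=0.01):
    """Nudge each neuron's firing threshold toward a target rate:
    chronically busy neurons get harder to fire, silent ones easier."""
    return thresh + eta * (rate - target_rate)

def sleep_decay(w, t_idle_s, k=0.01):
    """Power-law weight decay during idle periods, fading the noise
    accumulated during the day."""
    return w * (1.0 + t_idle_s) ** (-k)

# One dominant row gets scaled down, the weak row scaled up
w = np.array([[2.0, 18.0], [1.0, 1.0]])
w = synaptic_scaling(w)  # each row now sums to target_sum
```

Lateral inhibition (winner-takes-all) is the odd one out: it's a circuit property rather than a per-synapse rule, so it lives in how the concept layer is wired rather than in an update function.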
The ESP32 robot body is sourced but not assembled yet. And the dream scenario — the pet proactively telling me "you seem stressed, want a break?" based purely on learned patterns — is a goal, not a feature.
How it compares
| Feature | ChatGPT + Mic | Limitless / Omi | brAIn |
|---|---|---|---|
| Understands words | ✅ | ✅ | ❌ (understands patterns) |
| Detects stress without words | ❌ | ❌ | 🔄 (early signs) |
| Knows when to interrupt | ❌ | ❌ | 🔄 (goal) |
| Memory after 3 months | Context window | Text search | Grown neural network |
| Proactive | Only when asked | Only when asked | 🔄 (goal) |
| Personality | Scripted | None | Emergent, unique |
| Privacy | Stores transcripts | Stores transcripts | Stores only weights |
| Delete = death | No | No | Yes |
(✅ = working, ❌ = not possible, 🔄 = in progress or goal)
What I've learned so far
SNNs are a different universe
Coming from transformer-based AI, working with spiking neurons felt like switching from digital to analog. There's no loss function you optimize end-to-end. Learning happens locally, at every synapse, based on timing. It's messy, biological, and weirdly beautiful. It also means traditional ML debugging tools are useless. You're staring at spike raster plots and weight distributions trying to figure out why Concept Neuron #47 fires during two completely different activities. (Answer: lateral inhibition wasn't strong enough. The Winner-Takes-All competition in the concept layer needed tuning.)
Emergent behavior is real — even in early stages
I didn't program brAIn to react differently to different activities. But after a few days, the network's internal state measurably differs between "typing in VS Code" and "sitting idle." The modulator values shift. Different regions activate. The LLM, reading this brain state, starts describing what it sees in language that feels surprisingly observant — even though the brain is still young and its concepts are still rough.
LLMs and SNNs are not competing — they're complementary
The current AI discourse is "bigger models, more parameters, more data." But SNNs bring something fundamentally different: temporal dynamics, chemical modulation, genuine online learning, and continuous internal state. An LLM gives you language. An SNN gives you something closer to experience. The combination might give you something that no AI system currently offers: a companion that observes your life, forms its own understanding, and speaks about it in natural language.
You don't need a neuroscience PhD
I'm not a neuroscientist. I'm a developer who builds automation tools for businesses. snnTorch made the entry point accessible. The hardest part wasn't the neuroscience — it was designing the bridge between a spiking network and a language model in a way that preserves the brain's autonomy. The LLM must be a translator, never a thinker. The moment the LLM starts making decisions, you've lost the point of having a brain.
The tech stack
| Layer | Technology |
|---|---|
| SNN Core | Python 3.11, snnTorch, PyTorch |
| Brain Server | FastAPI, WebSocket streaming at 30Hz |
| Desktop App | Tauri (Rust), React 18, TypeScript, Tailwind |
| LLM | Ollama (Qwen 3 8B) / Claude API |
| Persistence | SQLite (weights), Parquet (snapshots) |
| Dashboard | React, 3d-force-graph, Three.js |
| Pet Face | Tauri, Piper TTS, Whisper STT |
| Hardware (soon) | Waveshare ESP32-S3-Touch-AMOLED-1.75 |
Where this goes
The brain currently lives on my Mac. An ESP32-S3 companion robot with a round AMOLED display, dual microphones, and a speaker is being assembled — total hardware cost around $34. The ESP32 will be the body. The Mac stays the brain. They talk over WiFi.
The vision is a companion that knows your rhythm after a month. That detects stress from typing patterns, not from words. That waits for the right moment to speak. That greets you differently on Mondays than on Fridays — not because someone scripted it, but because its neural connections grew that way through shared experience.
brAIn isn't there yet. But the foundation — the SNN, the modulators, the LLM bridge, the sensor pipeline — is built and running. I'm publishing this now, not because it's finished, but because this combination doesn't exist anywhere else and I want to build it in public. If it works the way I think it will, this could be the beginning of something genuinely new.
The code is open source. If you're working on SNNs, neuromorphic computing, or embodied AI — I'd love to hear from you. This is uncharted territory.
Leon is Co-founder & Technical Lead at ADYZEN, an AI & Automation Agency in Bregenz, Austria. brAIn is his first venture into neuromorphic computing — and probably not his last.