Every AI agent framework has the same architecture: a conductor telling tools what to do.
LangGraph: state machine orchestrator. CrewAI: manager assigns tasks. AutoGPT: loop until done.
They all assume someone has to be in charge.
I built one where nobody is in charge. And it works better than I expected.
## It Started With a Book
"Sparks of Genius" by Robert and Michele Root-Bernstein (1999). Not an AI book. Not even a CS book. A book about how creative people think.
They studied Einstein, Picasso, da Vinci, Feynman, Nabokov, Virginia Woolf — people from completely different fields. And found something weird:
They all use the same 13 thinking tools.
- Observing
- Imaging
- Abstracting
- Recognizing Patterns
- Forming Patterns
- Analogizing
- Body Thinking
- Empathizing
- Dimensional Thinking
- Modeling
- Playing
- Transforming
- Synthesizing
Physics, painting, music, literature — different domains, identical cognitive operations. For 500 years.
That hit me: if these are truly universal, can AI use them too?
## The Problem With Orchestrators
Here's how every agent framework works:
```
Orchestrator decides → Tool A runs → Tool B runs → Combine results
```
It's a Western orchestra. The conductor points at the violins, then the cellos, then says "together now."
But that's not how creative thinking works. When Einstein had an insight, he didn't run "observe() first, then abstract(), then analogize()" in sequence. Everything fired simultaneously. Pattern recognition triggered analogy which triggered abstraction which fed back into observation.
The book calls this synthesis — not sequential processing, but simultaneous cognition.
So I asked: what if the "orchestrator" isn't a conductor at all?
## A Nervous System, Not a CEO
I went back to biology. Not the brain — the nervous system.
The difference matters:
| | Brain (CEO) | Nervous System |
|---|---|---|
| Role | Decides, commands | Senses, signals |
| Control | Top-down | Distributed |
| Without it | Nothing works | Things still work (reflexes) |
| Failure mode | Single point of failure | Graceful degradation |
A starfish has no brain. Five arms, no central controller. Yet it moves, eats, and coordinates perfectly. How? Each arm's local nervous system talks to its neighbors through simple signals. No commander. Emergent coordination.
An insect's legs walk without the brain telling each leg what to do. Local ganglia sense the ground and coordinate through local rules.
That's what I built.
## 17 Biological Principles in Code
I didn't just name it "nervous system" and use if-statements. I actually studied neuroscience and implemented real mechanisms:
### Threshold Firing
Real neurons don't fire at every input. Signals accumulate. When they cross a threshold: FIRE. Then reset. Then a refractory period where nothing can trigger them.
```python
from dataclasses import dataclass

def clamp(x: float, lo: float, hi: float) -> float:
    return max(lo, min(hi, x))

@dataclass
class SignalPotential:
    value: float = 0.0
    threshold: float = 0.6
    refractory: int = 0   # cooldown rounds remaining; counted down between rounds
    fired: bool = False

    def contribute(self, amount: float):
        # Excitation (+) or inhibition (-); ignored during the refractory period
        if self.refractory > 0:
            return
        self.value = clamp(self.value + amount, -1.0, 1.0)

    def check_fire(self) -> bool:
        if self.value >= self.threshold:
            self.fired = True
            self.value = 0.0     # reset
            self.refractory = 1  # cooldown
            return True
        return False
```
Not a boolean flag. Not a bare if-statement. A neuron.
### Habituation
Repeated stimulus → decreased response. The third time you see the same pattern type, it matters less. The tenth time, it's background noise.
```python
from math import log  # natural log

# Weight of the k-th sighting of a pattern; prior = k - 1 earlier sightings
weight = 1.0 if prior == 0 else 1.0 / (1.0 + log(prior))
```

1st: 1.0, 3rd: 0.59, 10th: 0.31
This automatically surfaces novel findings and suppresses repetition. No explicit filtering needed.
### Hebbian Plasticity
"Fire together, wire together." If tool A runs before tool B and the result is good, that A→B connection strengthens. Bad result? It weakens. Very weak connections get pruned entirely.
```python
if success:
    synapses["A→B"] = min(2.0, current * 1.1 + 0.05)  # LTP: strengthen
else:
    synapses["A→B"] = max(0.1, current * 0.9 - 0.05)  # LTD: weaken
```
The system literally rewires itself based on what works.
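The pruning step isn't shown in the snippet above. A minimal sketch might look like this — the `prune` helper and the 0.15 cutoff are my assumptions for illustration, not the project's actual API:

```python
def prune(synapses: dict, floor: float = 0.15) -> dict:
    # Drop connections whose weight has decayed near the minimum;
    # the 0.15 cutoff is an illustrative assumption.
    return {conn: w for conn, w in synapses.items() if w > floor}

print(prune({"A→B": 1.4, "B→C": 0.1}))  # {'A→B': 1.4}
```

A connection that keeps producing bad results decays toward the 0.1 floor and eventually falls out of the wiring entirely.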
### And 14 More...

- **Excitation/Inhibition:** every tool contributes positive or negative to every signal
- **Sensitization:** anomalies boost sensitivity in related channels
- **Reflexes:** budget exhaustion → immediate stop, no LLM call needed
- **Proprioception:** the system tracks its own activity, cost, momentum
- **Homeostasis:** overactive tools auto-inhibited, underactive tools boosted
- **Lateral Inhibition:** tools compete; most relevant wins, rest suppressed
- **Predictive Coding:** round 1 = prediction; round 2 processes only surprises
- **Neuromodulation:** dopamine (reward), norepinephrine (explore/exploit), acetylcholine (learning rate)
- **Autonomic Modes:** anomaly → sympathetic (explore wide); convergence → parasympathetic (integrate deep)
- **Sleep/Consolidation:** between rounds, prune noise, strengthen successful paths, merge duplicates
- **Distributed Control:** each tool has `should_run()` — decides locally, no central order
- **Signal Summation:** multiple tools contribute to one signal over time
- **Oscillation Groups:** tools grouped by rhythm (sensory/processing/integration/exploration)
- **Stochastic Resonance:** controlled noise helps detect weak signals
LangGraph has 0 of these. CrewAI has 0. AutoGPT has 0.
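To make the distributed-control principle concrete, here is a minimal sketch of tools deciding locally via `should_run()`. Everything in it — `SharedState`, the signal names, the 0.6 threshold — is illustrative, not the project's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class SharedState:
    # Signal levels that every tool can read and contribute to
    signals: dict = field(default_factory=dict)

@dataclass
class Tool:
    name: str
    listens_to: str
    threshold: float = 0.6

    def should_run(self, state: SharedState) -> bool:
        # Purely local decision: fire when the watched signal crosses threshold
        return state.signals.get(self.listens_to, 0.0) >= self.threshold

def tick(tools: list, state: SharedState) -> list:
    # No conductor: each tool inspects shared state and decides for itself
    return [t.name for t in tools if t.should_run(state)]

tools = [Tool("abstracting", "patterns"), Tool("analogizing", "anomaly")]
state = SharedState(signals={"patterns": 0.8, "anomaly": 0.2})
print(tick(tools, state))  # ['abstracting']
```

Note there is no run order anywhere: which tools fire falls out of the signal levels, the way a starfish arm's movement falls out of its neighbors' signals.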
## The Picasso Bull Method
The abstraction tool uses what I call the Picasso Bull method.
Picasso drew a bull 11 times. The first was realistic. Each iteration, he removed details. The final drawing: a few lines. The essence of a bull.
- Round 1: 20 patterns found
- Round 2: remove what's redundant → 12 patterns
- Round 3: remove what's subsumed → 5 principles
- Feynman test: "Remove one more. Does it break?" → 3 core laws
"Phenomena are complex. Laws are simple. Find out what to discard." — Feynman
The system does this automatically. Multiple rounds of progressive elimination until only what's essential remains.
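As a sketch, the elimination loop could be structured like this. In the real system each judgment would presumably be an LLM call; here they are stand-in arguments, and the `distill` name is mine:

```python
def distill(patterns, redundant, subsumed, still_holds, min_keep=3):
    # Round 2: remove what's redundant
    patterns = [p for p in patterns if p not in redundant]
    # Round 3: remove what's subsumed by a broader principle
    patterns = [p for p in patterns if p not in subsumed]
    # Feynman test: "Remove one more. Does it break?"
    while len(patterns) > min_keep and still_holds(patterns[:-1]):
        patterns = patterns[:-1]
    return patterns

# Toy run: 8 candidate patterns distilled down to 3
core = distill(list(range(8)), redundant={1, 2}, subsumed={3},
               still_holds=lambda ps: len(ps) >= 3)
print(core)  # [0, 4, 5]
```

The shape matters more than the details: each round only removes, never adds, so the output can't be more complicated than the input.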
## What It Actually Does

```bash
sparks run \
  --goal "Find the core architecture principles" \
  --data ./claude_code_analysis/ \
  --depth standard
```
I tested it on blog posts analyzing the Claude Code source leak. Three articles in, three architecture laws out:
**Verification Time Decay Law (88%)**

> "Cached state accuracy monotonically decreases after verification. Recomputation at high-risk boundaries is not waste — it's an accuracy premium."

**Evidence > Declared Intent Law (93%)**

> "A system's real priorities are encoded in resource allocation decisions. Implementation is truth, not documentation."

**Defense Incompatibility Law (82%)**

> "Effective isolation requires each layer's failure modes to be unreachable from adjacent layers. The number of separated foundations — not security layers — is the real metric."
Plus structural analogies to the Maginot Line and Chernobyl reactor design.
Not bad for 3 text files and $0.50.
## The Architecture

```
                 Goal + Data
                      │
╔════════════════════════════════════════════╗
║  LAYER 0: NERVOUS SYSTEM                   ║
║  17 biological principles                  ║
║  Senses state. Doesn't command.            ║
╠════════════════════════════════════════════╣
║  LAYER 1: 13 THINKING TOOLS                ║
║  Each runs autonomously.                   ║
║  No fixed order. No conductor.             ║
║  Tools sense shared state and decide.      ║
╠════════════════════════════════════════════╣
║  LAYER 2: AI AUGMENTATION                  ║
║  Strategic forgetting                      ║
║  Contradiction holding                     ║
║  Predictive coding                         ║
╚════════════════════════════════════════════╝
                      │
           Core Principles + Evidence
```
## Being Honest: The Ablation Test
I ran the same analysis with the nervous system ON and OFF.
| Metric | Nervous ON | Nervous OFF |
|---|---|---|
| Confidence | 65% | 82% |
| Coverage | 75% | 75% |
| Cost | $2.28 | $2.46 |
On this small dataset, the simple pipeline produced slightly higher confidence scores. I'm publishing this because I think honesty matters more than marketing.
But here's what I believe: the nervous system is an investment in long-term learning. Habituation, plasticity, consolidation — these compound over time. On 3 files and 2 rounds, you don't see it. On 50 files and 10 rounds, you would.
Like a real nervous system — you don't notice it working until it's gone.
## The Part That Blows My Mind
Every line of code was written by Claude Code.
Not "I wrote most of it and Claude helped." Zero lines typed by a human. I directed. I decided. I said "this is wrong" and "this is right." But the actual code? All AI.
22 design documents. 3,300 lines of Python. 17 biological principles. Working demo. One day.
I think we're entering an era where what you think matters more than what you code. The 1999 book was right — the thinking tools are universal. They work for humans, and now they work for AI too.
## Try It

```bash
git clone https://github.com/PROVE1352/cognitive-sparks
cd cognitive-sparks
pip install -e .
sparks run --goal "Find the core principles" --data ./demo/claude_code_posts/ --depth quick
```
Works with Claude Code CLI (free with subscription) or Anthropic API.
MIT License. Open source. Contributions welcome — especially the 6 unimplemented thinking tools.
GitHub: PROVE1352/cognitive-sparks
Built entirely with Claude Code. The framework that thinks about thinking — was thought into existence.