<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: ‍경규찬</title>
    <description>The latest articles on DEV Community by ‍경규찬 (@kyuchan).</description>
    <link>https://dev.to/kyuchan</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3860272%2Fb7b3bc96-80e7-4a3f-bc16-a181d0d76447.png</url>
      <title>DEV Community: ‍경규찬</title>
      <link>https://dev.to/kyuchan</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/kyuchan"/>
    <language>en</language>
    <item>
      <title>I Read a 1999 Book About How Geniuses Think. Then I Built an AI Framework With a Real Nervous System.</title>
      <dc:creator>‍경규찬</dc:creator>
      <pubDate>Sat, 04 Apr 2026 01:37:13 +0000</pubDate>
      <link>https://dev.to/kyuchan/i-read-a-1999-book-about-how-geniuses-think-then-i-built-an-ai-framework-with-a-real-nervous-3h8b</link>
      <guid>https://dev.to/kyuchan/i-read-a-1999-book-about-how-geniuses-think-then-i-built-an-ai-framework-with-a-real-nervous-3h8b</guid>
      <description>&lt;p&gt;Every AI agent framework has the same architecture: a conductor telling tools what to do.&lt;/p&gt;

&lt;p&gt;LangGraph: state machine orchestrator. CrewAI: manager assigns tasks. AutoGPT: loop until done.&lt;/p&gt;

&lt;p&gt;They all assume someone has to be in charge.&lt;/p&gt;

&lt;p&gt;I built one where nobody is in charge. And it works better than I expected.&lt;/p&gt;

&lt;p&gt;It Started With a Book&lt;br&gt;
"Sparks of Genius" by Robert and Michele Root-Bernstein (1999). Not an AI book. Not even a CS book. A book about how creative people think.&lt;/p&gt;

&lt;p&gt;They studied Einstein, Picasso, da Vinci, Feynman, Nabokov, Virginia Woolf — people from completely different fields. And found something weird:&lt;/p&gt;

&lt;p&gt;They all use the same 13 thinking tools.&lt;/p&gt;

&lt;p&gt;Observing&lt;br&gt;
Imaging&lt;br&gt;
Abstracting&lt;br&gt;
Recognizing Patterns&lt;br&gt;
Forming Patterns&lt;br&gt;
Analogizing&lt;br&gt;
Body Thinking&lt;br&gt;
Empathizing&lt;br&gt;
Dimensional Thinking&lt;br&gt;
Modeling&lt;br&gt;
Playing&lt;br&gt;
Transforming&lt;br&gt;
Synthesizing&lt;br&gt;
Physics, painting, music, literature — different domains, identical cognitive operations. For 500 years.&lt;/p&gt;

&lt;p&gt;That hit me: if these are truly universal, can AI use them too?&lt;/p&gt;

&lt;p&gt;The Problem With Orchestrators&lt;br&gt;
Here's how every agent framework works:&lt;/p&gt;

&lt;p&gt;Orchestrator decides → Tool A runs → Tool B runs → Combine results&lt;br&gt;
It's a Western orchestra. The conductor points at the violins, then the cellos, then says "together now."&lt;/p&gt;

&lt;p&gt;But that's not how creative thinking works. When Einstein had an insight, he didn't run "observe() first, then abstract(), then analogize()" in sequence. Everything fired simultaneously. Pattern recognition triggered analogy which triggered abstraction which fed back into observation.&lt;/p&gt;

&lt;p&gt;The book calls this synthesis — not sequential processing, but simultaneous cognition.&lt;/p&gt;

&lt;p&gt;So I asked: what if the "orchestrator" isn't a conductor at all?&lt;/p&gt;

&lt;p&gt;A Nervous System, Not a CEO&lt;br&gt;
I went back to biology. Not the brain — the nervous system.&lt;/p&gt;

&lt;p&gt;The difference matters:&lt;/p&gt;

&lt;table&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th&gt;&lt;/th&gt;&lt;th&gt;Brain (CEO)&lt;/th&gt;&lt;th&gt;Nervous System&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;Role&lt;/td&gt;&lt;td&gt;Decides, commands&lt;/td&gt;&lt;td&gt;Senses, signals&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Control&lt;/td&gt;&lt;td&gt;Top-down&lt;/td&gt;&lt;td&gt;Distributed&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Without it&lt;/td&gt;&lt;td&gt;Nothing works&lt;/td&gt;&lt;td&gt;Things still work (reflexes)&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Failure mode&lt;/td&gt;&lt;td&gt;Single point of failure&lt;/td&gt;&lt;td&gt;Graceful degradation&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;A starfish has no brain. Five arms, no central controller. Yet it moves, eats, and coordinates perfectly. How? Each arm's local nervous system talks to its neighbors through simple signals. No commander. Emergent coordination.&lt;/p&gt;

&lt;p&gt;An insect's legs walk without the brain telling each leg what to do. Local ganglia sense the ground and coordinate through local rules.&lt;/p&gt;

&lt;p&gt;That's what I built.&lt;/p&gt;

&lt;p&gt;17 Biological Principles in Code&lt;br&gt;
I didn't just name it "nervous system" and use if-statements. I actually studied neuroscience and implemented real mechanisms:&lt;/p&gt;

&lt;p&gt;Threshold Firing&lt;/p&gt;

&lt;p&gt;Real neurons don't fire at every input. Signals accumulate. When they cross a threshold: FIRE. Then reset. Then a refractory period where nothing can trigger them.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class SignalPotential:
    def __init__(self):
        self.value: float = 0.0       # accumulated potential
        self.threshold: float = 0.6   # firing threshold
        self.refractory: int = 0      # cooldown rounds remaining
        self.fired: bool = False

    def contribute(self, amount: float):
        # Excitation (+) or inhibition (-); ignored during refractory period
        if self.refractory &amp;gt; 0:
            return
        self.value = max(-1.0, min(1.0, self.value + amount))

    def check_fire(self) -&amp;gt; bool:
        if self.value &amp;gt;= self.threshold:
            self.fired = True
            self.value = 0.0        # reset
            self.refractory = 1     # cooldown
            return True
        return False
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Not a boolean flag. Not a one-shot threshold check. A neuron: it accumulates, fires, resets, and rests.&lt;/p&gt;
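&lt;p&gt;To make the accumulate, fire, reset cycle concrete, here is a minimal standalone sketch. The class is restated so the snippet runs on its own, and the &lt;code&gt;tick()&lt;/code&gt; method is my assumption for how the refractory period decays between rounds; the original code doesn't show that part.&lt;/p&gt;

```python
class SignalPotential:
    def __init__(self, threshold=0.6):
        self.value = 0.0
        self.threshold = threshold
        self.refractory = 0

    def contribute(self, amount):
        # Inputs accumulate; ignored while in the refractory period
        if self.refractory > 0:
            return
        self.value = max(-1.0, min(1.0, self.value + amount))

    def check_fire(self):
        if self.value >= self.threshold:
            self.value = 0.0        # reset after firing
            self.refractory = 1     # cooldown
            return True
        return False

    def tick(self):
        # Assumed: one round passes, cooldown decays
        self.refractory = max(0, self.refractory - 1)

sig = SignalPotential()
sig.contribute(0.4)            # below threshold: no fire
assert not sig.check_fire()
sig.contribute(0.3)            # 0.4 + 0.3 = 0.7 crosses 0.6
assert sig.check_fire()
sig.contribute(0.9)            # ignored: refractory period
assert not sig.check_fire()
sig.tick()                     # cooldown over
sig.contribute(0.7)
assert sig.check_fire()
```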

&lt;p&gt;Habituation&lt;/p&gt;

&lt;p&gt;Repeated stimulus → decreased response. The third time you see the same pattern type, it matters less. The tenth time, it's background noise.&lt;/p&gt;

&lt;p&gt;weight = 1.0 / (1.0 + log(times_seen))  # natural log: 1st → 1.00, 3rd → 0.48, 10th → 0.30&lt;/p&gt;

&lt;p&gt;This automatically surfaces novel findings and suppresses repetition. No explicit filtering needed.&lt;/p&gt;
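&lt;p&gt;A minimal sketch of that weighting, assuming a natural log and a counter keyed by pattern type (both are my reading of the formula above, not the framework's actual code):&lt;/p&gt;

```python
import math
from collections import defaultdict

class Habituation:
    """Repeated stimulus -> decreased response weight."""
    def __init__(self):
        self.seen = defaultdict(int)

    def weight(self, pattern_type: str) -> float:
        self.seen[pattern_type] += 1
        return 1.0 / (1.0 + math.log(self.seen[pattern_type]))

hab = Habituation()
w1 = hab.weight("duplicate-check")   # 1st sighting: full weight 1.0
hab.weight("duplicate-check")
w3 = hab.weight("duplicate-check")   # 3rd sighting: ~0.48
print(round(w1, 2), round(w3, 2))    # 1.0 0.48
```

&lt;p&gt;Novel pattern types start at full weight; familiar ones fade toward background noise without any explicit filter.&lt;/p&gt;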

&lt;p&gt;Hebbian Plasticity&lt;/p&gt;

&lt;p&gt;"Fire together, wire together." If tool A runs before tool B and the result is good, that A→B connection strengthens. Bad result? It weakens. Very weak connections get pruned entirely.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;if success:
    synapses["A→B"] = min(2.0, current * 1.1 + 0.05)  # LTP
else:
    synapses["A→B"] = max(0.1, current * 0.9 - 0.05)  # LTD
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The system literally rewires itself based on what works.&lt;/p&gt;
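&lt;p&gt;A runnable sketch of that update rule with pruning added. The prune threshold (0.15) is an illustrative assumption; the article only says very weak connections get pruned entirely, without giving a number.&lt;/p&gt;

```python
class SynapseMap:
    """Hebbian plasticity over tool-to-tool connections (sketch)."""
    def __init__(self, prune_below=0.15):   # assumed threshold
        self.weights = {}
        self.prune_below = prune_below

    def update(self, pre: str, post: str, success: bool):
        key = (pre, post)
        current = self.weights.get(key, 1.0)
        if success:
            # Long-term potentiation: fire together, wire together
            self.weights[key] = min(2.0, current * 1.1 + 0.05)
        else:
            # Long-term depression: weaken on failure
            self.weights[key] = max(0.1, current * 0.9 - 0.05)
            if self.weights[key] <= self.prune_below:
                del self.weights[key]   # rewire: drop the connection

syn = SynapseMap()
for _ in range(3):
    syn.update("observe", "abstract", success=True)
# repeated success strengthens observe→abstract above the 1.0 baseline
```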

&lt;p&gt;And 14 More...&lt;/p&gt;

&lt;p&gt;Excitation/Inhibition: Every tool contributes positive or negative to every signal&lt;br&gt;
Sensitization: Anomalies boost sensitivity in related channels&lt;br&gt;
Reflexes: Budget exhaustion → immediate stop, no LLM call needed&lt;br&gt;
Proprioception: System tracks its own activity, cost, momentum&lt;br&gt;
Homeostasis: Overactive tools auto-inhibited, underactive tools boosted&lt;br&gt;
Lateral Inhibition: Tools compete; most relevant wins, rest suppressed&lt;br&gt;
Predictive Coding: Round 1 = prediction; Round 2 processes only surprises&lt;br&gt;
Neuromodulation: Dopamine (reward), norepinephrine (explore/exploit), acetylcholine (learning rate)&lt;br&gt;
Autonomic Modes: Anomaly → sympathetic (explore wide); Convergence → parasympathetic (integrate deep)&lt;br&gt;
Sleep/Consolidation: Between rounds, prune noise, strengthen successful paths, merge duplicates&lt;br&gt;
Distributed Control: Each tool has should_run() — decides locally, no central order&lt;br&gt;
Signal Summation: Multiple tools contribute to one signal over time&lt;br&gt;
Oscillation Groups: Tools grouped by rhythm (sensory/processing/integration/exploration)&lt;br&gt;
Stochastic Resonance: Controlled noise helps detect weak signals&lt;/p&gt;

&lt;p&gt;LangGraph has 0 of these. CrewAI has 0. AutoGPT has 0.&lt;/p&gt;
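&lt;p&gt;The distributed-control item above (each tool deciding locally via &lt;code&gt;should_run()&lt;/code&gt;) could look roughly like this. The &lt;code&gt;SharedState&lt;/code&gt; fields and signal names here are hypothetical, chosen only to illustrate the pattern:&lt;/p&gt;

```python
from dataclasses import dataclass, field

@dataclass
class SharedState:
    # Hypothetical shared blackboard the tools sense; field names are
    # illustrative, not from the actual framework.
    signals: dict = field(default_factory=dict)   # signal name -> strength
    budget_left: float = 1.0

class AnalogizingTool:
    """A tool decides locally whether to run; nothing orders it to."""
    def should_run(self, state: SharedState) -> bool:
        # Reflex: budget exhaustion stops everything, no LLM call needed
        if state.budget_left <= 0:
            return False
        # Run only when pattern signals are strong enough to map across domains
        return state.signals.get("patterns_found", 0.0) >= 0.5

state = SharedState(signals={"patterns_found": 0.7}, budget_left=0.4)
tool = AnalogizingTool()
assert tool.should_run(state)
state.budget_left = 0.0
assert not tool.should_run(state)   # reflex stop, no central order needed
```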

&lt;p&gt;The Picasso Bull Method&lt;br&gt;
The abstraction tool uses what I call the Picasso Bull method.&lt;/p&gt;

&lt;p&gt;Picasso drew a bull 11 times. The first was realistic. Each iteration, he removed details. The final drawing: a few lines. The essence of a bull.&lt;/p&gt;

&lt;p&gt;Round 1: 20 patterns found&lt;br&gt;
Round 2: Remove what's redundant → 12 patterns&lt;br&gt;
Round 3: Remove what's subsumed → 5 principles&lt;br&gt;
Feynman test: "Remove one more. Does it break?" → 3 core laws&lt;/p&gt;

&lt;p&gt;"Phenomena are complex. Laws are simple. Find out what to discard." — Feynman&lt;/p&gt;

&lt;p&gt;The system does this automatically. Multiple rounds of progressive elimination until only what's essential remains.&lt;/p&gt;
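&lt;p&gt;The rounds above can be sketched as successive cuts with a rising bar. The pattern names, scores, and cut-offs here are all hypothetical; the real tool presumably uses LLM judgments of redundancy and subsumption rather than fixed numbers.&lt;/p&gt;

```python
def picasso_rounds(patterns, essential_score):
    """Progressive elimination until only the essential survives.

    `patterns` maps pattern name -> importance score; the cut-offs
    (redundant, subsumed, Feynman test) are illustrative assumptions.
    """
    survivors = dict(patterns)
    for cutoff in (0.3, 0.5, essential_score):
        survivors = {name: s for name, s in survivors.items() if s >= cutoff}
    return sorted(survivors, key=survivors.get, reverse=True)

patterns = {
    "cache-decay": 0.88, "evidence-over-intent": 0.93, "defense-isolation": 0.82,
    "logging-style": 0.20, "retry-loops": 0.45, "naming": 0.35, "token-budget": 0.60,
}
core = picasso_rounds(patterns, essential_score=0.8)
print(core)   # the three highest-scoring candidates survive
```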

&lt;p&gt;What It Actually Does&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sparks run \
  --goal "Find the core architecture principles" \
  --data ./claude_code_analysis/ \
  --depth standard
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;I tested it on blog posts analyzing the Claude Code source leak. Three articles in, three architecture laws out:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Verification Time Decay Law (88%) "Cached state accuracy monotonically decreases after verification. Recomputation at high-risk boundaries is not waste — it's an accuracy premium."&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Evidence &amp;gt; Declared Intent Law (93%) "A system's real priorities are encoded in resource allocation decisions. Implementation is truth, not documentation."&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Defense Incompatibility Law (82%) "Effective isolation requires each layer's failure modes to be unreachable from adjacent layers. The number of separated foundations — not security layers — is the real metric."&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Plus structural analogies to the Maginot Line and Chernobyl reactor design.&lt;/p&gt;

&lt;p&gt;Not bad for 3 text files and $0.50.&lt;/p&gt;

&lt;p&gt;The Architecture&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;                 Goal + Data
                     │
╔════════════════════════════════════════════╗
║  LAYER 0: NERVOUS SYSTEM                   ║
║  17 biological principles                  ║
║  Senses state. Doesn't command.            ║
╠════════════════════════════════════════════╣
║  LAYER 1: 13 THINKING TOOLS                ║
║  Each runs autonomously.                   ║
║  No fixed order. No conductor.             ║
║  Tools sense shared state and decide.      ║
╠════════════════════════════════════════════╣
║  LAYER 2: AI AUGMENTATION                  ║
║  Strategic forgetting                      ║
║  Contradiction holding                     ║
║  Predictive coding                         ║
╚════════════════════════════════════════════╝
                     │
         Core Principles + Evidence
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Being Honest: The Ablation Test&lt;br&gt;
I ran the same analysis with the nervous system ON and OFF.&lt;/p&gt;

&lt;table&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th&gt;Metric&lt;/th&gt;&lt;th&gt;Nervous ON&lt;/th&gt;&lt;th&gt;Nervous OFF&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;Confidence&lt;/td&gt;&lt;td&gt;65%&lt;/td&gt;&lt;td&gt;82%&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Coverage&lt;/td&gt;&lt;td&gt;75%&lt;/td&gt;&lt;td&gt;75%&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Cost&lt;/td&gt;&lt;td&gt;$2.28&lt;/td&gt;&lt;td&gt;$2.46&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;On this small dataset, the simple pipeline produced slightly higher confidence scores. I'm publishing this because I think honesty matters more than marketing.&lt;/p&gt;

&lt;p&gt;But here's what I believe: the nervous system is an investment in long-term learning. Habituation, plasticity, consolidation — these compound over time. On 3 files and 2 rounds, you don't see it. On 50 files and 10 rounds, you would.&lt;/p&gt;

&lt;p&gt;Like a real nervous system — you don't notice it working until it's gone.&lt;/p&gt;

&lt;p&gt;The Part That Blows My Mind&lt;br&gt;
Every line of code was written by Claude Code.&lt;/p&gt;

&lt;p&gt;Not "I wrote most of it and Claude helped." Zero lines typed by a human. I directed. I decided. I said "this is wrong" and "this is right." But the actual code? All AI.&lt;/p&gt;

&lt;p&gt;22 design documents. 3,300 lines of Python. 17 biological principles. Working demo. One day.&lt;/p&gt;

&lt;p&gt;I think we're entering an era where what you think matters more than what you code. The 1999 book was right — the thinking tools are universal. They work for humans, and now they work for AI too.&lt;/p&gt;

&lt;p&gt;Try It&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/PROVE1352/cognitive-sparks
cd cognitive-sparks
pip install -e .
sparks run --goal "Find the core principles" --data ./demo/claude_code_posts/ --depth quick
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Works with Claude Code CLI (free with subscription) or the Anthropic API.&lt;/p&gt;

&lt;p&gt;MIT License. Open source. Contributions welcome — especially the 6 unimplemented thinking tools.&lt;/p&gt;

&lt;p&gt;GitHub: PROVE1352/cognitive-sparks&lt;/p&gt;

&lt;p&gt;Built entirely with Claude Code. The framework that thinks about thinking — was thought into existence.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>opensource</category>
      <category>machinelearning</category>
      <category>programming</category>
    </item>
  </channel>
</rss>
