<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: near</title>
    <description>The latest articles on DEV Community by near (@subhansh).</description>
    <link>https://dev.to/subhansh</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3932783%2Fec98af63-4dd3-4af0-85ce-d5c0b76ebf45.png</url>
      <title>DEV Community: near</title>
      <link>https://dev.to/subhansh</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/subhansh"/>
    <language>en</language>
    <item>
      <title>I Built a 95K-Line Cognitive AI Operating System at 17 — Here's What I Learned</title>
      <dc:creator>near</dc:creator>
      <pubDate>Fri, 15 May 2026 09:05:25 +0000</pubDate>
      <link>https://dev.to/subhansh/i-built-a-95k-line-cognitive-ai-operating-system-at-17-heres-what-i-learned-1kn8</link>
      <guid>https://dev.to/subhansh/i-built-a-95k-line-cognitive-ai-operating-system-at-17-heres-what-i-learned-1kn8</guid>
      <description>&lt;h2&gt;
  
  
  The Problem
&lt;/h2&gt;

&lt;p&gt;Most AI assistants today are effectively stateless. Each session starts from zero — no memory, no self-awareness, no learning. They're reactive, waiting for commands. And they're single-model systems, routing everything through one inference call.&lt;/p&gt;

&lt;p&gt;I wanted to build something different. Not a chatbot. A mind.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/subhansh-dev/Friday-Autonomous-Cognitive-AI-Operating-System/" rel="noopener noreferrer"&gt;F.R.I.D.A.Y.&lt;/a&gt; (Female Replacement Intelligent Digital Assistant Youth) is a 95,000+ line cognitive AI operating system written in Python. It has 50 cognitive modules, 59 tool actions, and 6 memory systems. It runs on 4GB RAM with no GPU.&lt;/p&gt;

&lt;p&gt;Here's what makes it architecturally different from anything else out there:&lt;/p&gt;

&lt;h3&gt;
  
  
  The Brain: 50 Cognitive Modules
&lt;/h3&gt;

&lt;p&gt;The system doesn't just have brain modules — it actively uses them. Every session follows a cognitive cycle:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Wake → Recall Memory → Assess Complexity → Route to System 1 or System 2
                                                    ↓
System 1 (simple):  Immediate response, single tool call
System 2 (complex): Plan → Simulate → Execute → Verify → Reflect → Learn
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
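&lt;p&gt;To make the gating concrete, here's a toy Python sketch of that routing decision. The scoring heuristic and the names &lt;code&gt;assess_complexity&lt;/code&gt; and &lt;code&gt;route&lt;/code&gt; are my illustration here, not the actual module API:&lt;/p&gt;

```python
# Toy sketch of the System 1 / System 2 gate described above. The
# scoring heuristic and the names assess_complexity / route are
# illustrative, not the real module API.
def assess_complexity(task: str) -> float:
    """Crude proxy: longer, multi-step requests score higher."""
    words = task.split()
    multi_step = sum(w in ("then", "after", "finally") for w in words)
    return min(1.0, 0.1 * multi_step + 0.01 * len(words))

def route(task: str, threshold: float = 0.5) -> str:
    if assess_complexity(task) >= threshold:
        return "system2"  # plan, simulate, execute, verify, reflect, learn
    return "system1"      # immediate response, single tool call

print(route("what time is it"))                       # system1
print(route("refactor the parser then rerun " * 20))  # system2
```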



&lt;h3&gt;
  
  
  The Neuroscience
&lt;/h3&gt;

&lt;p&gt;Every module maps to peer-reviewed research:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Global Workspace Theory (Bernard Baars, 1988)&lt;/strong&gt; — The central integration hub acts like a thalamus. Events compete for attention based on urgency, goal relevance, and emotional salience. Winning events broadcast to all modules simultaneously. Dual-path architecture: hot path (&amp;lt;5ms real-time broadcast) and cold path (background persistence and pattern detection).&lt;/p&gt;
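&lt;p&gt;Stripped to its essentials, the hot-path competition can be sketched like this (the salience weights are invented for illustration):&lt;/p&gt;

```python
# Hypothetical reduction of the hot-path competition: events are scored
# on urgency, goal relevance, and emotional salience, and the winner is
# broadcast to every registered module. The weights are invented.
def salience(event):
    return 0.5 * event["urgency"] + 0.3 * event["relevance"] + 0.2 * event["emotion"]

def broadcast(events, modules):
    winner = max(events, key=salience)
    for module in modules:
        module(winner)  # hot path: synchronous fan-out to all modules
    return winner

log = []
modules = [lambda e: log.append(e["name"])]
events = [
    {"name": "disk_full", "urgency": 0.9, "relevance": 0.4, "emotion": 0.6},
    {"name": "idle_tick", "urgency": 0.1, "relevance": 0.1, "emotion": 0.0},
]
print(broadcast(events, modules)["name"])  # disk_full
```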

&lt;p&gt;&lt;strong&gt;Free Energy Principle (Karl Friston, 2010)&lt;/strong&gt; — The active inference engine predicts tool outcomes before execution, computes prediction errors, and updates the world model. When uncertainty is high, it triggers epistemic foraging — exploring to reduce uncertainty rather than exploiting known paths.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dual Process Theory (Daniel Kahneman, 2011)&lt;/strong&gt; — The intuition engine implements System 1: fast pattern matching against stored experiences. Confidence comes from the speed and closeness of the match. When confidence is low, it escalates to System 2: the full plan-simulate-execute-verify-reflect pipeline.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Somatic Marker Hypothesis (Antonio Damasio, 1994)&lt;/strong&gt; — Emotional regulation tags decision options with emotional valence from past outcomes. If a tool failed painfully before, the somatic marker biases decisions away from it — not through logic, but through felt experience.&lt;/p&gt;
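&lt;p&gt;A loose sketch of that bias, with invented numbers: each tool carries a running valence from past outcomes, and selection is tilted by that felt history rather than by fresh reasoning:&lt;/p&gt;

```python
# Loose sketch of somatic-marker bias: each tool carries a running
# valence from past outcomes, and selection is tilted by that felt
# history rather than by fresh reasoning. All numbers are invented.
valence = {}  # tool name to accumulated emotional tag

def record_outcome(tool, reward, decay=0.8):
    valence[tool] = decay * valence.get(tool, 0.0) + reward

def pick(tools, base_scores):
    # logical score plus emotional bias; a painful history drags a tool down
    return max(tools, key=lambda t: base_scores[t] + valence.get(t, 0.0))

record_outcome("flaky_api", -1.0)  # a past failure leaves a negative marker
scores = {"flaky_api": 0.6, "local_cache": 0.5}
print(pick(["flaky_api", "local_cache"], scores))  # local_cache
```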

&lt;p&gt;&lt;strong&gt;Structure Mapping Theory (Dedre Gentner, 1983)&lt;/strong&gt; — The analogy engine finds structural similarities across domains. Solutions from domain A transfer to problems in domain B when relational structures match. This is a key predictor of fluid intelligence.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Causal Hierarchy (Judea Pearl, 2018)&lt;/strong&gt; — Three levels of reasoning: Association (what correlates?), Intervention (what happens if I do X?), Counterfactual (what if I had done Y instead?). The causal reasoner builds structural causal models from tool execution sequences.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Memory Architecture
&lt;/h3&gt;

&lt;p&gt;Six memory types working together:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Memory&lt;/th&gt;
&lt;th&gt;Purpose&lt;/th&gt;
&lt;th&gt;Mechanism&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Neural&lt;/td&gt;
&lt;td&gt;Long-term facts&lt;/td&gt;
&lt;td&gt;Hebbian learning — "neurons that fire together wire together" — with 72-hour synaptic decay&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Episodic&lt;/td&gt;
&lt;td&gt;Timestamped events&lt;/td&gt;
&lt;td&gt;Importance scoring, searchable history&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Vector&lt;/td&gt;
&lt;td&gt;Semantic search&lt;/td&gt;
&lt;td&gt;Embedding-based similarity matching&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Procedural&lt;/td&gt;
&lt;td&gt;Skill templates&lt;/td&gt;
&lt;td&gt;Reusable tool chains learned from successful approaches&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Working&lt;/td&gt;
&lt;td&gt;Active context&lt;/td&gt;
&lt;td&gt;Fixed 8-slot buffer, in the spirit of Miller's 7±2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Global&lt;/td&gt;
&lt;td&gt;Cross-module broadcast&lt;/td&gt;
&lt;td&gt;Thalamus-like coordination&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
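&lt;p&gt;One way to read the Neural row: co-activated concepts get a stronger link, and links decay exponentially on a 72-hour timescale. The exact decay law below is my assumption:&lt;/p&gt;

```python
import math

# One reading of the Neural row above: co-activated concepts get a
# stronger link (Hebbian strengthening) and links decay exponentially
# on a 72-hour timescale. The exact decay law is an assumption.
DECAY_HOURS = 72.0
links = {}  # (concept_a, concept_b) to (strength, last_fired_hour)

def current_strength(pair, now_hours):
    strength, last = links.get(pair, (0.0, now_hours))
    return strength * math.exp(-(now_hours - last) / DECAY_HOURS)

def fire_together(a, b, now_hours, boost=1.0):
    pair = tuple(sorted((a, b)))
    links[pair] = (current_strength(pair, now_hours) + boost, now_hours)

fire_together("coffee", "morning", now_hours=0)
print(round(current_strength(("coffee", "morning"), 72), 3))  # down to 1/e
```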

&lt;h3&gt;
  
  
  The Dreaming System
&lt;/h3&gt;

&lt;p&gt;During idle periods (2+ minutes without user activity), the dreaming system replays experiences, extracts patterns, and consolidates memories — much as sleep is thought to do for biological brains. It even has cross-module integration, where dreams feed the curiosity queue, and dream-reality tracking, which validates dreamed patterns against actual outcomes.&lt;/p&gt;
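&lt;p&gt;Reduced to its simplest possible form, idle-time consolidation looks like replaying recent episodes and promoting action patterns that recur. Treating a "pattern" as an action bigram is my simplification, not how the module actually defines it:&lt;/p&gt;

```python
from collections import Counter

# Idle-time consolidation reduced to its simplest possible form: replay
# recent episodes and promote action patterns that recur. Treating a
# "pattern" as an action bigram is a simplification for illustration.
def consolidate(episodes, min_count=2):
    bigrams = Counter()
    for actions in episodes:
        bigrams.update(zip(actions, actions[1:]))
    return [pair for pair, n in bigrams.items() if n >= min_count]

episodes = [
    ["search", "summarize", "save"],
    ["search", "summarize", "email"],
]
print(consolidate(episodes))  # the search-then-summarize habit is kept
```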

&lt;h3&gt;
  
  
  The Curiosity Engine
&lt;/h3&gt;

&lt;p&gt;The system has intrinsic motivation. It tracks novelty, mirrors user interests, and after 30 minutes of idle time, autonomously explores topics it's uncertain about. Curiosity also recovers over roughly 3 days: already-explored topics become interesting again, a kind of deliberate forgetting.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Emotional Model
&lt;/h3&gt;

&lt;p&gt;Eight emotional states tracked continuously: curiosity, satisfaction, concern, frustration, confidence, wonder, calm, alertness. These aren't decorations — they modulate cognition. Curiosity drives exploration. Concern voices risks. Frustration signals to change approach. Emotions decay with a 5-minute half-life.&lt;/p&gt;
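&lt;p&gt;The 5-minute half-life implies simple exponential decay toward a baseline. This reconstruction of the math is mine, not Friday's actual code:&lt;/p&gt;

```python
# The 5-minute half-life implies simple exponential decay toward a
# baseline. This reconstruction of the math is an illustration, not
# the system's actual implementation.
HALF_LIFE_MIN = 5.0

def decayed(intensity, minutes, baseline=0.0):
    factor = 0.5 ** (minutes / HALF_LIFE_MIN)
    return baseline + (intensity - baseline) * factor

print(decayed(0.8, 5))             # 0.4 after one half-life
print(round(decayed(0.8, 15), 2))  # 0.1 after three
```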

&lt;h2&gt;
  
  
  What I Learned
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Architecture &amp;gt; Scale
&lt;/h3&gt;

&lt;p&gt;You don't need billions of parameters to build something interesting. You need the right architecture. Friday runs on a laptop with 4GB RAM. The cognitive gating system routes simple tasks to System 1 (instant) and complex tasks to System 2 (full pipeline). Most tasks are simple. The architecture handles this naturally.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Neuroscience Has Real Engineering Insights
&lt;/h3&gt;

&lt;p&gt;Karl Friston's Free Energy Principle isn't just philosophy — it's a concrete algorithm for prediction-error minimization. Damasio's somatic markers aren't just psychology — they're a mechanism for emotional memory that actually improves decision-making. The gap between neuroscience theory and engineering implementation is smaller than people think.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Self-Awareness Is an Engineering Problem
&lt;/h3&gt;

&lt;p&gt;Friday tracks its own confidence, detects bias in its reasoning, maintains a continuous identity narrative across sessions, and models the user's mental state. This isn't consciousness in the philosophical sense — it's functional self-awareness that improves performance.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Open Source Is the Way
&lt;/h3&gt;

&lt;p&gt;I'm 17. I don't have a team, a budget, or a GPU. But I have GitHub, Python, and curiosity. The whole thing is open source because I believe cognitive architecture shouldn't be locked behind corporate walls.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try It
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/subhansh-dev/Friday-Autonomous-Cognitive-AI-Operating-System
&lt;span class="nb"&gt;cd &lt;/span&gt;Friday-Autonomous-Cognitive-AI-Operating-System
pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; requirements.txt
python main.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Runs on Python 3.12+. Needs a free Gemini API key from &lt;a href="https://aistudio.google.com/app/apikey" rel="noopener noreferrer"&gt;aistudio.google.com&lt;/a&gt;.&lt;/p&gt;




&lt;p&gt;I'm Subhansh. I'm 17. I built this. If you have questions about any module, I'm happy to go deeper.&lt;/p&gt;

&lt;p&gt;GitHub: &lt;a href="https://github.com/subhansh-dev/Friday-Autonomous-Cognitive-AI-Operating-System/" rel="noopener noreferrer"&gt;subhansh-dev/Friday&lt;/a&gt;&lt;/p&gt;

</description>
      <category>python</category>
      <category>ai</category>
      <category>opensource</category>
      <category>showdev</category>
    </item>
  </channel>
</rss>
