"Building code is only half the battle; maintaining it is the other half."
Working with AI agents in 2026 means code is often generated faster than our human "architectural map" can keep up with it. Last month I noticed my project, Shortshub, was suffering from "architectural drift" because agents had no clear boundary marking where one feature ended and another began.
To solve this, I’ve moved away from a standard monolithic structure and built a fully decoupled Control Plane (running on port 5004). Here’s the breakdown of my experimental "Fractal Kernel" approach. Check the video below.
The Problem: The "Hallucination Spread"
When an AI agent has too much context, it starts making "creative" (wrong) assumptions about global state. When it has too little, it breaks dependencies. I wanted a way to give agents exactly the context they need and nothing more.
Core Architectural Pillars
1. The Fractal Kernel Manifest (Experimental)
The foundation of the repo. Every feature lives in its own "cell" with a strict .manifest file.
How it works: The Kernel auto-discovers these at boot.
The Goal: It makes the codebase "Agent-Native." Instead of scanning 100 files, the agent reads one manifest to understand the "cell" boundaries. (Roughly 80% working.)
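To make the idea concrete, here is a minimal sketch of what boot-time auto-discovery could look like. The post doesn't specify the .manifest schema, so the JSON fields (name, entry, deps) and the function name are my assumptions, not the actual Fractal Kernel code.

```python
"""Sketch: auto-discovering feature 'cells' at kernel boot.

Assumption: each .manifest is a small JSON file describing one cell.
"""
import json
from pathlib import Path


def discover_cells(root: str) -> dict:
    """Walk the repo and load every cell's .manifest into a registry."""
    registry = {}
    for manifest_path in Path(root).rglob(".manifest"):
        cell = json.loads(manifest_path.read_text())
        registry[cell["name"]] = {
            "dir": str(manifest_path.parent),  # the cell's boundary
            "entry": cell.get("entry"),        # hypothetical entry point
            "deps": cell.get("deps", []),      # other cells it may touch
        }
    return registry
```

An agent asked to work on one feature then only needs to read that cell's manifest (and maybe its declared deps) instead of the whole tree.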
2. The Runtime Kill-Switch (Modular Isolation)
This is my favorite "safety" feature. Features are organized into toggleable Feature Cards.
The Value: If an AI-generated feature throws a hallucinated error in production, I don't have to roll back the whole build; I toggle that specific feature off from the Control Plane instantly. (Roughly 70% working.)
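Here is one way the kill-switch could be structured: each Feature Card wraps its handler, and a broken card disables only itself while the Control Plane can flip it back on. The class and method names are illustrative, not the actual Shortshub API.

```python
"""Sketch: toggleable Feature Cards with a runtime kill-switch."""


class FeatureCard:
    def __init__(self, name: str, handler):
        self.name = name
        self.handler = handler
        self.enabled = True

    def run(self, *args, **kwargs):
        if not self.enabled:
            return {"status": "disabled", "feature": self.name}
        try:
            return self.handler(*args, **kwargs)
        except Exception as exc:
            # Auto-trip: a failing feature disables only itself,
            # never the rest of the build.
            self.enabled = False
            return {"status": "tripped", "feature": self.name, "error": str(exc)}


class ControlPlane:
    def __init__(self):
        self.cards: dict = {}

    def register(self, card: FeatureCard):
        self.cards[card.name] = card

    def toggle(self, name: str, on: bool):
        """The manual kill-switch exposed by the Control Plane UI."""
        self.cards[name].enabled = on
```

The design choice here is that isolation is the default: a card can fail closed on its own, and re-enabling is an explicit operator action.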
3. Debug Memory & Dependency Graphs
I’m attempting to log common agent errors into a dedicated panel and feed that "debug path" back into the next prompt.
Architecture Log: Working on a visual graph to show how Fractal cells connect (Not working currently).
Debug Memory: Useful about 50% of the time for preventing repetitive logic errors.
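The debug-memory loop described above can be sketched roughly like this: record errors per cell, and only prepend the repeat offenders to the next prompt so tokens aren't wasted on one-off mistakes. The storage format and threshold are my assumptions; the post only describes the intent.

```python
"""Sketch: logging agent errors and feeding them back into prompts."""
from collections import Counter


class DebugMemory:
    def __init__(self):
        self.errors = Counter()  # keyed by (cell, error message)

    def record(self, cell: str, error: str):
        self.errors[(cell, error)] += 1

    def hints_for(self, cell: str, min_count: int = 2) -> list:
        """Only repeat offenders make it into the prompt."""
        return [
            f"Known pitfall in {c}: {e} (seen {n}x)"
            for (c, e), n in self.errors.items()
            if c == cell and n >= min_count
        ]


def build_prompt(task: str, memory: DebugMemory, cell: str) -> str:
    """Prepend the cell's recurring failure modes to the agent's task."""
    hints = memory.hints_for(cell)
    return "\n".join(hints) + "\n\n" + task if hints else task
```
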
I’m building this using primarily free-tier LLMs. The goal is to see whether Context Engineering (structuring the repo for the AI) can beat raw model power.
Token Optimization: High. Agents only "see" relevant feature folders.
Speed: High. Features are built in isolation, then "plugged" into the Kernel, though there are limits.
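The token-optimization claim above boils down to context scoping: the agent's visible file set is the target cell's folder plus its declared dependencies, never the whole repo. A minimal sketch, assuming a registry shaped like the manifest discovery described earlier (the field names are illustrative):

```python
"""Sketch: building an agent's context from one cell's folder + deps."""
from pathlib import Path


def cell_context(registry: dict, cell: str) -> list:
    """Return the file paths an agent is allowed to read for this task.

    `registry` maps cell name -> {"dir": path, "deps": [cell names]}.
    """
    visible = [cell] + registry[cell].get("deps", [])
    files = []
    for name in visible:
        cell_dir = Path(registry[name]["dir"])
        files.extend(str(p) for p in cell_dir.rglob("*.py"))
    return files
```

Everything outside the returned list simply never enters the prompt, which is where the token savings come from.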
Disclaimer & Open Source
This is highly experimental "Vibe Coding" tempered by structural guardrails. I’m looking for feedback from anyone working on Multi-Agent Orchestration or Micro-frontend patterns.
Check out the demo website: www.shortshub.app
Poke the code on GitHub: Maqsood32595/fractal-kernel
Any feedback, interactions, or suggestions are welcome.