I’ve been working on a new project called Connex AGI, a system designed to be more than just another chatbot. The goal is to build a "compiler for human intent"—a system that transforms nebulous user goals into structured, executable programs.
We are aiming for a biological cognitive model, integrating deliberative reasoning, perception, reflexes, and memory into a cohesive whole. I am opening up the repository for community review and would love feedback on the underlying architecture.
The Architecture: A Biological Approach
Connex AGI implements a multi-tier architecture that mimics biological systems. Instead of a single LLM loop, we split the cognitive load across specialized layers.
1. The Senses (Perception & Reflexes)
Before the "brain" even processes a request, the Perception Layer (Tier Peer) gathers real-time data—reading logs, analyzing video streams—via the Model Context Protocol (MCP) to ground the AI in reality. Simultaneously, the Reflex Layer handles high-speed, unconditional responses. Like a nervous system, it executes pre-programmed plans (e.g., in response to a GitHub webhook) without waiting for the slower, more expensive reasoning of the Planner.
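The reflex idea can be sketched in a few lines: handlers are registered ahead of time for known event types and fire immediately, bypassing the Planner entirely. This is a minimal illustration—`ReflexLayer`, `on_event`, and the event names are hypothetical, not the repo's actual API.

```python
from typing import Any, Callable, Dict

class ReflexLayer:
    """Dispatches known events to pre-programmed plans without any LLM call."""

    def __init__(self) -> None:
        self._handlers: Dict[str, Callable[[dict], Any]] = {}

    def on_event(self, event_type: str):
        """Register a pre-programmed plan for a known event type."""
        def decorator(fn: Callable[[dict], Any]):
            self._handlers[event_type] = fn
            return fn
        return decorator

    def handle(self, event_type: str, payload: dict):
        """Fire the reflex if one exists; otherwise return None so the
        caller can escalate to the slower Planner tier."""
        if event_type in self._handlers:
            return self._handlers[event_type](payload)  # fast path
        return None

reflexes = ReflexLayer()

@reflexes.on_event("github.push")
def rerun_ci(payload: dict) -> str:
    return f"queued CI for {payload['ref']}"

print(reflexes.handle("github.push", {"ref": "main"}))  # → queued CI for main
```

Unrecognized events fall through (`handle` returns `None`), which is the point where the system would hand the request up to the Planner.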
2. The Core Brain (Planner & Orchestrator)
This is where the heavy lifting happens:
• Tier 1 (The Planner): Uses reasoning models like DeepSeek-R1 or OpenAI's o1 to decompose natural-language goals into a Directed Acyclic Graph (DAG) of actions.
• Tier 2 (The Orchestrator): Acts as the manager. It handles state management, routes outputs from one step into the inputs of the next, and self-corrects when a step fails.
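The Planner→Orchestrator handoff described above can be sketched as a tiny DAG executor: the Planner emits steps plus their dependencies, and the Orchestrator runs them in topological order, routing each step's output into its dependents and retrying a failed step once. All names here are illustrative, not the project's actual interfaces.

```python
from graphlib import TopologicalSorter

def run_dag(steps, deps):
    """steps: name -> callable(inputs dict) -> output; deps: name -> [parents]."""
    results = {}
    for name in TopologicalSorter(deps).static_order():
        # Route each parent's output into this step's inputs.
        inputs = {parent: results[parent] for parent in deps.get(name, [])}
        try:
            results[name] = steps[name](inputs)
        except Exception:
            # Naive self-correction: retry the step once before giving up.
            results[name] = steps[name](inputs)
    return results

# A toy three-step plan a Planner might emit for "write me a report":
steps = {
    "fetch":     lambda _: "raw data",
    "summarize": lambda i: f"summary of {i['fetch']}",
    "report":    lambda i: f"report: {i['summarize']}",
}
deps = {"fetch": [], "summarize": ["fetch"], "report": ["summarize"]}

print(run_dag(steps, deps)["report"])  # → report: summary of raw data
```

A real orchestrator would persist `results` as durable state and apply a smarter retry policy (e.g., re-planning the failed subgraph), but the routing logic is the same.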
3. Execution & Evolution
• Tier 3 (SkillDock): The modular worker layer where specific tools (web search, code execution) live.
• Tier 4 (Motivation): A self-improvement loop. The system reviews its own logs after execution. If it failed due to a missing capability, it autonomously generates and installs new skills.
• Tier 5 (World Layer): A "theory of physics" for the AGI. It uses a latent model to predict state transitions and verify whether an action is physically possible.
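The World Layer's "verify before act" idea boils down to: predict the next state for a proposed action, and reject actions whose predicted transition violates a constraint. Here is a toy illustration with a hand-written transition function standing in for the learned latent model; all names are hypothetical.

```python
def predict(state: dict, action: str) -> dict:
    """Stand-in for the learned latent transition model."""
    nxt = dict(state)
    if action == "delete_file":
        nxt["files"] = max(0, state["files"] - 1)
    return nxt

def is_feasible(state: dict, action: str) -> bool:
    """Verify an action by checking its predicted transition is valid:
    e.g., you cannot delete a file when none exist."""
    nxt = predict(state, action)
    return nxt["files"] >= 0 and not (action == "delete_file" and state["files"] == 0)

print(is_feasible({"files": 1}, "delete_file"))  # → True
print(is_feasible({"files": 0}, "delete_file"))  # → False
```

The Orchestrator would consult this check before dispatching each DAG step, pruning plans that are predicted to be impossible rather than discovering the failure at execution time.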
4. The Hive Mind (Registry)
Perhaps the most ambitious part is Tier 10: The Registry. This allows AGIs to share skills and reflexes. If your instance encounters a problem it can't solve, it can query the global registry to download the necessary "knowledge" learned by another AGI.
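The registry lookup can be sketched as a fallback path in the SkillDock: if a capability is missing locally, query the shared registry and install what another instance has already learned. The in-memory dict stands in for the real networked registry, and `SkillDock`/`acquire` are illustrative names, not the repo's API.

```python
# Skill name -> skill code, as published by other AGI instances (toy stand-in
# for the real global registry service).
GLOBAL_REGISTRY = {"pdf_extract": "def run(doc): ..."}

class SkillDock:
    def __init__(self) -> None:
        self.skills: dict[str, str] = {}

    def acquire(self, name: str) -> bool:
        """Return True if the skill is available locally after this call."""
        if name in self.skills:
            return True                      # already installed
        code = GLOBAL_REGISTRY.get(name)     # knowledge learned by another AGI
        if code is None:
            return False                     # nobody has solved this yet
        self.skills[name] = code             # "download and install" the skill
        return True

dock = SkillDock()
print(dock.acquire("pdf_extract"))  # → True
print(dock.acquire("quantum_sim"))  # → False
```

A production version would need signing and sandboxing before installing downloaded code—arguably the hardest open problem in this tier.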
5. Memory (The Experience)
Connex AGI implements a dual-tier memory system to solve the "amnesia" problem common in LLMs.
• Short-Term (The Cache): A volatile, RAM-based layer that holds the last 10 interactions, keeping immediate dialogue flow latency-free.
• Long-Term (The Archive): A persistent SQLite vector database. Instead of just searching by keyword, the system uses cosine similarity to find "top-match memories," allowing it to recall relevant context from months ago based on meaning rather than just dates.
• Experience Notes: To prevent data bloat, the system runs a daily summarization process, compressing raw logs into high-level "Experience Notes" that are easier for the Planner to recall.
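The long-term recall described above is straightforward to sketch: embeddings live alongside text in SQLite, and recall ranks rows by cosine similarity to the query vector. The 3-d vectors are toy stand-ins for real embeddings, and the table/column names are assumptions.

```python
import json
import math
import sqlite3

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE memories (text TEXT, embedding TEXT)")
db.executemany("INSERT INTO memories VALUES (?, ?)", [
    ("deployed the billing service", json.dumps([0.9, 0.1, 0.0])),
    ("discussed vacation plans",     json.dumps([0.0, 0.2, 0.9])),
])

def recall(query_vec, top_k=1):
    """Return the top-match memories by meaning, not keyword or date."""
    rows = db.execute("SELECT text, embedding FROM memories").fetchall()
    scored = [(cosine(query_vec, json.loads(emb)), text) for text, emb in rows]
    return [text for _, text in sorted(scored, reverse=True)[:top_k]]

print(recall([1.0, 0.0, 0.0]))  # → ['deployed the billing service']
```

At scale you would precompute norms or use a vector extension rather than scanning every row, but the ranking principle is the same.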
Request for Comments: What I Need From You
I am looking for critical feedback on the following:
1. Complexity vs. Utility: Is the 8-tier (or 10-tier) separation necessary, or could the Planner and Orchestrator be merged without losing reliability?
2. Latency: With separate layers for Perception, Planning, and Execution, do you foresee major latency bottlenecks?
3. The World Layer: Is the concept of a "Latent Metaphysical Core" to verify actions practical in a software agent context?
Check Out the Code
The system is built primarily in Python (74%) and TypeScript.
• Repo: github.com/kanephamphu/connex-openagi
• Docs: See ARCHITECTURE.md and agi/SOUL.md for the ethical constitution.
I appreciate every star, fork, and code review!

