
John Doe

I built ARIA - Adaptive Resonant Intelligent Architecture

What's up people,

Built something that's been running locally on my system: ARIA (Adaptive Resonant Intelligent Architecture).

GitHub: https://github.com/dontmindme369/ARIA

Before you ask: No, it's not "just another RAG system." Let me explain what it actually is.


What ARIA Is (The Real Answer)

ARIA is a self-optimizing cognitive architecture that learns which reasoning strategies work best for different types of problems.

Traditional RAG = retrieve documents, stuff them in context, hope the LLM figures it out

ARIA = meta-learning system that develops strategic intelligence about how to approach problems

The difference: RAG systems retrieve information. ARIA learns how to think about information.


The Architecture

Multi-Anchor Reasoning

8 distinct reasoning modes (formal, casual, technical, educational, philosophical, analytical, factual, creative). Not just different retrieval strategies - different ways of thinking about problems.

Each anchor has its own:

  • Retrieval approach
  • Context assembly logic
  • Reasoning framework
  • Quality criteria

The system doesn't just search differently. It reasons differently based on what you're asking.
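
To make that concrete, here's a simplified sketch of what an anchor bundles together. The class and field names here are illustrative, not the repo's actual code:

from dataclasses import dataclass, field

@dataclass
class ReasoningAnchor:
    name: str                     # e.g. "analytical", "creative"
    retrieval_strategy: str       # which exploration method this mode favors
    context_template: str         # how retrieved chunks get assembled
    quality_criteria: dict = field(default_factory=dict)  # per-mode scoring thresholds

ANCHORS = {
    "analytical": ReasoningAnchor("analytical", "pca_rotation",
                                  "structured, evidence-first",
                                  {"min_coverage": 0.7}),
    "creative": ReasoningAnchor("creative", "golden_spiral",
                                "associative, cross-domain",
                                {"min_diversity": 0.5}),
}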

Golden Ratio Spiral Exploration

Here's where it gets mathematically interesting:

ARIA uses φ (phi, the golden ratio ≈ 1.618) for optimal query exploration in semantic space. Instead of grid search or random sampling:

  1. Projects query into high-dimensional embedding space
  2. Uses golden ratio for rotation angles: φ, φ², φ³, etc.
  3. Generates exploration directions using spherical cap sampling
  4. Achieves optimal coverage with minimal redundancy

Why golden ratio? It's provably optimal for uniform distribution in spiral patterns. Same reason sunflower seeds use φ for packing efficiency.
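
Here's a minimal NumPy sketch of the idea: the azimuth steps by the golden angle, the tilt spirals outward like Vogel's model, and every variant stays inside a small cap around the original query. Illustrative only - the repo's actual sampler differs in the details:

import numpy as np

PHI = (1 + 5 ** 0.5) / 2  # golden ratio

def golden_spiral_queries(query_vec, n_rotations=3, cap_angle=0.3, seed=0):
    """Spread query variants over a spherical cap around the embedding
    using a golden-angle (Vogel) spiral."""
    rng = np.random.default_rng(seed)
    q = query_vec / np.linalg.norm(query_vec)

    # Fix a 2D tangent plane orthogonal to q (Gram-Schmidt on random vectors)
    r = rng.standard_normal(q.shape)
    r -= r.dot(q) * q
    r /= np.linalg.norm(r)
    s = rng.standard_normal(q.shape)
    s -= s.dot(q) * q
    s -= s.dot(r) * r
    s /= np.linalg.norm(s)

    variants = []
    for i in range(1, n_rotations + 1):
        theta = 2 * np.pi * i / PHI                   # golden-angle azimuth
        tilt = cap_angle * np.sqrt(i / n_rotations)   # spiral outward from q
        direction = np.cos(theta) * r + np.sin(theta) * s
        variants.append(np.cos(tilt) * q + np.sin(tilt) * direction)
    return variants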

PCA Subspace Rotations

Query exploration in full embedding space is computationally expensive. ARIA uses PCA (Principal Component Analysis) to:

  1. Compute low-dimensional subspace of corpus (~50-100 dimensions vs 768+)
  2. Rotate queries in this learned subspace
  3. Generate diverse retrieval angles efficiently
  4. Explore semantic neighborhoods systematically

Result: Find relevant documents that exact similarity search would miss.
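
A simplified sketch of the mechanic - the rotation schedule and the choice of plane here are placeholders, not the repo's actual code:

import numpy as np
from sklearn.decomposition import PCA

def pca_rotated_queries(corpus_embeddings, query_vec, n_angles=5, dim=50):
    """Rotate the query inside a low-dimensional PCA subspace of the corpus,
    then map each rotated variant back to full embedding space."""
    pca = PCA(n_components=dim).fit(corpus_embeddings)
    q_low = pca.transform(query_vec.reshape(1, -1))[0]      # project to subspace

    variants = []
    for k in range(1, n_angles + 1):
        angle = np.pi / 12 * k          # small rotations; the actual schedule is a design choice
        # rotate in the plane of the two leading principal components
        rot = np.eye(dim)
        rot[0, 0] = rot[1, 1] = np.cos(angle)
        rot[0, 1], rot[1, 0] = -np.sin(angle), np.sin(angle)
        back = pca.inverse_transform((rot @ q_low).reshape(1, -1))[0]
        variants.append(back)            # back in full space, ready for retrieval
    return variants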

Quaternion Contextual Rotation Matrices

For semantic space transformations, ARIA uses quaternion rotations instead of standard matrix rotations:

  • Quaternions provide smooth interpolation in rotation space
  • Avoid gimbal lock in high-dimensional spaces
  • Efficient composition of multiple rotations
  • Mathematically elegant for 3D+ rotations

Allows ARIA to "rotate" understanding of context while preserving semantic structure.
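
For reference, here are the two core quaternion operations in plain NumPy - Hamilton-product composition and SLERP. How ARIA maps these onto embedding subspaces is in the repo docs; this just shows the math:

import numpy as np

def q_mul(a, b):
    """Hamilton product a ⊗ b, quaternions stored as [w, x, y, z]."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def slerp(q0, q1, t):
    """Spherical linear interpolation between two unit quaternions."""
    dot = np.clip(np.dot(q0, q1), -1.0, 1.0)
    if dot < 0:                      # take the shorter arc
        q1, dot = -q1, -dot
    theta = np.arccos(dot)
    if theta < 1e-6:                 # nearly identical rotations
        return q0
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)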

Thompson Sampling Meta-Learner

This is the key: ARIA learns at the strategy level, not the document level.

It's not learning "which documents are good." It's learning "which exploration strategy + reasoning mode works best for which types of problems."

Tracks:

  • Query features (complexity, domain, temporal markers, ambiguity)
  • Exploration effectiveness (golden ratio spirals vs PCA rotations vs direct search)
  • Reasoning mode success per query type
  • Strategic combinations that work

Then optimizes: "For technical queries with X features, analytical anchor + PCA rotation + 5 spiral steps = best results"
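
Stripped to its core, the strategy-level bandit looks roughly like this (names and arms are illustrative, not the repo's API):

import numpy as np

class StrategyBandit:
    """One Beta posterior per (anchor, exploration strategy) arm."""

    def __init__(self, arms, seed=0):
        self.rng = np.random.default_rng(seed)
        self.alpha = {arm: 1.0 for arm in arms}   # prior successes
        self.beta = {arm: 1.0 for arm in arms}    # prior failures

    def select(self):
        # Thompson sampling: draw once from each posterior, play the best draw
        samples = {arm: self.rng.beta(self.alpha[arm], self.beta[arm])
                   for arm in self.alpha}
        return max(samples, key=samples.get)

    def update(self, arm, reward):
        # Bayesian update with a fractional reward in [0, 1]
        self.alpha[arm] += reward
        self.beta[arm] += 1.0 - reward

bandit = StrategyBandit([
    ("analytical", "pca_rotation"),
    ("technical", "golden_spiral"),
    ("factual", "direct_search"),
])
arm = bandit.select()
bandit.update(arm, reward=0.8)   # reward derived from conversation scoring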

Epistemic Curiosity Engine

ARIA knows when it doesn't know.

Traditional system: Returns whatever it finds, LLM tries to make sense of it

ARIA:

  1. Retrieves context using exploration strategies
  2. Analyzes for knowledge gaps
  3. Detects contradictions or missing information
  4. Asks clarifying questions before answering
  5. Tracks conversation state
  6. Refines understanding iteratively

It doesn't just answer. It thinks about whether it can answer well.
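
A toy sketch of the gap check - the thresholds and the clarifying question are invented here, purely to show the shape of the logic:

import numpy as np

def detect_gap(query_emb, facet_embs, chunk_embs,
               joint_threshold=0.45, facet_threshold=0.55):
    """If each query facet is covered separately but nothing covers the
    combined query, flag a knowledge gap and ask before answering."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    facet_covered = [max(cos(f, c) for c in chunk_embs) >= facet_threshold
                     for f in facet_embs]
    joint_covered = max(cos(query_emb, c) for c in chunk_embs) >= joint_threshold

    if all(facet_covered) and not joint_covered:
        return ("I have material on each topic separately, but little on their "
                "intersection. Are you after theoretical proposals or experimental evidence?")
    return None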


Concrete Example

You ask: "What are the latest findings on quantum entanglement in biological systems?"

Traditional RAG Response

  1. Embedding similarity search
  2. Return top 5 chunks
  3. Stuff in context
  4. Generate

ARIA Response

  1. Detects temporal marker ("latest") + cross-domain query (quantum + biology)
  2. Golden ratio spiral: Explores semantic neighborhoods around query (φ, φ², φ³ rotations)
  3. PCA rotation: Rotates query in learned biology-physics subspace (finds adjacent concepts)
  4. Quaternion transformation: Adjusts contextual framing between quantum mechanics and biology frames
  5. Retrieves from multiple exploration angles
  6. Analyzes coverage: Has quantum papers, has biology papers, sparse on intersection
  7. Curiosity engine triggers: "I have material on quantum entanglement and on biological systems separately, but limited on their intersection. Are you interested in theoretical proposals or experimental evidence?"
  8. Learns: Cross-domain + temporal queries benefit from aggressive exploration + gap detection
  9. Bandit updates: "Technical anchor + golden ratio (3 rotations) + curiosity = effective for this query type"

Next time someone asks a similar cross-domain temporal question, ARIA starts with that learned strategy.


Why This Matters

This isn't retrieval optimization. It's meta-learning for strategic reasoning with mathematical rigor.

The golden ratio spiral exploration and quaternion rotations aren't just fancy math - they're provably optimal approaches to semantic space exploration:

  • Golden ratio: Optimal uniform distribution (Vogel 1979, phyllotaxis literature)
  • PCA: Dimensionality reduction preserving variance structure
  • Quaternions: Smooth rotation interpolation without singularities

Combined with meta-learning, the system learns:

  • Which exploration strategies work for which problems
  • When to use aggressive exploration vs direct retrieval
  • How to adapt reasoning based on query type
  • What strategies are effective

That's closer to actual intelligence than "cosine similarity + hope."


Technical Stack

Backend:

  • Python 3.8+
  • Thompson Sampling contextual bandits
  • BM25 + embeddings + cross-encoder reranking
  • Golden ratio spiral exploration (φ-based rotation angles)
  • PCA subspace projections for efficient rotation
  • Quaternion operations for semantic transformations
  • Conversation state tracking

Math Libraries:

  • NumPy for quaternion operations
  • scikit-learn for PCA
  • Custom golden ratio spiral implementation
  • Spherical cap sampling (Arvo 1995)

Learning Loop:

  • Automatic conversation scoring from LM Studio logs
  • Closed-loop bandit reward updates
  • Feature extraction and pattern recognition
  • Episodic memory with strategic learning

Integration:

  • 100% local (zero cloud)
  • LM Studio plugin (TypeScript)
  • Works with any local LLM
  • YAML-based configuration

Current State

✅ Full v44 Multi-Anchor System

✅ 8 reasoning modes operational

✅ Golden ratio spiral exploration implemented

✅ PCA subspace rotations working

✅ Quaternion contextual matrices operational

✅ Thompson Sampling meta-learner working

✅ Epistemic curiosity engine functional

✅ Closed-loop learning from conversations

✅ Wave 1/2 advanced reasoning (temporal, contradiction detection, uncertainty quantification, multi-hop reasoning)


Try It

git clone https://github.com/dontmindme369/ARIA.git
cd ARIA
pip install -r requirements.txt

# Configure config.yaml with your data paths
# Add documents to ./data/

# Basic query
python src/aria_main.py "Your question here"

# With golden ratio exploration (3 rotations)
python src/aria_main.py --golden-spiral 3 "Your question"

# With PCA subspace rotation (5 angles, 50-dim subspace)
python src/aria_main.py --pca-rotations 5 --subspace-dim 50 "Your question"

Full documentation in the repo.


What I'm NOT Claiming

❌ This is AGI

❌ It's "conscious"

❌ It's perfect or finished

❌ It replaces your LLM


What It IS

✅ Meta-learning cognitive architecture

✅ Self-optimizing reasoning system

✅ Mathematically rigorous exploration (golden ratio spirals, PCA, quaternions)

✅ Actually learns and improves over time

✅ Different paradigm from static RAG

✅ Epistemically aware (knows what it doesn't know)


The Math Behind It

Golden Ratio Spiral:

  • Rotation angle: θᵢ = 2π × i / φ where φ = (1+√5)/2
  • Achieves optimal packing (Vogel's model)
  • Minimal overlap, maximum coverage

PCA Rotation:

  • Projects corpus to k-dimensional subspace preserving variance
  • Rotates query in learned subspace
  • Inverse transform back to full space
  • Reduces exploration cost from O(d²) to O(k²) where k << d

Quaternion Rotation:

  • Represent rotations as q = w + xi + yj + zk
  • Smooth interpolation via SLERP
  • No gimbal lock
  • Efficient composition: q₁ ⊗ q₂

Thompson Sampling:

  • Beta distributions for each strategy
  • Bayesian update: α ← α + reward, β ← β + (1 - reward)
  • Sample from distributions for exploration/exploitation
  • Converges to optimal strategy

Looking For

Technical feedback. Especially if you know:

  • Phyllotaxis and optimal packing theory
  • High-dimensional rotation mathematics
  • Contextual bandit theory
  • Meta-learning architectures

Try it. Break it. Tell me what's wrong.

If you think golden ratio spirals are overkill, explain why uniform sampling is better. If you think quaternions are unnecessary, show me why the gimbal-lock cases don't matter here.

If you can see what I'm building here, let's talk, because honestly, I've got many, many more ideas.


GitHub: https://github.com/dontmindme369/ARIA

License: CC BY-NC-SA 4.0 (Free for non-commercial use - research, education, personal projects)

Nothing here requires payment for full use. No cloud, no API, no BS. The only thing I'll actually be selling is the optional LM Studio plugin (which took FOREVER to get working with ARIA); you can find more information about it on my GitHub. It is not required for the backend system to work. It just gives you a direct connection to LM Studio via "lms dev", and its promptPreprocessor carries meta-pattern instructions for how the model receives information and should behave and respond (yes, this bypasses all offline model alignment).

Architecture docs in /docs/ have full system design and mathematical foundations. Code has detailed implementation of all algorithms.


If we're building local AI, let's make it actually intelligent. Let's come together, actually use our brains for once, and build something great. 🚀
