Mak Sò

🧠 I Didn't Know Where It Was Going. I Just Kept Going.

I didn’t build orka-reasoning with a final goal in mind. There was no pitch deck. No spec. No clean architectural blueprint.

Just a rough feeling.
That something was off in how we work with AI.
That most “agent frameworks” were just 🩹 prompt glue. And that actual cognition needed orchestration.

So I wrote a node. Then another. Then a very dumb orchestrator. And… things started to work. Not well. Not consistently. But honestly. The agents flowed. The memory kicked in. The logic started to feel... alive.

Every time I touched it, I went deeper. What started as a Redis-agent experiment spiraled into something more serious. I stopped steering and started following.
Like watching ants find their path.


🌀 There Was No Plan. Just Emergence.

The fork node? Neat. The join node? Tricky.

But the loop node?
That’s when it all clicked. It cracked the system open. Showed me what this really was:

Not a pipeline.
Not a workflow.
But a recursive, branching, self-aware cognition graph.

Suddenly agents weren’t just executors. They were perspectives. And the system wasn’t just sequencing them. It was reflecting on them.

🔁 Revisiting
🧪 Correcting
🧠 Adapting
🚀 In a loop.

The first time I watched a loop agent self-correct mid-run, I froze. That’s when I realized:

OrKa wasn’t a framework.
It was a substrate for reasoning.


🧱 I Didn’t Architect This. I Discovered It.

When people ask what OrKa is, I still hesitate. Because I didn’t build it in the traditional sense.

I uncovered it.

  • Buried in YAMLs.
  • Hidden in Redis logs.
  • Hinted at by every “wrong” decision that turned out to be essential.

It only started making sense after it started working. And ironically, the more I tried to modularize it, the more it came alive.

Cognition doesn’t care about modules.
It cares about feedback.
About loops.


🔄 OrKa-Reasoning Wasn’t Supposed To Happen.

I had other things to do. Roadmaps. Deadlines. Real work™. But this rough idea wouldn’t let go.

What if AI flows weren’t linear? What if they were topologies? Stateful, revisitable, dynamically traversable maps?

And what if agents didn’t just execute… but chose their path, based on context, confidence, and memory?

So I built:

🧭 a router node
📊 confidence-weighted routing
🔁 loop detection
🧠 scoped memory
📄 YAML-defined execution graphs
🪵 full Redis/Kafka trace logging

Every “messy” idea became core.


🚫 Final Scope? Still Don’t Have One.

Where’s OrKa going? No idea!

But here’s what I know now:
1. It’s not about “chatbot agents.”
2. It’s about cognitive substrates.

Systems where reasoning is:

✅ observable
🧠 memory-aware
🌿 branching
🔁 recursive
🧬 and fundamentally emergent

  • No black box.
  • No UX sugar.
  • No prompt chains.

Just raw, explainable cognition. And it all started with a dumb loop.


🔍 What Makes OrKa Different?

Let’s break it down technically:

🌐 Execution Strategy:
Everything runs as a graph. You can fork paths, join them, or let routers decide.
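
To make that concrete, here’s a toy sketch of a fork/join graph and a naive walk over it. The node names and the dict schema are my illustration, not OrKa’s actual YAML format:

```python
# Toy sketch of a fork/join execution graph (illustrative schema, not OrKa's).
graph = {
    "classify":   {"type": "agent",  "next": ["fork_1"]},
    "fork_1":     {"type": "fork",   "next": ["summarize", "fact_check"]},
    "summarize":  {"type": "agent",  "next": ["join_1"]},
    "fact_check": {"type": "agent",  "next": ["join_1"]},
    "join_1":     {"type": "join",   "next": ["router"]},
    "router":     {"type": "router", "next": []},  # a router decides at runtime
}

def walk(start: str) -> None:
    """Breadth-first walk that just prints the topology; real runs are stateful."""
    frontier, seen = [start], set()
    while frontier:
        node = frontier.pop(0)
        if node in seen:
            continue  # a join is reachable from both branches; visit it once
        seen.add(node)
        print(node, "->", graph[node]["next"])
        frontier.extend(graph[node]["next"])

walk("classify")
```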

📈 Confidence Weighting:
Each agent emits a probability distribution over where to go next.
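
A minimal sketch of acting on that distribution, assuming the agent hands back a plain node-id → confidence mapping (all names here are hypothetical):

```python
import random

def pick_next(confidence: dict[str, float]) -> str:
    """Sample the next node id from an agent's confidence distribution."""
    nodes = list(confidence)
    return random.choices(nodes, weights=[confidence[n] for n in nodes], k=1)[0]

# An agent that's 80% sure the answer still needs fact-checking:
print(pick_next({"fact_check": 0.80, "summarize": 0.15, "done": 0.05}))
```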

🧠 Memory Layers:
Scoped memory with TTL and decay. Inspired by cognitive architectures.
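
One way to sketch TTL plus decay, assuming exponential decay by age and a hard cutoff; the half-life and TTL values are illustrative, not OrKa’s defaults:

```python
import math
import time

class ScopedMemory:
    """Toy per-scope memory with TTL eviction and exponential score decay."""

    def __init__(self, ttl_s: float = 300.0, half_life_s: float = 60.0):
        self.ttl_s = ttl_s
        self.half_life_s = half_life_s
        self._store: dict[str, list[tuple[float, str]]] = {}  # scope -> [(ts, text)]

    def write(self, scope: str, text: str) -> None:
        self._store.setdefault(scope, []).append((time.time(), text))

    def read(self, scope: str) -> list[tuple[float, str]]:
        """Return (weight, text) pairs; expired entries are dropped entirely."""
        now = time.time()
        kept = []
        for ts, text in self._store.get(scope, []):
            age = now - ts
            if age > self.ttl_s:
                continue  # past its TTL: gone
            weight = math.exp(-age * math.log(2) / self.half_life_s)
            kept.append((weight, text))
        return kept
```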

🧩 Service Nodes:
Non-agent logic transformers (RAG, memory writers, embedding fetchers).
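
In spirit, a service node is just a pure payload transformer sitting in the graph. A hypothetical interface (the names are mine, not the library’s):

```python
from typing import Protocol

class ServiceNode(Protocol):
    """A non-agent node: takes the flowing payload, returns a transformed one."""
    def run(self, payload: dict) -> dict: ...

class EmbeddingFetcher:
    """Example service node: attaches an embedding to the payload."""
    def run(self, payload: dict) -> dict:
        payload["embedding"] = [0.0] * 384  # stub vector; a real node calls a model
        return payload
```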

🛠️ Observability:
Every run is logged in Redis Streams. Nothing happens without traceability.
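
With redis-py, appending a trace event to a stream is a one-liner. The stream key and fields below are my own illustration of the pattern, not OrKa’s actual schema:

```python
import json
import redis  # pip install redis

r = redis.Redis()  # assumes a local Redis instance

def log_step(run_id: str, node: str, output: dict) -> None:
    """Append one execution step to a per-run stream via XADD."""
    r.xadd(f"orka:trace:{run_id}", {"node": node, "output": json.dumps(output)})
```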

🧭 Router Agents:
They route based on output structure and score, not hardcoded rules.
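
A toy version of structure-plus-score routing, with all names and thresholds hypothetical:

```python
def route(output: dict, threshold: float = 0.7) -> str:
    """Pick the next node from what the output contains and how confident it is."""
    if output.get("score", 0.0) < threshold:
        return "retry_loop"   # low confidence: loop back and try again
    if "citations" in output:
        return "fact_check"   # structured evidence present: go verify it
    return "finalize"

print(route({"score": 0.9, "citations": ["..."]}))  # -> fact_check
```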

This isn’t glue. This is infrastructure.


💬 Realizations Along the Way

Each time I implemented something ugly, it turned out to be fundamental.

🚫 Forks without proper targets? Became dynamic fork groups.
🚫 Join logic breaking? Led to modular merging strategies.
🚫 Memory pollution? Pushed me to scope by node and TTL.
🚫 No clear loop closure? I wrote an execution clock (sketched below).
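
I can only hint at the real design here, but the core idea of an execution clock, a tick counter that bounds and closes loops, can be sketched like this (everything below is illustrative):

```python
class ExecutionClock:
    """Toy loop-closure guard: each iteration ticks; past max_ticks, the loop closes."""

    def __init__(self, max_ticks: int = 10):
        self.max_ticks = max_ticks
        self.tick = 0

    def advance(self) -> bool:
        """Return True while the loop is still allowed to continue."""
        self.tick += 1
        return self.tick <= self.max_ticks
```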

None of this was planned. All of it emerged.


🛣️ Where It’s Going

Right now?
OrKa is becoming the cognitive kernel for something deeper:

🧠 Language agent systems that aren’t chains
🔎 Interpretable cognition maps
📊 Visual trace replays (via OrKaUI)
🔗 Real-time looped reasoning with memory decay
📤 Deployable YAML flows as reproducible logic graphs

It’s weird. It’s messy. It works.


💻 Curious?

The evolving repo → https://github.com/marcosomma/orka-reasoning

Top comments (3)

Ashley Childress

This is such a cool idea 🌱 and your breakdowns are both impressive and readable — rare combo! Your repo’s already on my stalk list 👀 and if you ever need a rubber duck, I’ve got wall space and no shortage of opinions 😇

Really curious to see where this goes!

Mak Sò

🙏 Thanks! That means a lot. I’ve been building this mostly in the dark, so hearing it’s readable and worth stalking is strangely validating 😅

Definitely might take you up on the duck offer. Opinions are gold when you’re knee-deep in cognitive graphs and wondering if you’re just reinventing spaghetti with extra steps. This thing’s unfolding fast, not sure where it lands, but I’m following it wherever it goes.

Let’s keep orbit. 🧠🔁

Ashley Childress

Reach out any time! Happy to be a sounding board 😀