Mak Sò

🧠 Building neuro-symbolic AI Alone... Help is welcome

Hi DEV,

I'm Marco. Four months ago I started building OrKa, a modular, YAML-defined orchestration engine for agentic reasoning. Today it’s not just a prototype: it’s a functional system with live memory, explainable flows, and a full visual builder. It’s also way too big for one person to maintain.

I’m doing this solo. One brain, one laptop, one repo. OrKa has grown into a cognitive infrastructure stack: structured memory layers, Redis/Kafka queues, traceable agent chains, a visual UI, and benchmarked orchestration loops. Version 0.8.0 is out. It works. But it’s a grind, and I need help.


Why OrKa Exists: To Kill Black-Box AI

LangChain, Flowise, AutoGen: they’re building chains and calling it cognition. But try tracing memory across agents, understanding why a step was taken, or observing reasoning in real time. You can’t.

OrKa is different.

  • Modular agents with independent logic and memory
  • YAML-defined cognition flows, not spaghetti scripts
  • Live Redis/Kafka trace logging for every agent decision
  • Observable UI + terminal-based TUI
  • 6-layer memory model, with decay and scoped storage
  • Confidence-weighted routing to simulate dynamic reasoning (sketched below)

This isn’t “prompt chaining.” It’s cognitive orchestration: explainable, testable, and local.
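
To make confidence-weighted routing concrete, here is a minimal, framework-agnostic sketch of the idea (this is not OrKa’s internal API, and the agent ids and scores are invented for illustration): each candidate branch reports a confidence score, and the router samples the next agent in proportion to those scores.

import random

def route_by_confidence(candidates: dict[str, float], temperature: float = 1.0) -> str:
    """Pick the next agent id, weighting each branch by its confidence score.

    `candidates` maps agent ids to confidences in [0, 1]. Higher temperature
    flattens the distribution; temperature near 0 approaches greedy argmax.
    Illustrative sketch only, not OrKa's actual router.
    """
    if not candidates:
        raise ValueError("no candidate branches to route to")
    # Sharpen or flatten the scores, then sample proportionally.
    weights = [conf ** (1.0 / max(temperature, 1e-6)) for conf in candidates.values()]
    return random.choices(list(candidates.keys()), weights=weights, k=1)[0]

# Example: three downstream agents scored by an upstream validator.
print(route_by_confidence({"fact_checker": 0.82, "web_search": 0.55, "fallback": 0.10}))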


What Works in OrKa 0.8.0

  • ✅ Fork/Join agent execution with dynamic paths
  • ✅ Confidence scores per agent + agreement synthesis (conceptual sketch below)
  • ✅ Redis- and Kafka-compatible logging
  • ✅ Visual OrKa UI with YAML sync
  • ✅ Full local + API runtime support
  • ✅ ServiceNodes (RAG, MemoryWriter, Embedding Fetchers)
  • ✅ Benchmarked: 1,000 loops, 7.6 s average latency, 0.00011¢ per run on DeepSeek 32B
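
To unpack the fork/join and agreement-synthesis bullets, the conceptual model is: run several agents on the same input in parallel, then merge their verdicts weighted by confidence. The sketch below is a rough illustration under those assumptions, not OrKa’s actual executor (which also records a trace per agent); the agents here are stand-in callables.

from concurrent.futures import ThreadPoolExecutor

def fork_join(agents, payload):
    """Run each agent on the same payload in parallel (fork), then
    synthesize one verdict from their answers (join). Illustrative only."""
    with ThreadPoolExecutor(max_workers=len(agents)) as pool:
        results = list(pool.map(lambda agent: agent(payload), agents))
    # Each agent returns (answer: bool, confidence: float).
    # Agreement synthesis: sum confidence per side and keep the heavier one.
    score = sum(conf if answer else -conf for answer, conf in results)
    return score > 0, abs(score) / sum(conf for _, conf in results)

# Stand-in agents: any callable returning (bool, confidence).
agents = [
    lambda text: (False, 0.9),  # e.g. an LLM validator
    lambda text: (False, 0.7),  # e.g. a retrieval-backed checker
    lambda text: (True, 0.2),   # a weak dissenting agent
]
print(fork_join(agents, "The moon is made of cheese"))  # (False, ~0.78)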

But Here’s What Still Sucks

This is real open source. Here’s where I’m drowning:

  • 🧠 Memory Scope: working, but v0.8.0 needs a simpler Redis GET/SET fallback
  • 💀 Code Bloat: the Orchestrator class is still too big. It needs clean modular separation.
  • 🧩 UI Gaps: memory nodes are stubbed, not visual. Trace replay only half-works.
  • 📄 Docs: the README is outdated, the guides are scattered, and /examples is growing but still thin.
  • 📣 Awareness: no Reddit, no X. ~200 GitHub stars, ~10 PyPI installs/day.
  • 🧍‍♂️ Solo Dev Hell: I’m writing infra, fixing YAML bugs, fielding Discord questions, and raising 3 kids.

This project won't survive on vibes.


What You Can Do (Today)

🧠 Build SimpleMemory

  • File: /src/memory/simple.py
  • Use Redis SET/GET, drop Kafka overhead
  • Improve fallback logic for small LLMs (rough sketch below)
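
Something like the sketch below could be a starting point. It assumes redis-py and a local Redis instance; the scoped keys and TTL-based decay mirror the memory model described above, but the exact interface OrKa’s memory layer expects may differ, so treat the class and method names as provisional.

import redis  # assumes redis-py and a Redis instance on localhost

class SimpleMemory:
    """Minimal Redis GET/SET fallback: scoped keys plus TTL-based decay.
    A provisional sketch of /src/memory/simple.py, not a final interface."""

    def __init__(self, namespace: str = "orka", url: str = "redis://localhost:6379/0"):
        self.namespace = namespace
        self.client = redis.Redis.from_url(url, decode_responses=True)

    def _key(self, scope: str, key: str) -> str:
        return f"{self.namespace}:{scope}:{key}"

    def write(self, scope: str, key: str, value: str, ttl_seconds: int | None = None) -> None:
        # `ex` gives cheap decay: the entry disappears when the TTL lapses.
        self.client.set(self._key(scope, key), value, ex=ttl_seconds)

    def read(self, scope: str, key: str) -> str | None:
        return self.client.get(self._key(scope, key))

# Usage: per-agent scoped storage with a one-hour decay window.
memory = SimpleMemory()
memory.write("validator", "last_verdict", "false", ttl_seconds=3600)
print(memory.read("validator", "last_verdict"))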

šŸ” Clean the Codebase

  • Break /src/orchestrator.py into smaller modules
  • Add ruff and mypy, and enforce stricter linting
  • Improve logging separation between agents and the orchestrator (see the sketch below)
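
For the logging item, one possible direction using only the standard library is namespaced loggers, so agent output can be filtered or silenced separately from orchestrator output. The logger names below are illustrative, not what the codebase uses today.

import logging

# Separate namespaces ("orka.orchestrator" vs. "orka.agent.<id>") so handlers
# and levels can be tuned per layer. Names here are illustrative.
logging.basicConfig(format="%(name)s | %(levelname)s | %(message)s", level=logging.INFO)

orchestrator_log = logging.getLogger("orka.orchestrator")

def agent_logger(agent_id: str) -> logging.Logger:
    return logging.getLogger(f"orka.agent.{agent_id}")

orchestrator_log.info("dispatching step 1 -> validator")
agent_logger("validator").info("verdict=false confidence=0.82")

# Quiet chatty agents without touching orchestrator logs:
logging.getLogger("orka.agent").setLevel(logging.WARNING)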

🧱 Contribute Nodes or Agents

  • Add a PlannerAgent
  • Extend RAGNode with Pinecone/Chroma support (Chroma sketch below)
  • Add MemoryVisualizerNode to the UI
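
For the Chroma half of the RAGNode item, the retrieval core is small. Here is a hedged sketch using the chromadb client that a node could wrap; the collection name and documents are made up, and it relies on Chroma’s default embedding function for brevity.

import chromadb  # assumes the chromadb package is installed

def retrieve(query: str, n_results: int = 2) -> list[str]:
    """Tiny retrieval helper that a Chroma-backed RAGNode could wrap."""
    client = chromadb.Client()  # in-memory; swap in a persistent client for real use
    collection = client.get_or_create_collection("orka_rag_demo")
    collection.add(
        ids=["doc-1", "doc-2"],
        documents=[
            "The Moon is composed mostly of silicate rock.",
            "Cheese is a dairy product made from milk.",
        ],
    )
    hits = collection.query(query_texts=[query], n_results=n_results)
    return hits["documents"][0]  # documents matching the first (and only) query

print(retrieve("What is the moon made of?"))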

āœļø Write Real Docs

  • Add /docs/architecture.md
  • Polish /examples/fact_checker.yaml
  • Write /quickstart.md for non-engineers

🛰 Spread It

  • Post your workflow on X, Reddit, Discord
  • Share a custom YAML orchestration
  • Fork it and use it in a hackathon

Why This Matters

AI shouldn’t be a black box. If it can’t be traced, it shouldn’t be trusted.

OrKa is an attempt to build cognitive systems that are:

  • Transparent
  • Local-first
  • Composable
  • Deterministic

It’s not a LangChain clone. It’s not a chatbot wrapper. It’s a runtime for modular cognition. You define the graph; OrKa executes it, traces it, and explains it.

But I can’t do this alone.


The Ask

Don’t just star the repo, run it:

pip install orka-reasoning

Define a minimal flow (fact_checker.yaml):

orchestrator:
  id: fact_checker
  strategy: sequential
agents:
  - id: validator
    type: binary
    prompt: Is this statement factual?

Then run it:

from orka import Orchestrator

orc = Orchestrator("fact_checker.yaml")  # load the YAML-defined flow
orc.run({"input": "The moon is made of cheese"})  # execute it and log the trace

Want to fix docs? Build a node? Just hang out?

I’m in the server. Ping me. Fork the repo.

Let’s build the cognitive runtime we all wish existed.


Repo: https://github.com/marcosomma/orka-reasoning

Top comments

Ashley Childress

That’s a big step — taking something you’ve probably been thinking about for weeks before touching code and then tossing it out into the world? I’m not there yet with mine, but I’m cheering you on! 🎉

I’d love to help, but I’ve got a personal project on a deadline (and I’m about two weeks behind writing any actual code for it 🤣). In the meantime, I’ve got some nifty Copilot architecture instructions if you want high-level docs. I can fork the repo and run them through my account — it’s automated, so you’d need to review, but they’re usually solid for the kind of high-level magic they’re meant to be.

If you’re using Copilot (or any other AI), I can send you the link to all my random things — some brand new, some stuff I’ve been using for a while. There’s a chat mode that helps write out instructions (works best with insider knowledge, otherwise I’d grab those for you too). That one and my logging mode are barely tested, but hey — nothing’s exploded yet. 🤣

I did set a reminder to check back in a couple months. If you’re still looking for help then, I’ve been itching for an excuse to dust off my circa-2.7 Python skills anyway. 🐍✨ Until then, keep us posted — hopefully you’ve got a small army of volunteers by tomorrow and can just watch the magic happen. 🫶

Mak Sò

@anchildress1 just merged your docs!
THANKS!!!!! 🫶

Mariana

What's up, Marcos? I understood very little of what you said hahahah but I'm willing to help with docs or something very beginner-like.
