Engineering Cognitive Iteration in OrKa 0.7.0
I Don't Want Consensus. I Want Conflict That Evolves
What if intelligence doesn’t live in answers, but in the iteration between disagreements?
That’s the premise. Not optimization. Not output. Not even explainability. The real goal is to observe the serial evolution of internal beliefs inside an artificial society. Not artificial intelligence: artificial deliberation.
And for that, OrKa 0.7.0 is finally mature enough.
In this piece, I document the construction of an experiment where agents don't just act: they loop, they resist, they adapt, and eventually agree, but only if they choose to. It’s not consensus by design. It’s convergence through structured chaos.
I. Conceptual Core: Loop-Driven, Memory-Aware Agent Societies
Instead of stateless workflows, I want:
- Stateful agents with delayed memory
- A moderator that observes, suggests, never forces
- A traceable loop of cognitive revision
- Volitional stance shifts (or resistance)
Each iteration adds friction.
Each loop adds memory.
And eventually, a path of thought emerges.
This isn’t a flowchart. It’s synthetic introspection.
II. Agent Design: Roles Built for Friction
Agents:
- logic_agent: formal logic, rule-based, fact-first
- empathy_agent: moral-based reasoning, focuses on harm or intent
- skeptic_agent: default contrarian, refuses coherence if uncertainty exists
- historian_agent: memory fetcher, returns past outputs from all agents with delay
- moderator_agent: calculates agreement, synthesizes views, and proposes a convergence path
Behavior Attributes:
- Memory: per-agent episodic + shared short-term
- Latency: historian reads delayed memory only (e.g. from loop N-2)
- Autonomy: agents never required to agree
Each agent receives its own scoped prompt structure and selectively filtered memory.
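The scoping itself is the interesting part: the historian's delay and each role's memory filter decide what an agent is even allowed to remember. Below is a minimal sketch of that idea in Python. None of these names come from OrKa's API; MemoryRecord, AgentScope, and visible_memory are purely illustrative.

```python
# Illustrative sketch, not OrKa API: per-agent memory scoping with delay.
from dataclasses import dataclass

@dataclass
class MemoryRecord:
    agent_id: str
    loop: int
    stance: str

@dataclass
class AgentScope:
    agent_id: str
    memory_delay: int = 0  # historian_agent reads with a delay (e.g. 2 loops)

    def visible_memory(self, records: list[MemoryRecord], current_loop: int) -> list[MemoryRecord]:
        """Return only the records this agent is allowed to see this round."""
        cutoff = current_loop - self.memory_delay
        return [r for r in records if r.loop <= cutoff]

# Example: in loop 5 the historian only sees outputs up to loop 3 (N-2).
historian = AgentScope(agent_id="historian_agent", memory_delay=2)
```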
III. Loop Mechanics: Iteration Until (Optional) Agreement
The orchestration architecture isn’t linear. It loops until one of two things happens:
- agreement_score >= threshold (e.g. 0.85)
- max loop iterations hit (e.g. 7 rounds)
At each loop iteration:
- Input is sent to agents
- Agents write outputs → memory
- Moderator reviews all, calculates disagreement vector
- Moderator synthesizes and emits a suggestion
- Each agent receives the suggestion and decides to:
  - Ignore it
  - Modify their view
  - Accept it
- New round begins with memory augmented by last round
This creates a serialized cognitive dialogue.
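Because native loop orchestration doesn't exist yet in 0.7.0 (more on that in Section VII), the loop lives in an external runner. Here is a rough sketch of that control flow; run_round, score_agreement, and synthesize are hypothetical callables standing in for the orchestrator pass, the moderator's scorer, and its suggestion builder.

```python
from typing import Callable

MAX_LOOPS = 7        # e.g. 7 rounds
THRESHOLD = 0.85     # e.g. agreement_score >= 0.85

def run_deliberation(
    question: str,
    run_round: Callable,        # one orchestrator pass: all agents respond
    score_agreement: Callable,  # the moderator's agreement scorer
    synthesize: Callable,       # builds the moderator_suggestion
) -> dict:
    memory: list[dict] = []     # shared memory, augmented every round
    suggestion = None           # moderator proposal from the previous round
    score, loop = 0.0, 0
    for loop in range(1, MAX_LOOPS + 1):
        outputs = run_round(question, memory, suggestion, loop)  # agents write outputs
        memory.extend(outputs)                                   # outputs -> memory
        score = score_agreement(outputs)                         # compare stances
        if score >= THRESHOLD:                                   # convergence, if agents chose it
            break
        suggestion = synthesize(outputs, score)                  # next round's framing
    return {"loops": loop, "agreement_score": score, "memory": memory}
```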
IV. Key Constructs
📦 agreement_score
Generated by comparing semantic and structural similarity across agent outputs.
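OrKa doesn't prescribe how that comparison works; in my setup it lives inside the moderator. A minimal stand-in, using plain token-count cosine similarity where a real embedding model would go:

```python
# Sketch: mean pairwise similarity across agent stances.
# _cosine is a cheap stand-in; swap in sentence embeddings for real use.
import math
from collections import Counter
from itertools import combinations

def _cosine(a: str, b: str) -> float:
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    norm = math.sqrt(sum(v * v for v in va.values())) * math.sqrt(sum(v * v for v in vb.values()))
    return dot / norm if norm else 0.0

def agreement_score(stances: list[str]) -> float:
    pairs = list(combinations(stances, 2))
    return sum(_cosine(a, b) for a, b in pairs) / len(pairs) if pairs else 1.0
```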
🧭 moderator_suggestion
A synthetic framing that describes the consensus space:
"Two agents favor path A. Skeptic resists due to risk. Consensus leans toward caution."
🗣️ Agent Prompt Structure:
prompt: |
Your last stance: {{ memory.self_last }}
Moderator proposal: {{ memory.moderator_last }}
Do you adapt your view? Why or why not?
🧠 Memory Format:
{
"agent_id": "logic_agent",
"loop": 3,
"stance": "It's logically sound to proceed.",
"accept_moderator": false,
"reasons": "The suggestion sacrifices logical integrity."
}
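Since the backend is Redis, each record can be written with a loop-level tag so the historian can fetch a whole round at once. The key scheme below is my own convention, not part of OrKa; it assumes the redis-py client and a local Redis instance.

```python
import json
import redis  # assumes redis-py and a local Redis instance

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def write_memory(record: dict) -> None:
    """Store one stance, keyed by agent and loop, plus a per-loop index."""
    key = f"deliberation:{record['agent_id']}:loop:{record['loop']}"
    r.set(key, json.dumps(record))
    r.sadd(f"deliberation:loop:{record['loop']}", key)  # loop-level tag

def read_loop(loop: int) -> list[dict]:
    """Fetch every agent's record for a given loop (e.g. loop N-2 for the historian)."""
    return [json.loads(r.get(k)) for k in r.smembers(f"deliberation:loop:{loop}")]
```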
V. What Emerges From Loops
This loop isn’t wasteful. It’s generative.
Over time, memory accretes and agents:
- Reference their own past
- Reject framing and escalate contradiction
- Or evolve their views subtly over time
Moderation isn't a convergence engine. It’s a suggestion artist.
The key? Autonomy.
You’ll start seeing:
- Divergent clusters that collapse
- Rebellious agents that hold position until others shift
- Shifts in tone, not just content
Eventually, a story appears. A thoughtline.
VI. Empirical Observations
The next step is documenting the actual belief trajectories that emerge:
- Resistant clusters that eventually collapse
- Agents that maintain position until others shift
- Emergence of unexpected consensus points
In other words, the "thoughtlines" themselves. This approach leverages OrKa's maturity while pioneering artificial deliberation: in effect, a belief state machine with memory persistence, which is exactly what AGI research needs more of.
VII. How I’m Building It in OrKa 0.7.0
✅ Already Available:
- fork_group execution
- MemoryNode + MemoryWriterNode
- Redis memory backend
- Manual loop controller outside orchestrator
🛠️ What I Added:
- Per-agent loop counter
- Loop-level memory tagging
- External loop runner to manage orchestration restart
- Agreement scorer as a ModeratorAgent
What’s Missing (for now):
- Native loop orchestration (planned for v0.8.0)
- Dynamic prompt diffing (manual patch)
- Agent trust dynamics (future feature)
But even with this setup, I can already observe belief evolution across time.
Explore → https://github.com/marcosomma/orka-reasoning