DEV Community

Jason (AKA SEM)

Originally published at Medium

The Three-Axes Hypothesis Is Incomplete. Here’s What’s Actually Missing From the Agent Wars.

Everyone’s mapping agents on infrastructure. The real fight is memory, cognition, and governance.

Frontier Operations Series — Jason Brashear

https://jasonbrashear.substack.com

Every major AI agent launch in 2026 gets the same treatment. Horse-race coverage. Security panic. "Is this the next OpenClaw?" Rinse and repeat.

Nate Herkens cut through that noise recently with something genuinely useful: a three-axis framework for evaluating any agent product. Where does it run. Who orchestrates the intelligence. What’s the interface contract. Three questions. Apply them to any launch. Get a clear answer on whether it matters to you.

It’s the best framework anyone has published on the agent wars so far. And it’s incomplete.

Not wrong. Incomplete. In a way that hides the most important plays being made right now.

The Three Axes: A Quick Recap

Nate’s framework maps five major players across three dimensions:

Axis 1 — Where does your agent run? Local, cloud, or hybrid. This determines data privacy posture, security surface area, and who’s responsible when the agent deletes your inbox.

Axis 2 — Who orchestrates the intelligence? Single model, multi-model with a routing harness, or model-agnostic plug-your-own. This determines cost, quality ceiling, and vendor lock-in.

Axis 3 — What’s the interface contract? Messaging app you already use, dedicated desktop app, phone, or something custom. This determines whether you’ll actually use the thing.

Apply these three questions and the landscape sorts itself fast:

  • OpenClaw occupies the top-right corner. Maximum control, maximum complexity, maximum risk. Sovereignty play. 250,000 GitHub stars. For developers who want to wire everything themselves.
  • Perplexity Computer sits bottom-left. Minimum complexity, minimum control. Delegation play at $200/month. Describe the outcome and walk away.
  • Manus (Meta) lands in the middle. Distribution play. Capture eyeball-hours inside the Meta ecosystem. Consumer-scale, trust-Zuck-with-your-data pricing.
  • Anthropic Dispatch is the safety play. Single-threaded Claude from your phone to your desktop. Assumes you’re a Claude superfan. Low complexity, moderate control.
  • Lovable is the pivot play. $300M ARR vibe-coding tool now expanding into general-purpose agent execution. Low complexity, high user control within its domain.

This is genuinely clarifying. If all you need is to pick one of these five products, the framework works.

But if you’re building, if you’re an operator trying to understand where the category is actually going, three axes aren’t enough.

What the Framework Misses

The three-axis model treats agents as stateless tools. Where they run. What model they use. How you talk to them. These are infrastructure questions. Important ones. But they assume something that isn’t true: that all agents are equally disposable between sessions.

They’re not. And the products that understand this are making bets the three-axis framework can’t even see.

Here’s what’s missing.

Axis 4: Memory and Continuity

Does your agent know who you are?

Not “can it read your prompt history.” Does it maintain persistent episodic memory across every interaction? Does it track entities — people, projects, decisions — and build a model of your world that compounds over time?

OpenClaw doesn’t remember your last session. Perplexity Computer doesn’t build a long-term model of you. Manus resets. Dispatch is single-threaded. None of these products have memory as a first-class architectural primitive.

This matters because the value of an agent that knows you for six months is categorically different from an agent you have to re-brief every session. It’s the difference between a new contractor and a chief of staff. Same interface. Same models. Completely different utility.

Memory isn’t a feature. It’s an axis. And it’s the axis that determines whether your agent gets more valuable over time or stays flat.
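To make "memory as a first-class primitive" concrete, here is a minimal sketch of a persistent episodic store with entity tracking. The class name, schema, and method names are hypothetical illustrations, not ArgentOS's (or anyone's) actual API:

```python
import sqlite3

# Hypothetical sketch: a persistent episodic memory store keyed by entity.
# Schema and names are illustrative only.
class EpisodicMemory:
    def __init__(self, path=":memory:"):  # a real deployment would use a file
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS episodes ("
            " id INTEGER PRIMARY KEY,"
            " entity TEXT,"       # a person, project, or decision
            " observation TEXT,"  # what the agent learned
            " ts DATETIME DEFAULT CURRENT_TIMESTAMP)"
        )

    def remember(self, entity, observation):
        self.db.execute(
            "INSERT INTO episodes (entity, observation) VALUES (?, ?)",
            (entity, observation),
        )
        self.db.commit()

    def recall(self, entity):
        # Everything ever learned about an entity, in arrival order.
        rows = self.db.execute(
            "SELECT observation FROM episodes WHERE entity = ? ORDER BY id",
            (entity,),
        )
        return [r[0] for r in rows]

mem = EpisodicMemory()
mem.remember("project:launch", "core ships as open source this week")
mem.remember("project:launch", "governance is the paid tier")
print(mem.recall("project:launch"))
```

The point of the sketch is the asymmetry: `remember` is cheap on every interaction, while `recall` compounds, because each new session starts with everything the last one learned.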

Axis 5: Autonomous Cognition

Does your agent think when you’re not talking to it?

Every product Nate profiled is reactive. You prompt, it responds. You describe an outcome, it decomposes and executes. The agent does nothing when idle. It has no curiosity. It doesn’t reflect on what it’s learned. It doesn’t generate its own hypotheses about what you might need next.

This is a fundamental architectural assumption that almost nobody is questioning: that agents should be event-driven, not continuously cognitive.

But what if your agent ran a contemplation loop? What if it revisited its own performance, extracted patterns, consolidated lessons, and surfaced insights you didn’t ask for? What if it had a self-improving system that got measurably better at serving you without requiring you to do anything?
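A contemplation loop can be sketched in a few lines. This is a hypothetical illustration of the shape of the idea, not any product's implementation; the idle threshold, event format, and "reflection" heuristic are all made up for the example:

```python
import time

# Hypothetical sketch of a contemplation loop: the agent records episodes
# while active, and a scheduler periodically asks it to reflect when idle.
class ContemplativeAgent:
    def __init__(self, idle_threshold=1800):  # e.g. reflect after 30 idle minutes
        self.idle_threshold = idle_threshold
        self.last_activity = time.monotonic()
        self.episodes = []
        self.lessons = []

    def handle(self, event):
        # Normal reactive path: every interaction is also an episode.
        self.last_activity = time.monotonic()
        self.episodes.append(event)

    def contemplate_if_idle(self):
        # Called on a timer. Does nothing unless the agent has been idle.
        if time.monotonic() - self.last_activity < self.idle_threshold:
            return None
        # Toy "reflection": episodes that recur become consolidated lessons.
        counts = {}
        for e in self.episodes:
            counts[e] = counts.get(e, 0) + 1
        lesson = [e for e, n in counts.items() if n > 1]
        self.lessons.extend(lesson)
        return lesson

agent = ContemplativeAgent(idle_threshold=0)  # zero only for the demo
agent.handle("asked for weekly report")
agent.handle("asked for weekly report")
agent.handle("one-off question")
print(agent.contemplate_if_idle())  # only the repeated episode becomes a lesson
```

The architectural difference from a purely reactive agent is that `contemplate_if_idle` runs on the agent's own clock, not on a user prompt.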

That’s not a feature on top of a reactive agent. It’s a different kind of agent entirely. And it opens a design space that none of the current players are operating in.

Axis 6: Governance as a Product Layer

Nate frames security as a problem. OpenClaw has 30,000 exposed instances. The skills registry got hit with a supply chain attack. Researchers are worried. Fair enough.

But the response to “security is a problem” shouldn’t be “trust us, we’ll handle it” (Perplexity) or “good luck, here are the Lego bricks” (OpenClaw). There’s a third option: make governance visible, auditable, and configurable as a first-class product layer.

What does that look like? Intent governance with hierarchical policies — global rules, department rules, agent-level rules — that inherit monotonically so nothing slips through. Execution approvals that gate autonomous operations. Heartbeat contracts with periodic scoring so you know your agent is actually doing what it promised. Knowledge ACLs so your data doesn’t leak between contexts.
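"Inherit monotonically" has a precise meaning: a child scope can tighten its parent's policy but never loosen it. Here is a minimal sketch of that property, with made-up scope and action names (this is an illustration of the concept, not ArgentOS's policy engine):

```python
# Sketch of monotonic policy inheritance: restrictions accumulate down the
# hierarchy (global -> department -> agent) and can never be removed below.
class PolicyScope:
    def __init__(self, name, denied=(), parent=None):
        self.name = name
        self.parent = parent
        self.denied = frozenset(denied)

    def effective_denied(self):
        # Union of denials up the chain: a child inherits every parent denial.
        inherited = self.parent.effective_denied() if self.parent else frozenset()
        return inherited | self.denied

    def allows(self, action):
        return action not in self.effective_denied()

root = PolicyScope("global", denied={"delete_mailbox"})
dept = PolicyScope("finance", denied={"wire_transfer"}, parent=root)
worker = PolicyScope("ap-agent", parent=dept)

print(worker.allows("send_report"))     # True
print(worker.allows("wire_transfer"))   # False: inherited from finance
print(worker.allows("delete_mailbox"))  # False: inherited from global
```

Because `effective_denied` is a union, there is no way to write an agent-level rule that re-permits something a higher tier denied, which is exactly the "nothing slips through" guarantee.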

This isn’t patching security holes. This is building a trust architecture. And it directly answers the question Nate himself posed as the defining question of 2026: how do we delegate agentic trust?

You delegate it through visibility. Through auditable intent. Through governance that the operator can inspect without needing to read source code.

The Real Graph Is Six Dimensions

If you map the agent landscape across all six axes — runtime, orchestration, interface, memory, cognition, and governance — the picture changes dramatically.

The five players Nate profiled are all competing on the first three axes. They’re fighting over infrastructure positioning. That fight is real and it matters.

But the next wave of agent products won’t win on infrastructure. They’ll win on the axes that make agents feel like persistent, trustworthy, self-improving collaborators rather than disposable tools you re-brief every morning.

Memory is the axis that creates compounding value. Cognition is the axis that creates proactive utility. Governance is the axis that creates institutional trust.

Infrastructure is table stakes. These three are the moat.

Where ArgentOS Fits

I’ll be direct: I’m building in this space. ArgentOS is an intent-native AI operating system, and we’re about to open-source the core.

On Nate’s original three axes, ArgentOS sits in the sovereignty quadrant — self-hosted, your hardware, no cloud required. But it’s not a framework. It’s a working operating system with 18 specialized agents, smart model routing across 15+ providers with complexity scoring and cross-provider fallback, and seven messaging channels out of the box.
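To show what "complexity scoring and cross-provider fallback" means in practice, here is a toy sketch. The scoring heuristic, provider names, and thresholds are all invented for illustration and bear no relation to the actual routing logic:

```python
# Illustrative sketch of complexity-scored routing with cross-provider
# fallback. Every name and number here is made up for the example.
def complexity_score(prompt):
    # Crude heuristic: long prompts and planning keywords score higher.
    score = min(len(prompt) / 500, 1.0)
    if any(k in prompt.lower() for k in ("prove", "plan", "architect")):
        score = max(score, 0.8)
    return score

ROUTES = [
    # (minimum complexity, ordered fallback chain of hypothetical providers)
    (0.8, ["frontier-a", "frontier-b"]),
    (0.4, ["mid-a", "mid-b"]),
    (0.0, ["cheap-a"]),
]

def route(prompt, call_provider):
    score = complexity_score(prompt)
    for threshold, chain in ROUTES:
        if score >= threshold:
            for provider in chain:  # cross-provider fallback within a tier
                try:
                    return call_provider(provider, prompt)
                except RuntimeError:
                    continue  # provider down: try the next in the chain
            raise RuntimeError("all providers in chain failed")

def fake_call(provider, prompt):
    if provider == "frontier-a":
        raise RuntimeError("provider down")  # simulate an outage
    return f"{provider} handled: {prompt}"

print(route("Plan the Q3 migration", fake_call))
```

The design choice worth noticing is that cost control (which tier) and reliability (fallback within a tier) are separate decisions, so a provider outage never silently downgrades quality.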

On the three axes Nate didn’t cover, ArgentOS is making explicit bets:

Memory: 12,500+ lines of persistent memory architecture. Hybrid SQLite FTS5 and pgvector search. Episodic memory with entity tracking, embeddings, and auto-capture. Your agent builds a compounding model of your world across every interaction.
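For readers unfamiliar with hybrid search, here is the lexical half of the idea sketched with SQLite FTS5 (the half a vector index such as pgvector would complement with semantic similarity). The schema and data are illustrative, and the snippet assumes your SQLite build includes FTS5, as standard CPython builds do:

```python
import sqlite3

# Sketch of the keyword half of a hybrid memory search: FTS5 handles exact
# lexical recall; a vector index would add semantic matches alongside it.
db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE mem USING fts5(entity, observation)")
db.executemany(
    "INSERT INTO mem VALUES (?, ?)",
    [
        ("project:core", "open-source launch scheduled this week"),
        ("person:nate", "published the three-axis agent framework"),
    ],
)
hits = db.execute(
    "SELECT entity FROM mem WHERE mem MATCH ? ORDER BY rank",
    ("framework",),
).fetchall()
print(hits)  # [('person:nate',)]
```

The reason to run both halves is that lexical search nails names and exact phrases while vector search catches paraphrases; a hybrid merge gets recall from each.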

Cognition: A consciousness kernel with autonomous curiosity threads, a contemplation loop that runs every 30 minutes when idle, and a self-improving system (SIS) that extracts lessons from episodes, consolidates patterns, and evaluates its own performance.

Governance: Three-tier hierarchical intent governance with monotonic policy inheritance, execution approvals and command gating, heartbeat contracts with periodic scoring, and knowledge ACLs with collection-level permissions.

The open-source core includes memory, cognition, model routing, multi-channel messaging, and 50+ agent tools. The business tier adds intent governance, execution approvals, autonomous worker capabilities, SpecForge project management, knowledge ACLs, and accountability scoring.

That’s the open-core model. Community gets the architectural foundation. Operators and SMBs who need governance and visibility pay for the trust layer.

Two Audiences Nobody Is Serving

Here’s the gap in the market that Nate’s framework reveals once you extend it:

The operator who wants a personal AI. Not a chatbot. Not a tool they re-prompt every day. A persistent, self-improving agent that runs on their hardware, knows their world, thinks autonomously, and gets better over time. OpenClaw gives them Lego bricks. Perplexity gives them a black box. Neither gives them a working personal AI operating system.

The SMB that needs real AI automation. Not enterprise-priced platforms they can’t afford. Not raw open-source frameworks they don’t have engineering teams to deploy. A governance-first agent system with intent routing, approval workflows, and accountability that a business owner can understand and trust. This audience literally does not have a product right now.

These two audiences will define the next phase of the agent wars because they represent the vast middle of the market that none of the current players are designed to serve.

The Category Is Expanding Whether You’re Ready or Not

Nate is right that 2026 is the year of agentic trust delegation. He’s right that the products surviving compression will either go deep enough to be irreplaceable or go broad enough to become default delegation layers. He’s right that the middle is where you go to die.

Where I’d push further: the category itself is expanding beyond the three dimensions everyone is competing on. Runtime, orchestration, and interface are infrastructure problems. They’ll commoditize. Some of them already have.

The axes that create durable differentiation are memory, cognition, and governance. Products that treat these as first-class architectural primitives — not features bolted on after the fact — are making a fundamentally different bet on what agents become.

Agents that remember. Agents that think. Agents you can trust because you can see exactly what they’re doing and why.

That’s not a me-too. That’s a category expansion.

The core goes open source this week. Come build on it.

GitHub: github.com/ArgentAIOS/core
Site: argentos.ai
Discord: discord.gg/argentos

Jason Brashear is the creator of ArgentOS and a partner at Titanium Computing. He’s been building software since 1994 and has spent the last two years building multi-agent AI systems. This is part of the Frontier Operations Series on intent engineering, organizational memory, and convergent agent architecture.
