DEV Community

Mike Vardy


Cognitarism: The Means of Production are Thinking Without You

Economists have a new word for the post-AI economy: Cognitarism.

The paper — Axel Marsford and Leonardo Shell, July 2025 — proposes a socio-economic system where artificial cognition replaces human labour as the primary source of value. That sounds abstract until you realise it's just a formal name for what's already happening in your infrastructure budget.

Here's what it gets right, where it loses engineers, and what actually matters.


The feudalism parallel is the best thing in the paper

Marsford situates cognitarism in a lineage: feudalism → mercantilism → capitalism → platform capitalism → cognitarism.

The through-line is who controls the means of production. Under feudalism it was land. Under capitalism, factories. Under platform capitalism, data and distribution. Under cognitarism, the paper argues, it's synthetic cognition itself — the models, the weights, the GPU clusters, the inference pipelines.

That framing maps cleanly to reality. The paper puts it directly: "Ownership of code and data thus becomes the locus of surplus appropriation." Swap "surplus appropriation" for "cloud bill" and any senior engineer would nod.

What the paper doesn't quite say out loud, but implies throughout, is that this is an infrastructure story more than an intelligence one. The bottleneck isn't ideas. It's compute, energy, and proprietary data. AI GPU racks consume up to six times the power of conventional hardware. The buildout is concentrated — over 90% of projected global compute capacity sits in North America, Western Europe, and Asia-Pacific. Q1 2026 was the quarter the buildout became energy-constrained, with gigawatt-scale campuses now a normal unit of planning.

That's not an AGI story. That's industrial consolidation, same as it ever was. Infrastructure always consolidates. That's the paper's strongest insight, even when it buries it under academic hedging.


Where it loses engineers

The paper talks about "Autonomous Cognitive Production" as if current AI systems are already reliable economic actors.

They aren't. Anyone building production AI today knows what that actually looks like: context windows degrading mid-session, agents looping on broken tool calls, hallucinations compounding across reasoning chains, structured output failing unpredictably, models inventing APIs that do not exist. The current state of AI engineering is less "autonomous cognition" and more senior developers writing elaborate guardrails around stochastic parrots, then writing the incident reports when those guardrails fail anyway.

The paper occasionally reads like AGI is sitting in staging waiting for deployment. Meanwhile developers are still debugging why an agent recursively called the same broken tool until the token budget caught fire.
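The reliability layer is mundane in practice. Here's a minimal sketch of the kind of guardrail involved: a duplicate-call cap plus a token budget. All names are hypothetical, not any real framework's API.

```python
# Hypothetical guardrail: stop an agent that keeps issuing the same
# failing tool call, and stop it when the token budget runs out.

class BudgetExceeded(Exception):
    pass

class ToolLoopDetected(Exception):
    pass

class AgentGuard:
    def __init__(self, token_budget: int, max_repeats: int = 3):
        self.tokens_left = token_budget
        self.max_repeats = max_repeats
        self.call_counts: dict[tuple, int] = {}

    def check_tool_call(self, tool: str, args: tuple) -> None:
        """Raise if the same (tool, args) pair repeats too many times."""
        key = (tool, args)
        self.call_counts[key] = self.call_counts.get(key, 0) + 1
        if self.call_counts[key] > self.max_repeats:
            raise ToolLoopDetected(f"{tool}{args} repeated {self.call_counts[key]} times")

    def spend(self, tokens: int) -> None:
        """Deduct tokens; raise once the budget goes negative."""
        self.tokens_left -= tokens
        if self.tokens_left < 0:
            raise BudgetExceeded("token budget exhausted")

# Usage: wrap every tool invocation the model requests.
guard = AgentGuard(token_budget=10_000)
guard.spend(2_500)                       # model output consumed 2,500 tokens
guard.check_tool_call("search", ("q",))  # fine the first few times
```

None of this is clever. That's the point: the "autonomy" in today's systems is bounded by exactly this kind of hand-written supervision.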

It's not wrong long-term. But it massively understates the engineering gap between "useful autocomplete" and "fully autonomous economic production." Humans are still the reliability layer, and that matters more than the paper acknowledges.


The open-source war is the actual paradigm shift

Here's what the economic framing almost obscures.

The future Marsford describes doesn't hinge on whether AI becomes conscious. It hinges on who owns the weights.

Closed ecosystems — OpenAI, Anthropic, Google — are one future: AI as a metered utility, per-token pricing, black-box models, and a rewrite cost every time you want to switch providers. Open-weight ecosystems — Llama 4, Mistral, DeepSeek, Qwen — are another: download the weights, run inference locally, own your stack.

The cost delta isn't theoretical. At 50,000 daily requests, a closed API like OpenAI's GPT-4o runs roughly $2,250/month; a local machine running open-weight models costs only electricity, and local inference on consumer hardware now delivers 70–85% of frontier-model quality at zero marginal cost per request. The Llama ecosystem crossed 52 million monthly Ollama downloads. HuggingFace hosts 135,000 GGUF-formatted models. Those numbers describe an industry shift, not a hobby project. Open-weight models in 2026 are production-viable for the majority of real workloads.
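The monthly figure is easy to sanity-check. In the back-of-envelope sketch below, the token counts and per-million-token prices are illustrative assumptions, not any provider's published rates.

```python
def monthly_api_cost(requests_per_day: int,
                     in_tokens: int, out_tokens: int,
                     in_price_per_m: float, out_price_per_m: float,
                     days: int = 30) -> float:
    """Monthly spend for a metered per-token API."""
    per_request = (in_tokens * in_price_per_m +
                   out_tokens * out_price_per_m) / 1_000_000
    return requests_per_day * days * per_request

# 50,000 requests/day, ~200 input + 100 output tokens each, at assumed
# $2.50 / $10.00 per million input/output tokens:
print(round(monthly_api_cost(50_000, 200, 100, 2.50, 10.00), 2))  # → 2250.0
```

The exact per-token rates matter less than the structure: the closed-API bill scales linearly with traffic forever, while local inference amortises hardware once and then scales with your power bill.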

If the cognitarist future consolidates under five closed providers, developers don't get empowered by AI. They become tenants on someone else's cognition platform, paying a per-token tax on their own workflows.

History is not encouraging here. The internet decentralised publishing and centralised distribution. Cloud computing decentralised deployment and centralised infrastructure. AI may decentralise capability while centralising cognition itself.

Open weights are the only meaningful structural counterweight to that. Not because open source magically fixes capitalism — but because access to the weights is access to the productive layer. That's a political fact dressed up as a technical preference.


The crypto section is fan fiction

The paper spends real time on smart contracts, cognitive tokens, and DAO governance as coordination mechanisms for a cognitarist economy. Chaining probabilistic, hallucinating agents to deterministic financial systems with immutable execution isn't post-capitalism — it's a P0 incident waiting to happen at civilisation scale. One paragraph is more than it deserves.


The actual questions

The transition happening under the AI boom isn't about AGI or synthetic consciousness. It's about who owns the GPUs, who controls the weights, who can afford inference at scale, and who owns the distribution layer for intelligence itself. Marsford's paper frames it as a new economic era. That's probably right.

But for developers the stakes are immediate. If you're not treating open weights, local inference, and interoperable tooling as structural concerns — not just cost optimisations — you're sleepwalking into a stack you don't own.


The internet industrialised distribution.

AI is industrialising cognition.

The architecture hasn't merged to main yet. But the PR is open.

