This leaderboard isn't a celebration. It's a hallucination with a high score.
The First Time I Met AGI
I first heard the concept of AGI, real AGI, around 2015 or 2016.
Not in a blog post. Not in a product pitch. But through long hours of internal digging. Trying to grasp what "general intelligence" actually means in cognitive terms, not hype cycles or funding rounds.
Back then, it was hard to even wrap my head around it. It took me months to begin to understand what AGI implied: The scope. The risk. The ontological rupture it represents.
So I went deep. I co-founded Abzu with some truly brilliant people.
And I tried to follow the thread down: from models, to reasoners, to cognition itself.
And Now?
Now it's 2025. And worse, nearing 2026. And we're flooded with noise.
People who have never studied cognition, never touched recursive reasoning, and never even defined what intelligence means are telling the world that:
"AGI is almost here."
No. It's not.
And worse: AGI isn't even scoped yet.
Not correctly. Not rigorously.
We don't even agree on what "general" means.
What AGI Actually Demands
Here's what I've learned over nearly a decade, through ethology, architecture design, and cognitive experiments:
- Intelligence is not a monolith.
- It emerges from conflict. From separation of thought. From multiple perspectives that disagree, and then, sometimes, reconcile. There is no single model that can do that.
Why?
Because real intelligence isn't just statistical next-token prediction. It's contradiction, held in tension. It's the interplay between memory and intuition. Between structured logic and emotional relevance.
Between what I know now and what I used to believe.
What That Leaderboard Gets Wrong
The ARC-AGI leaderboard shows dots climbing a curve.
Cost vs. performance.
Tokens in, answers out.
That's fine, for task-solving. But AGI isn't a task. AGI is a scope of adaptive cognition across unknown domains, with awareness of failure, abstraction, and reformation.
AGI needs to:
- Break itself apart
- Simulate internal dissent
- Reason in loops, not just in sequences
- Remember contradictions, not flatten them
- Develop subjective models of experience, not just text
None of that is visible in the chart. Because none of that is even attempted in most systems today.
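To make one of those bullets concrete, "remember contradictions, not flatten them," here is a minimal sketch of a memory that stores conflicting claims side by side instead of averaging them away. Every name in it is hypothetical; it illustrates the idea, not any real system's API.

```python
from dataclasses import dataclass, field

@dataclass
class Belief:
    claim: str
    support: list[str] = field(default_factory=list)

class ContradictionAwareMemory:
    """Toy memory that keeps conflicting beliefs side by side
    instead of overwriting one with the other."""

    def __init__(self) -> None:
        self.beliefs: dict[str, list[Belief]] = {}

    def assert_claim(self, topic: str, claim: str, evidence: str) -> None:
        entries = self.beliefs.setdefault(topic, [])
        for belief in entries:
            if belief.claim == claim:
                belief.support.append(evidence)
                return
        # A new, possibly conflicting, claim is stored next to the old one.
        entries.append(Belief(claim, [evidence]))

    def tensions(self, topic: str) -> list[str]:
        # A topic with more than one live claim is an unresolved tension.
        entries = self.beliefs.get(topic, [])
        return [b.claim for b in entries] if len(entries) > 1 else []

memory = ContradictionAwareMemory()
memory.assert_claim("scaling", "bigger models will get us to AGI", "benchmark curves")
memory.assert_claim("scaling", "scale alone is not enough", "no model doubts itself")
print(memory.tensions("scaling"))  # both claims survive, held in tension
```

A system that flattens those two claims into one confident answer has lost exactly the information that matters.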
The Real Danger
It's not the models.
It's the narrative.
Telling people AGI is near, when the field hasn't even defined what it is: that's not innovation. That's cognitive malpractice. We're building scaffolding over a void, and convincing the public that we've hit the summit when we haven't even drawn the map.
What Needs to Change
We need to stop chasing scores and start building systems of cognition.
- Multi-agent reasoning
- Deliberation loops
- Memory with scoped decay and identity
- Contradiction-aware execution
- Traceable thought, not just output
That's why I built OrKa. Not because I think I have the full answer.
But because I know for a fact that single-model intelligence will never be enough. If AGI ever emerges (and that's still an open question), it won't come from a bigger model. It'll come from the orchestration of thought. From reasoning systems that can doubt themselves, disagree internally, and change their minds, not just complete the sentence.
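Here is a deliberately toy sketch of that kind of loop: a proposer drafts, a dissenting critic objects, and the full trace of the debate survives alongside the answer. The stub agents and function names are illustrative only, not OrKa's actual interface.

```python
import random

# Toy deliberation loop: a proposer drafts an answer, a dissenting critic
# attacks it, and the loop keeps the full trace of the debate. Each "agent"
# is a stub; in a real system each role would be a separate model call.

def proposer(question: str, objections: list) -> str:
    return f"answer to {question!r}, revised {len(objections)} times"

def critic(answer: str):
    # A real critic would search for contradictions; this stub objects randomly.
    return f"objection to: {answer}" if random.random() < 0.5 else None

def deliberate(question: str, max_rounds: int = 4):
    objections = []
    answer = ""
    for _ in range(max_rounds):
        answer = proposer(question, objections)
        objection = critic(answer)
        if objection is None:           # internal dissent resolved, for now
            break
        objections.append(objection)    # the contradiction is remembered
    return answer, objections           # traceable thought, not just output

answer, trace = deliberate("is AGI near?")
print(answer)
print(trace)  # every objection the system raised against itself is auditable
```

The point is not the stubs; it is that the disagreement itself is recorded and returned, instead of being discarded on the way to a fluent sentence.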
Final Word
To anyone who still believes AGI is a product you can wrap in a prompt: Stop.
To anyone who's been told "we're almost there":
- Don't listen to loud certainty.
- Listen to quiet contradiction.
- That's where real intelligence starts.
And to the few of us who know the scope is still undefined:
Keep building. Keep doubting. Keep looping over your own beliefs.
That's the only path that might, might, lead to something sensate.
Top comments
THIS! Thank you!
Yes, I'm a huge supporter of AI adoption, but you're 100% right. There are some major misconceptions around the concept as a whole, and especially around what it's really capable of doing in its current state.
Not only is this well written, but it's an innovative approach I haven't heard of before, which is exciting in itself (and it's given me some ideas on how to incorporate the topic into my own writing in the future).
Can't wait to see more of your work!
@anchildress1
Quick confession:
1 - The ideas are all mine... but, being honest, DeepSeek just helped me organise the wording.
2 - What sounds "innovative" is really a throwback to Marvin Minsky's old vision, the Society of Mind: lots of small agents working together. Somewhere along the way we fixated on purely statistical models, and that's why today's LLMs, impressive as they are, still aren't true intelligence.
I just try to treat them as sharp little tools that plug into a larger, genuinely intelligent system.
That's why that AGI-hype chart makes me flinch. Excitement is fine, but someone has to call things as they are.
Oh, I agree with you completely! It baffles me sometimes, really...
If brains used spoken language internally, as the LLM conmen have suggested, all languages would be very similar, since all brains are very similar.
They should have studied neurolinguistics before building pointless data centres. Also, computers have difficulty taking advantage of quantum effects, so there's not much chance of producing the fairy dust.
It's never too late! And it's fascinating to see how AI acquired the "intelligent" part by reducing millions of years of evolution to a few simple (wrong) statistical rules. Please don't take me wrong: I think the latest gen-AI progress is awesome, and those systems are super SMART, but they have 0 intelligence... You can only see it after understanding what intelligence actually means and implies.
This is a rare and necessary voice in a space too full of noise. Thoughtful, grounded, and unafraid to confront the uncomfortable gaps. Fully aligned.
hehehe thanks!