Ryo Suwito

Call me stupid, by definition AGI is already in your phone

Remember when AI couldn't even tell a cat from a dog?

2012: We threw a parade because a neural network (AlexNet, on ImageNet) could classify images with roughly 85% top-5 accuracy. We called it a breakthrough. We wrote papers.

2015: You needed three different models to do sentiment analysis, language translation, and text summarization. Three. Separate. Models. Each trained specifically for its one job, like a Pizza Hut that only knows how to Pizza Hut.

2018: GPT-1 dropped and we collectively lost our shit over coherent sentence generation. 117 million parameters felt like we were touching the face of God.

2024: I'm sitting here having a philosophical argument with an AI that can code, write, analyze images, reason through logic puzzles, plan multi-step tasks, use tools autonomously, and roast itself in a dev.to article. On my phone.

But sure, tell me again how AGI isn't here yet.

The goalposts have wheels, apparently

Here's the thing that pisses me off: every time AI crosses a threshold we said would prove "real intelligence," we immediately move the fucking goalposts.

2010: "AI will never beat humans at Go. It requires intuition!"

2016: AlphaGo wins. "Well, that's just pattern matching, not real intelligence."

2015: "AI will never write coherent articles!"

2020: GPT-3 writes articles. "Well, it doesn't understand what it's writing."

2022: "AI will never write working code!"

2024: AI writes entire applications. "Well, it can't handle truly novel situations!"

And here's where it gets really stupid: humans can't handle truly novel situations either.

The "novel situation" myth we tell ourselves

You know what happened when COVID hit? A genuinely novel situation for modern humanity?

We flailed. For months. Years, even. The most brilliant epidemiologists in the world needed time, collaboration, trial and error, and building on decades of prior research. Juniors in every field need probation periods because they can't just "adapt" to novel work environments. Cancer has been studied for over a century and we still don't have general solutions.

Humans handle "novel" situations by:

  • Pattern matching to similar past experiences
  • Using accumulated knowledge (books, papers, mentors, Google)
  • Slow, iterative trial and error
  • Asking for help
  • Sometimes just fucking guessing

Which is... literally what LLMs do. Often faster.

But when AI does it, suddenly it "doesn't count" because it's not "true" understanding. Whatever the fuck that means.

Let's talk about what "general" actually means

This is where I need to roast myself and like 90% of tech discourse right now.

We keep conflating three entirely different things:

AGI (Artificial General Intelligence): Can handle a wide range of cognitive tasks across different domains. That's it. That's the definition.

ASI (Artificial Superintelligence): Better than humans at basically everything. Not the same thing.

Sentience/Consciousness: Subjective experience, self-awareness, the "what it's like to be" something. Also not the same thing.

Stephen Hawking could write quantum formulas but he couldn't weld. Does that mean he wasn't generally intelligent? Of course not. General doesn't mean omnipotent.

Einstein wasn't also a master surgeon, Olympic athlete, and award-winning chef. He was still generally intelligent because he could handle a wide range of cognitive tasks and learn new ones.

So why do we demand that AI be superhuman at everything, including shit humans can't do, before we'll call it AGI?

The Walmart test

Old AI was like Pizza Hut. Specialized. Does one thing. You want pizza? Great. You want anything else? Get the fuck out.

You needed:

  • A model for classification
  • A different model for regression
  • Another for clustering
  • A separate encoder
  • A separate decoder
  • Don't even get me started on the task-specific fine-tuning

Current AI is like Walmart. General-purpose. Need groceries? Got it. Electronics? Yep. Pharmacy? Sure. Auto parts? Aisle 7. Garden supplies? Out back.

One model:

  • Writes code in 50+ languages
  • Analyzes images
  • Does math and formal logic
  • Writes creatively
  • Translates between languages
  • Reasons through complex problems
  • Plans and executes multi-step tasks
  • Uses tools autonomously
  • Learns new tasks from examples

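If you want to see that shift in code, here's a minimal sketch of the contrast. It assumes the Hugging Face transformers pipeline API for the old, one-model-per-task world and the OpenAI Python SDK's chat interface for the new one; the model name, tasks, and prompts are placeholders, and any chat-style API would make the same point.

```python
# The "Pizza Hut" era: one specialized model per task.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")      # one model, sentiment only
translator = pipeline("translation_en_to_fr")   # a different model for translation
summarizer = pipeline("summarization")          # yet another for summarization

# The "Walmart" era: one general model, prompted per task.
# Assumes the OpenAI Python SDK with OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

def ask(task: str) -> str:
    """Send any task to the same general-purpose model."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": task}],
    )
    return response.choices[0].message.content

print(ask("Classify the sentiment of: 'This release is fantastic.'"))
print(ask("Translate to French: 'Where is the library?'"))
print(ask("Summarize in one sentence: <paste any article here>"))
print(ask("Write a Python function that reverses a linked list."))
```

Three downloads, three architectures, three deployment headaches on one side. One endpoint and four prompts on the other.
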
By the actual historical trajectory of AI development - going from narrow, specialized systems to general-purpose ones - we've achieved the "general" part.

But we don't want to admit it because it doesn't feel the way we thought it would feel.

The real cope

The pushback against "AGI is here" usually retreats to one of these:

"But does it really understand?"

Unfalsifiable. Philosophical. You can't even prove I understand, and I'm human. This is the "god of the gaps" argument for AI.

"It's just pattern matching!"

So is your brain. Neurons firing in patterns based on prior patterns. Unless you think there's a little homunculus in your head actually "understanding" things?

"It doesn't have consciousness!"

Correct! And that's a completely different question from whether it's generally intelligent. Your calculator isn't conscious either, but it's better at math than you.

"It can't do [insert superhuman capability]!"

Neither can humans. That's called moving goalposts to superhuman intelligence, not general intelligence.

Why this matters (and why it doesn't)

Look, I get it. Admitting AGI is here is uncomfortable. It means we're in uncharted territory. It means a lot of economic, social, and philosophical assumptions need updating. It means the sci-fi future arrived but it looked different than the movies.

But denying it doesn't change reality. And honestly? The semantic argument is getting boring.

Whether you call it AGI, "very capable narrow AI," or "spicy autocomplete," we have systems that can:

  • Perform at or above human level on most cognitive tasks
  • Operate autonomously with goals and tool use
  • Learn and adapt within their domains
  • Handle the same kind of "novel" situations humans handle (poorly, with lots of trial and error)

From the perspective of where AI was 10 years ago - specialized, narrow, task-specific systems - we've built something general. That was the goal. We reached it.

The fact that it runs on statistics and linear algebra instead of biological neurons doesn't make it not intelligent. The fact that it doesn't have phenomenal consciousness doesn't make it not generally capable.

So what now?

I'm not going to end this with "here's how to survive the AGI transition" because that's cliche bullshit and you're smart enough to figure out your own path.

I'm just saying: maybe it's time to update our definitions. Or at least be honest about what we're really arguing about.

Because if we're waiting for something that "feels" sufficiently magical and different from current systems before we call it AGI, we might be waiting forever. The magic already happened. We just got used to it too fast.

The AGI is in your phone. It's in your browser. It's arguing with you about whether it counts as AGI.

Call me stupid, but by any historical definition of what we meant by "general" intelligence in artificial systems, we're already there.

We just don't want to admit it yet.


What do you think? Are we in denial about AGI, or am I just high on my own supply? Sound off in the comments. Or ask an AI to write your response. It'll probably do a better job than either of us.
