If you're confused about the weird grammar in the post title, I'm following the format of websites like Are we async yet, Are we learning yet, and so on. You can view a list of such sites here if you think I'm making it up:
https://wiki.mozilla.org/Areweyet
Whether or not we have reached AGI is a topic of much debate. This article is meant to promote discussion of the topic. It also presents opinions, mostly my own, so keep that in mind as you read.
Defining Some Terms
Let's define some terms. What even is AGI? What is AI, for that matter? We need concrete, agreed-upon definitions before proceeding.
AI
Wikipedia concisely defines AI as "the simulation of intelligence as exhibited by software systems."
Simple enough. But what is intelligence? And what does it mean to have artificial intelligence?
Oxford Languages defines intelligence as "the ability to acquire and apply knowledge and skills." Currently, AI models can't train themselves (at least not at scale), so any new knowledge they "acquire and apply" is ephemeral: it disappears once it falls out of the context window. Oxford Languages also defines artificial as "made or produced by human beings rather than occurring naturally, especially as a copy of something natural."
I think of Artificial Intelligence as humans' attempt to copy one facet of their humanness - their intelligence - into something non-human. I say attempt because, just like artificial sweeteners can never perfectly clone sugar, I don't believe that artificial intelligence will ever become a perfect clone of human intelligence. Whether or not it can be better than human intelligence is beside the point (and who's to say what makes one intelligence "better" than another?); the point is that we can't clone it.
To sum it up: AI is a simulation of human intelligence. However, there's no standard for how accurate that simulation has to be in order to be considered intelligence. This is where AGI comes in.
AGI
AGI stands for Artificial General Intelligence.
As I said just a moment ago, there's no standard for how accurate the simulation of human intelligence (AI) must be in order to qualify as intelligence. In 2020, for example, OpenAI launched GPT-3. By today's AI standards, or compared to your average human, it did a terrible job: an inaccurate simulation. Now, in 2026, with GPT-5.2, GPT has become a much more accurate simulation of human intelligence. In many fields, GPT outperforms or nearly clones human intelligence. But not in all fields! GPT still sucks at complex math, making jokes, and even counting the number of letters in some words.
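To make the letter-counting example concrete, here's a minimal Python sketch of the kind of check involved. The word, the letter, and the imagined model reply are my own illustration (the well-known "how many r's in strawberry" prompt), not something pulled from a specific benchmark:

```python
# A tiny, self-contained illustration of the "count the letters" task.
# The word and the imagined model reply are hypothetical examples.

word = "strawberry"
letter = "r"

true_count = word.count(letter)  # ordinary code gets this right: 3

# Chat models have often been reported to answer 2 for this word;
# treat this value as a hypothetical model reply for comparison.
model_reply = 2

print(f"'{word}' contains {true_count} '{letter}'(s); the model said {model_reply}.")
print("Model correct?", model_reply == true_count)
```

A few lines of code answer this trivially, which is exactly why it's such a striking gap in an otherwise impressive simulation of human intelligence.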
AGI is achieved when an artificial intelligence exists that is a near-exact clone of human intelligence in general (across all fields), not just in a few specific ones. In other words, once the simulation of human intelligence becomes near-perfect (or better than human intelligence, but again, "better" is subjective) across all areas of human intelligence, we've reached AGI.
To sum it up: I basically took the long way of saying what Wikipedia says (as of Feb 1, 2026, at least), in the hope of making Wikipedia's definition a bit clearer:
Artificial general intelligence (AGI) is a hypothetical type of artificial intelligence that would match or surpass human capabilities across virtually all cognitive tasks.
So, are we AGI yet?
As of Feb 1, 2026, I'd say no. AI is evolving pretty quickly, though, so we'll see how long it takes for that answer to be invalidated.
I don't think that current AI models simulate human intelligence accurately enough. They might outperform humans at quickly writing essays (complete with made-up sources, just like some humans), for example, but they still struggle in areas like logic and counting.
I'm curious what you think! Please drop a comment with your thoughts about any part of this article; I'd love to hear them!
Thanks for reading!
BestCodes