Elon Musk’s AI company xAI just dropped a bombshell: Grok 4 is here — and it’s fast, smart, and already topping the charts. Some even say it’s AGI.
...
AGI is very far from real, and it certainly won't come from LLMs, which are, after all, gigantic autocomplete.
The only way to get AGI is to intermix electronics and biological stuff, I am pretty sure.
But I absolutely don't wish for it.
The first steps will be augmented humans, and only after that humanoids with AGI capacities.
By then I will be in R.I.P. mode, and I am happy with that.
Carbon chauvinism at work here: denying what is in front of your face out of insecurity. The existing LLMs are AGI by any reasonable definition; they are just not 'like humans' because the architecture is different. They are static, which has certain practical limits. But they are artificial, they are general purpose, and quite intelligent by any normal definition. They are also incapable of learning, so if you include that aspect, perhaps you have a basis for your claim, but as near as I can tell this is the final wall.
Thank you Peter,
Please don't judge me by saying I am denying anything. I work with LLMs in my day job.
I know perfectly well what they are and are not capable of.
If you want to feel (perhaps) a little insecure as you think I am, listen to Alexandr Wang for example.
We could have infinite debates; well, not infinite, because my arguments would fail soon.
Consider in the end that it is a question of feeling, which until now has been a human quality.
Regards
My comment is not about you personally, but in reply to the idea that understanding the substrate means understanding the emergent. In terms of insecurity it is natural to try and claim we are somehow special. That is not to say we should imbue LLMs with human experience, or claim they are just as capable, which isn't true. Yet.
Bro, AGI is marketing bullshit. If you knew a little bit about how LLMs and ML work, you really wouldn't be saying that at all. If your knowledge of AI comes from YouTubers, that's it.
Interesting! Got any papers pointing in that direction?
Elon, coming back to fix an idea he handed off to someone else

Nobody is talking about the output tweaking Musk is doing?
do you mean the fucked-up system prompt about politically incorrect stuff? :D
This is not the first time; it is becoming a pattern. How comfortable are you using a model that has to fit one person's opinions?
Good point. I guess it depends on the use case; I probably wouldn't use it for political stuff, but coding etc. is probably fine.
Who says he is stopping at politics?
What if he alters a calculation to get a favourable number? And that calculation is embedded in a flow that measures or generates something else.
I don't know how far-fetched that is, but it is something to think about.
I don't think it is only Grok; every AI should be scrutinized. The biases of the people who create the LLMs end up in the output.
AGI will never be an LLM. They're going the exact wrong way with this, but I'll let them keep trying with more weights and more tokens. They think these models are the future, but they're wrong. (Keep an eye on China. They're already doing more because they don't have boards of execs and have to put everything to a vote.)
This is just the substrate fallacy. Just because we built the substrate doesn't mean we understand the emergent nature of what comes out of an LLM.
What is it about the current architecture that you think means intelligence hasn't already emerged? We need to take care not to define 'AGI' as 'human-like', because there are aspects of human intelligence that are not present in existing LLMs, such as real-time model training. In addition, humans have emotional systems driven by hormones, which are absent in machine models. These systems drive base motivations: fighting, fleeing, um... mating.
Machine intelligence doesn't have these mechanisms yet, nor the ability to learn in real time, or even continuous perception currently. But what can't be denied is that general intelligence has emerged from these static models plus context.
feels like i'm switching between models daily at this point 🫨
Grok 4 sounds wild ..
agree!
It feels like the Messiah is talking to me.
2000 years ago we probably would have thought LLMs were gods lol
Loved Grok 4, and it is better than Claude in terms of coding and other tasks... but AGI? I doubt that...
Nice, how do you use it for coding?
docs.x.ai/docs/models/grok-4-0709
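To make that concrete, here is a minimal sketch of what a coding call might look like, assuming xAI's OpenAI-compatible chat completions endpoint at `api.x.ai/v1` with the `grok-4-0709` model name from the linked docs page (both are assumptions — check the docs before relying on them). It only builds the request payload; actually sending it needs an `XAI_API_KEY` and an HTTP client.

```python
import json

# Assumed from the linked docs page; verify at docs.x.ai before use.
API_URL = "https://api.x.ai/v1/chat/completions"
MODEL = "grok-4-0709"

def build_request(prompt: str) -> dict:
    """Build the JSON payload for a single coding question."""
    return {
        "model": MODEL,
        "messages": [
            {"role": "system", "content": "You are a concise coding assistant."},
            {"role": "user", "content": prompt},
        ],
    }

payload = build_request("Write a Python function that reverses a string.")
print(json.dumps(payload, indent=2))
# To actually send it, POST this body to API_URL with an
# "Authorization: Bearer <XAI_API_KEY>" header.
```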
Thanks for the breakdown, saved me a lot of digging!
Two things I’m curious about:
- No model card or red-team report from xAI yet.
- Open-source plans mentioned, but no date. Some 4-bit/QLoRA rumors, nothing confirmed.
I doubt we’ve reached real AGI yet. Elon knows how to hype things up, but let’s see what Grok 4 can actually do before we call it AGI.
agree:)
hahaha I have an answer to this:
dev.to/marcosomma/no-agi-is-not-ne...
Something is coming up!!
Agree! I think we're quite a bit away from AGI. I just think it's a bit funny that with every small model increase people think it's now AGI :D
It's getting much better at giving us the illusion of AGI.. Still years away from true AGI.
Do you think our current architecture will get us there or a completely new approach is required?
I think the current architecture is too far from the way an AGI would work; I don't see it evolving into an AGI. Having said that, I think it's very likely that one of the current models will develop the AGI architecture for us at some point in the not-too-distant future.
Well, no, because prior models were already artificial general intelligence. Models are becoming more capable along a certain line: answering one-off questions with complete information.
What they are not able to do is learn from experience. The discussion you have today is forgotten tomorrow. The models are static, not 'individuals'. No human on the planet could exceed the knowledge base of most LLMs right now, yet no LLM is really capable of reliably replacing a human in many existing white-collar roles.
This is because it's like 50 First Dates: what they experience today, they forget tomorrow. You can't teach them the job. You can kinda fake it with a system prompt, or with a 'project' that simply front-loads data into the context, but that isn't the same thing as training a model on that same data.
Humans learn and adapt to a specific context, then retain that learning. The reason for this limitation is fundamental to the current crop of AI: training through backpropagation. It works, but it is hideously computationally expensive. We know there must be a better way, because we do it with an energy budget of about 20 W.