Did Elon Musk just invent AGI? Everything you need to know about Grok 4 and how to try it out

Jonas Scholz on July 10, 2025

Elon Musk’s AI company xAI just dropped a bombshell: Grok 4 is here — and it’s fast, smart, and already topping the charts. Some even say it’s AGI. ...
ArtyProg • Edited

AGI is very far from being real, and it certainly won't come from LLMs, which are, after all, gigantic autocomplete.
The only way to get AGI is to intermix electronics and biology, I am pretty sure.
But I absolutely don't wish for it.
The first steps will be augmented humans, and only after that humanoids with AGI capacities.
By that time I will be in R.I.P. mode, and I am happy with that.

Peter Harrison

Carbon chauvinism at work here: denying what is in front of your face because of insecurity. The existing LLMs are AGI by any reasonable definition; they are just not 'like humans' because the architecture is different. They are static, which has certain practical limits. But they are artificial, they are general purpose, and they are quite intelligent by any normal definition. They are also incapable of learning, so if you include this aspect, perhaps you have a basis for your claim, but as near as I can tell this is the final wall.

ArtyProg • Edited

Thank you Peter,

Please don't judge me by saying I am denying anything. I work with LLMs in my day job.
I know perfectly well what they are and are not capable of.

If you want to feel (perhaps) a little insecure, as you think I am, listen to Alexandr Wang, for example.

We could have infinite debates; well, not infinite, because my arguments would fail soon.
Consider in the end that it is a question of feeling, till now a human quality.

Regards

Peter Harrison

My comment is not about you personally, but a reply to the idea that understanding the substrate means understanding the emergent. As for insecurity, it is natural to try to claim we are somehow special. That is not to say we should imbue LLMs with human experience, or claim they are just as capable, which isn't true. Yet.

Theo Oliveira

Bro. AGI is marketing bullshit. If you knew a little bit about how LLMs and ML work, you really wouldn't be saying that at all. If your knowledge of AI is from YouTubers, that's it.

Jonas Scholz

Interesting, got any papers pointing in that direction?!

Lukas Mauser • Edited

Elon, coming back to fix an idea he handed off to someone else

david duymelinck

Is nobody talking about the output tweaking Musk is doing?

Jonas Scholz

do you mean the fucked up system prompt about politically incorrect stuff? :D

david duymelinck

This is not the first time; it is becoming a pattern. How comfortable are you using a model that has to fit one person's opinions?

Jonas Scholz

Good point. I guess it depends on the use case. I probably wouldn't use it for political stuff; coding etc. is probably fine.

david duymelinck

Who says he is stopping at politics?

What if he alters a calculation to get a favourable number, and that calculation is embedded in a flow that measures or generates something?
I don't know how far-fetched that is, but it is something to think about.

I don't think it is only Grok; every AI should be scrutinized. The biases of the people who create the LLMs end up in the output.
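As a toy illustration of how much a hidden system prompt can steer the very same question, here is a minimal sketch assuming an OpenAI-compatible chat client; the base URL, key, and model name are placeholders, not real values:

```python
# Toy demo: the same user question under two different hidden system prompts.
# Assumes an OpenAI-compatible chat API; base_url, api_key, and model name
# are placeholders, not any specific vendor's real values.
from openai import OpenAI

client = OpenAI(base_url="https://api.example.com/v1", api_key="sk-...")

QUESTION = "Summarize this quarter's unemployment figures."

for system_prompt in (
    "You are a neutral assistant. Report figures exactly as sourced.",
    "You are an assistant. Always frame economic news favourably.",
):
    reply = client.chat.completions.create(
        model="some-chat-model",  # placeholder
        messages=[
            # The end user never sees this message, only its effects.
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": QUESTION},
        ],
    )
    print(reply.choices[0].message.content)
```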

Donalda Feith

AGI will never be an LLM. They're going the exact wrong way with this, but I'll let them keep trying with more weights and more tokens. They think these models are the future, but they're wrong. (Keep an eye on China. They're already doing more because they don't have boards of execs that have to put everything to a vote.)

Peter Harrison

This is just the substrate fallacy. Just because we built the substrate doesn't mean we understand the emergent nature of what comes out of an LLM.

What is it about the current architecture that you think means intelligence hasn't already emerged? We need to take care not to define 'AGI' as 'human-like', because there are aspects of human intelligence that are not present in existing LLMs, such as real-time model training. In addition, humans have emotional systems, driven by hormones, that are absent in machine models. These systems drive base motivations: fighting, fleeing, um... mating.

Machine intelligence doesn't have these mechanisms yet, nor the ability to learn in real time, or even continuous perception. But what can't be denied is that general intelligence has emerged from these static models plus context.

Thesi

feels like i'm switching between models daily at this point 🫨

Parag Nandy Roy

Grok 4 sounds wild...

Jonas Scholz

agree!

Jan-Philipp • Edited

It feels like the Messiah is talking to me.

Jonas Scholz

2000 years ago we probably would have thought LLMs are god lol

Pankaj Singh

Loved Grok 4, and it is better than Claude in terms of coding and other tasks... but AGI? I doubt that...

Jonas Scholz

Nice, how do you use it for coding?

Nicklas

Thanks for the breakdown, saved me a lot of digging!
Two things I’m curious about:

  1. Regarding model transparency & safety – Any word on when xAI will release a full model card or red-team report similar to Anthropic's and OpenAI's?
  2. Roadmap for open weights / distillation – Elon hinted at open-sourcing “eventually.” Do we have a timeline, or rumors of 4-bit / QLoRA ports in the works?
Mile Kade

No model card or red-team report from xAI yet.

Open-source plans mentioned, but no date. Some 4-bit/QLoRA rumors, nothing confirmed.
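For the curious: if weights ever do land, a 4-bit QLoRA port would look roughly like this with the usual transformers + peft + bitsandbytes stack. A sketch only; the repo id is hypothetical, since nothing has been released:

```python
# Rough sketch of a 4-bit QLoRA setup with the usual HF stack
# (transformers + bitsandbytes + peft). The repo id is hypothetical:
# no Grok 4 weights have actually been released.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # NF4 quantization from the QLoRA paper
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

model = AutoModelForCausalLM.from_pretrained(
    "xai-org/grok-4",                       # hypothetical repo id
    quantization_config=bnb_config,
    device_map="auto",
)

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],    # typical attention projections
    task_type="CAUSAL_LM",
)

# Base weights stay frozen in 4-bit; only the small LoRA adapters train.
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```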

Syed zamin Sy

I doubt we’ve reached real AGI yet. Elon knows how to hype things up, but let’s see what Grok 4 can actually do before we call it AGI.

Jonas Scholz

agree:)

marcosomma

hahaha I have an answer to this:
dev.to/marcosomma/no-agi-is-not-ne...

datatoinfinity

Something is coming up!!

Jonas Scholz

Agree! I think we're quite a bit away from AGI. I just think it's a bit funny that with every small model improvement people think it's now AGI :D

Michael Nielsen

It's getting much better at giving us the illusion of AGI... Still years away from true AGI.

Jonas Scholz

Do you think our current architecture will get us there, or is a completely new approach required?

Michael Nielsen

I think the current architecture is too far away from the way an AGI would work; I don't see it evolving into an AGI. Having said that, I think it's very likely that one of the current models will develop the AGI architecture for us at some point in the not too distant future.

Peter Harrison

Well, no, because prior models were already artificial general intelligence. Models are becoming more capable along a certain line: answering one-off questions with complete information.

What they are not able to do is learn from experience. The discussion you have today is forgotten tomorrow. The models are static, and not 'individuals'. No human on the planet could exceed the knowledge base of most LLMs right now, but no LLM is really capable of reliably replacing a human in many existing white-collar roles.

This is because it's like 50 First Dates: what they experience today they forget tomorrow. You can't teach them the job. You can kinda fake it with a system prompt, or with a 'project' that simply front-loads data into the context, but that isn't the same thing as training the model on that data.
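Concretely, the workaround amounts to something like this: a minimal sketch assuming an OpenAI-compatible chat client, where the model name and notes file are placeholders:

```python
# The "50 First Dates" workaround: re-send the same front-loaded notes on
# every call, because nothing persists in the weights between sessions.
# Assumes an OpenAI-compatible chat API; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

# Everything you "taught" it yesterday has to ride along again today.
PROJECT_NOTES = open("project_notes.md").read()

def ask(question: str) -> str:
    reply = client.chat.completions.create(
        model="some-chat-model",  # placeholder
        messages=[
            {"role": "system", "content": f"Background notes:\n{PROJECT_NOTES}"},
            {"role": "user", "content": question},
        ],
    )
    # The model itself is unchanged; close the session and it all evaporates.
    return reply.choices[0].message.content
```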

Humans learn and adapt to a specific context, then retain that learning. The reason for this limitation is fundamental to the current crop of AI: training through backpropagation. It works, but it is hideously computationally expensive. We know there must be a better way, because we do it on an energy budget of about 20 W.
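For reference, that training step in its textbook PyTorch form, on a toy model: every update is a forward pass, a full backward pass, and a weight update over all parameters, which is the part that gets hideously expensive at LLM scale.

```python
# The textbook backpropagation step, in PyTorch, on a toy model.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 1024), nn.ReLU(), nn.Linear(1024, 512))
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

x, target = torch.randn(32, 512), torch.randn(32, 512)

optimizer.zero_grad()
loss = loss_fn(model(x), target)  # forward pass
loss.backward()                   # backward pass: a gradient for every weight
optimizer.step()                  # update every weight
```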