
The Case against AGI

The AI industry appears to be careening towards AGI and Superintelligence (ASI) at an ever-increasing rate. A few thousand days to Superintelligence, says Sam Altman. Google AI Studio's product manager, Logan Kilpatrick, recently tweeted:

Straight shot to ASI is looking more and more probable by the month… this is what Ilya saw

But Microsoft AI CEO Mustafa Suleyman recently said it’s probably quite a number of years away. Are they talking about the same AGI? Or are there different definitions of AGI, and even Superintelligence, being bandied around? And once there is a definition, is it even attainable?

Even though it may appear that AGI is the great target everyone is working towards, there is really no consensus in the tech industry. The supposition that there will be a Master Algorithm that does everything is clearly not embraced by all. Look at what AWS CEO Matt Garman said of Bedrock, and we see that the AI foundation of Amazon Web Services is that there will be many models to choose from. It sounds like Bedrock is future-proofed for there not being AGI.

“Part of it is we had this observation that it’s not just one model that everyone was going to want to use. There is a lot of different models that people were going to want to take advantage of.” - AWS CEO Matt Garman regarding Amazon Bedrock, AWS re:Invent 2024

Even Meta’s Mark Zuckerberg seems to agree. He suggests that building AGI or ASI amounts to playing God and that Meta will have nothing to do with it.

"I find it a pretty big turnoff when people in the tech industry... talk about building this 'one true AI … It’s almost as if they kind of think they’re creating God or something and... it’s just — that’s not what we’re doing. I don’t think that’s how this plays out.” - Meta CEO Mark Zuckerberg

But OpenAI is working on AGI and even ASI, and surely they will soon have it?

Well, the new OpenAI o3, which passed the difficult ARC-AGI test, indicates that OpenAI’s reasoning engine has come quite a long way. But does everyone realize how their reasoning model actually works? It uses reinforcement learning, so it’s helped along by rule-based learning. Of course it’s not purely rule-based, but that component of experts guiding the model is an important one nevertheless. Isn’t that essentially getting a thousand experts to tell the machine what to do? Can that really be called AGI, or lead to Superintelligence beyond any shadow of a doubt?
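
To make that idea concrete, here is a minimal, purely illustrative sketch of reinforcement learning guided by a rule-based verifier. It is not OpenAI's actual training code, and every name in it (propose_answer, rule_based_reward, collect_reinforced_examples) is hypothetical. The toy "model" just guesses, a human-written rule scores each guess, and only the rule-approved attempts are kept as the data a real RL fine-tuning step would learn from — which is the sense in which the "experts" end up steering what the machine learns.

```python
import random

# Toy "model": guesses an answer for a question
# (a stand-in for an LLM sampling a response).
def propose_answer(question: str) -> int:
    return random.randint(0, 10)

# Rule-based verifier: a human-written rule decides whether the answer
# is acceptable. This is the "experts telling the machine what to do" part.
def rule_based_reward(question: str, answer: int) -> float:
    expected = {"What is 2 + 2?": 4, "What is 3 + 5?": 8}
    return 1.0 if expected.get(question) == answer else 0.0

# Minimal reinforcement loop: keep only the attempts the rule rewarded,
# i.e. the examples a real RL fine-tuning step would reinforce.
def collect_reinforced_examples(questions, attempts_per_question=20):
    reinforced = []
    for question in questions:
        for _ in range(attempts_per_question):
            answer = propose_answer(question)
            if rule_based_reward(question, answer) > 0.5:
                reinforced.append((question, answer))
    return reinforced

if __name__ == "__main__":
    examples = collect_reinforced_examples(["What is 2 + 2?", "What is 3 + 5?"])
    print(f"Kept {len(examples)} rule-approved examples to train on")
```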

OpenAI indicated there can be more tests that o3 can take. How many more? Are those the same tests that Google’s multimodal AGI will be tested on as it competes? What is intelligence, and can the tech industry even define human intelligence?


Most would agree the concept of intelligence flows into the philosophical, and many would say it even has a spiritual component. If so, then religious leaders have a stake in the question. Let’s see what they have to say about AGI.

Moshe Koppel, an American-Israeli computer scientist and Talmud scholar, makes it clear in his essay on AI and Judaism that AGI is merely a theory: even though AI will be great in a “wide variety” of tasks, AGI is not a given.

"Based on the current rate of improvement, it has been argued that within around twenty years, AI will perform all cognitive tasks as well as humans. And once that is achieved, millions of artificial programmers could then be enlisted to achieve quickly general intelligence far beyond that of any human being."

"These projections need to be taken with many grains of salt. We don’t actually know that we can continue growing neural nets without running into the problem of overfitting, or that processing power will continue to grow exponentially, or a whole lot of other assumptions underlying these prognostications. Nevertheless, it’s safe to assume that in relatively short order AI will be able to duplicate and exceed the performance of the most skilled humans in a wide variety of cognitive tasks." - Moshe Koppel, an American-Israeli computer scientist and Talmud Scholar

What Artificial Intelligence Has In Store for Judaism


Yaqub Chaudhary, a Muslim scholar, says AGI (or Superintelligence) may even become an evil ‘Golden Calf’ worshipped by those who “have absorbed the fantasy of AGI”, indicating perhaps that it’s not a truly virtuous intelligence.

"In order to form the effigy of the calf, Al-Samiri convinced the people to melt down the gold valuables they were carrying, in effect destroying their cultural artefacts and heirlooms, based on the whims of a singular individual. Now, it is the cultural, artistic, and intellectual productions of humanity, the treasured artefacts of modern society, which are being de-materialised into undifferentiated amalgamations of tokens, and re-cast into the LLMs, foundation models, and generative AI systems costing billions of dollars to produce. Like al-Samiri and the people he led astray, the empty vocalisations and autonomous operations of these systems is leading to the veneration of the machine by those who have absorbed the fantasy of AGI into their hearts and those intoxicated by the allure of acceleration." - Yaqub Chaudhary, a Muslim scholar

The Future and the Artificial: An Islamic Perspective


Here Pope Francis says that there is an unbridgeable gap between humans and AI: AI systems will always be fragmentary, and therefore we can never really reach AGI or ASI.

"To date, there is no single definition of artificial intelligence in the world of science and technology. The term itself, which by now has entered into everyday parlance, embraces a variety of sciences, theories and techniques aimed at making machines reproduce or imitate in their functioning the cognitive abilities of human beings. To speak in the plural of “forms of intelligence” can help to emphasize above all the unbridgeable gap between such systems, however amazing and powerful, and the human person: in the end, they are merely “fragmentary”, in the sense that they can only imitate or reproduce certain functions of human intelligence. The use of the plural likewise brings out the fact that these devices greatly differ among themselves and that they should always be regarded as “socio-technical systems”. For the impact of any artificial intelligence device – regardless of its underlying technology – depends not only on its technical design, but also on the aims and interests of its owners and developers, and on the situations in which it will be employed."

"Artificial intelligence, then, ought to be understood as a galaxy of different realities." - Pope Francis

Pope Francis, ‘World Day of Peace’, 2024


Here Swami Chidananda, the famous Guru, says that words can’t really hold truth or serve as conduits of intelligence very well. Even multi-modality would yield only superficial thoughts; true human intelligence goes much deeper.

"The wise old saying goes, “Sow a thought and reap an action. Sow an action, reap a habit. Sow a habit and reap a character. Sow a character and reap a destiny.” This saying, however, has a basic limitation. It makes thought the basis of character building. Thought, no doubt, have great power but true intelligence is deeper and subtler than thought. Thoughts and words are, at best, a great attempt to describe truth. They really cannot hold truth. That is the main difficulty."

"If the most sublime things like truth, intelligence, goodness and love could be held in the grasp of “thoughts and words”, all our educational and religious institutions would have succeeded in producing saints and visionaries en masse. All that glitters is not gold."

- Swami Chidananda Saraswati, 'The Equanimous Mind', circa 2007

So behind the scenes of this quick rise to AGI, there’s a lot of skepticism. AGI is quite a mountain to climb, because the human brain has evolved over millions of years. And how exactly do we know when we get to AGI if we can’t even properly define it? And what of this recent quote, where the Google AI Studio lead seems to say we may not even know when we get there?

“We are still going to get AGI, but unlike the consensus from 4 years ago that it would be this inflection point moment in history, it’s likely going to just look a lot like a product release, with many iterations and similar options in the market within a short period of time (which fwiw is likely the best outcome for humanity, so personally happy about this).” - Google AI Studio PM, Logan Kilpatrick

It's likely smart to maintain a healthy skepticism about even the possibility of attaining AGI and ASI.

Top comments (3)

Paul SANTUS

Thanks, that's a very interesting perspective!

Pope Francis has made some interesting statements. First, he says we are "tempted to draw general, or even anthropological, deductions from the specific solutions [AI] offers":

An important example of this is the use of programs designed to help judges in deciding whether to grant home-confinement to inmates serving a prison sentence. In this case, artificial intelligence is asked to predict the likelihood of a prisoner committing the same crime(s) again. It does so based on predetermined categories (type of offence, behaviour in prison, psychological assessment, and others), thus allowing artificial intelligence to have access to categories of data relating to the prisoner’s private life (ethnic origin, educational attainment, credit rating, and others). The use of such a methodology – which sometimes risks de facto delegating to a machine the last word concerning a person’s future – may implicitly incorporate prejudices inherent in the categories of data used by artificial intelligence. Being classified as part of a certain ethnic group, or simply having committed a minor offence years earlier (for example, not having paid a parking fine) will actually influence the decision as to whether or not to grant home-confinement. In reality, however, human beings are always developing, and are capable of surprising us by their actions. This is something that a machine cannot take into account.

(see Address to G7 session on Artificial Intelligence)

Quite often, we tend to say "ok, the technology is neutral, it's up to the user to use it wisely". Pope Francis disagrees and has repeatedly called for the development of a new "algo-ethics" field:

mere training in the correct use of new technologies will not prove sufficient. As instruments or tools, these are not “neutral”, for, as we have seen, they shape the world and engage consciences on the level of values.

As even "open source models" are more "open-weight" than really "open source + open dataset", I would tend to agree with him!

Alexey Vidanov • Edited

The debate around AGI and Superintelligence is as complex as it is fascinating, highlighting deep philosophical, technical, and ethical questions. The varying perspectives—ranging from cautious optimism to outright skepticism—show that the path to AGI is far from universally agreed upon.

The idea of a “Master Algorithm” or singular AGI feels like an oversimplification of intelligence, which is inherently multifaceted, as both AWS’s Matt Garman and Meta’s Mark Zuckerberg suggest. Their views align with the practical approach of leveraging diverse models tailored to specific tasks, reflecting the current state of AI as a tool rather than an all-encompassing entity.

Interestingly, religious and philosophical leaders add valuable dimensions to the discourse, emphasizing the spiritual and moral considerations often overlooked in purely technical conversations. Metropolitan Kliment of the Russian Orthodox Church offers a balanced perspective, stating,

If we set aside extreme positions, it becomes clear that there is a need to thoughtfully develop a reasonable, cautious, and at the same time pragmatic view of the problem.

He further emphasizes that while Orthodox thought sees AI as a tool for human creativity,

Orthodox thought sees the problem much more broadly than any secularized approach because it thinks in the fundamental categories of the relationship between God and man, in the context of man’s purpose to creatively transform the material world. This has already found reflection in some aspects of our understanding of AI.

His critique of blind techno-optimism as akin to the hubris of the builders of the Tower of Babel serves as a sobering reminder of the need to maintain humility and perspective in technological pursuits.

it is categorically unacceptable to declare AI a moral subject, a personality that independently makes decisions and bears ethical responsibility for them.

Advocates of boundless scientific and technological growth are confident that humanity has much to learn from machines, that the advantages of AI are the key to the survival of aging humanity. In these voices, there is so much disbelief in the capabilities of the human person, who is endowed with God-likeness, so much arrogance and shortsightedness, that one involuntarily recalls the builders of the Tower of Babel.

Metropolitan Kliment also highlights a fundamental challenge in the AI discourse:

The primary problem in the context of AI is not the technology itself, but the development of a unified terminology in this field.

Without a clear, universally accepted vocabulary, discussions around AGI and its potential implications risk becoming fragmented and incoherent. This linguistic gap underlines the importance of aligning definitions and frameworks to avoid misinterpretations and ensure productive dialogue.

Similarly, reflections from scholars like Moshe Koppel and Yaqub Chaudhary challenge the assumption that AGI is inevitable or universally beneficial, questioning whether our pursuit of AGI risks creating “golden calves” that could mislead society.

In this context, perhaps the real question is not just whether AGI is attainable but whether it’s the right goal for humanity. As Metropolitan Kliment implies, shouldn’t the focus remain on using AI to augment human potential while preserving the core values, ethical frameworks, and clarity of purpose that define us?

Juan Taylor • Edited

AGI is probably tied in with the concept of Wisdom, so whether it's the right goal for humanity depends on that. The right vocabulary is crucial, as Metropolitan Kliment says, and that's probably part of the chaos we're seeing now.

"there is so much disbelief in the capabilities of the human person, who is endowed with God-likeness"

Exactly!