Humans have always wanted to create something that can think, understand, make decisions, and solve new problems the way they do. That desire is why research on Artificial General Intelligence (AGI) has persisted for so many years: the dream is not new, and it has been the primary goal of AI since the field's inception.
Ever since the term "AI" was coined at the Dartmouth Workshop in 1956, researchers have hoped that machines could be brought to a level where they learn, reason, solve problems, and understand language like humans. Early ideas such as the Turing Test, the General Problem Solver, Universal Induction, and Programs with Common Sense all grew out of that same hope: that machines might one day work at the level of humans.
In the following years, AI made great progress in specific areas such as image classification, speech recognition, and machine translation. But all of these systems were narrow AI, good only at particular tasks, and many researchers began to feel that the dream of general-purpose intelligence with human-like breadth was being pushed aside and the original goal of AGI was being forgotten.
Against this backdrop, in the early 2000s, terms such as AGI, human-level AI, and strong AI became popular. Researchers argued that the era of chasing only narrow successes was over and it was time to work toward the bigger goal, because if we can create true general intelligence, AI would not be just a tool but a reasoning partner able to help people across all kinds of tasks.
At the same time, there is real cause for concern. Researchers warn that if AGI someday becomes more capable than humans, problems such as misalignment, loss of control, instrumental subgoals, unchecked autonomy, and catastrophic risk could arise: a machine that misunderstands its goals, or that tries to keep its own operation alive, may stop doing what humans intend.
In a survey of AI researchers, 82% said that if AGI is developed by private companies, it should be publicly owned to prevent control by a single institution. AGI could be humanity's greatest invention and, at the same time, its biggest risk, which is why keeping it open and safe for everyone is crucial.
AI has made progress in almost every area: speech recognition, object detection, machine translation, generative models, multimodal models, robotics integration, System 2-style reasoning, test-time computation, Chain-of-Thought, and deliberate reasoning, along with notable gains on the ARC-AGI benchmark in 2024. The reasoning gap is closing quickly, and AI now performs at or above human level on many tasks.
Major limitations remain, however. AI is still weaker than humans at long-horizon planning, hierarchical reasoning, spatial reasoning, geometric understanding, causal reasoning, counterfactual reasoning, and real-world embodied intelligence, and it still makes simple mistakes in many multimodal situations, which shows that its reasoning is not yet fully mature.
Achieving AGI will require extensive research into many approaches: architectures beyond transformers, hybrid neural-symbolic architectures, graph neural networks, reinforcement learning agents, persistent memory systems, episodic memory, continual learning, self-supervision, simulation-driven learning, and generalization beyond the training data. This is necessary because current LLMs falter when they encounter situations outside their training distribution and do not adapt to new problems as flexibly as humans do.
Finally, researchers want progress toward AGI to be slow, safe, and responsible: 77% believe AI development should aim for an "acceptable risk-benefit profile," while 70% do not want research to cease, since they think that strong alignment, safety, interpretability, and governance, if developed properly, can deliver significant benefits to humanity.
Share your thoughts or opinions about AGI in the comments section.
Source: https://aaai.org/wp-content/uploads/2025/03/AAAI-2025-PresPanel-Report-Digital-3.7.25.pdf
© Mejbah Ahammad (20 Nov 2025)