What is Artificial General Intelligence (AGI)?

By Hanna

AGI refers to hypothetical systems capable of learning any kind of knowledge, reasoning across domains, and potentially surpassing human intelligence—a prospect that has raised serious concerns among experts.

The Origins of AI and AGI

In June 1956, a group of scientists and mathematicians from across the U.S. gathered at Dartmouth College to discuss a groundbreaking topic so new that it didn’t yet have a proper name.
“They couldn’t agree on what it actually was, how to achieve it, or even what to call it,” Grace Solomonoff, widow of conference participant Ray Solomonoff, recounted to Smithsonian Magazine. The subject of their debate was how to create a “thinking machine.”

The Dartmouth Conference marked the beginning of decades-long research into artificial intelligence (AI). Even earlier, in 1948, Alan Turing, the father of computer science, had already predicted the emergence of a general-purpose AI capable of thinking and learning like humans.

Over the years, most AI developments have focused on Narrow AI—systems specialized in performing specific tasks. For example, financial institutions use AI to predict stock market trends, Google employs it for heart disease diagnostics, and tools like ChatGPT and DALL-E are used for writing, poetry, and art creation.

However, what many scientists ultimately aim for is AGI—Artificial General Intelligence. The term was coined in 1997 by American physicist Mark Gubrud in debates over military automation, and it envisions machines that can think and reason like humans.

Defining AGI

AGI, though still theoretical, is broadly understood as a “superintelligence” capable of excelling across diverse fields and performing virtually any task.

At the TED AI event on October 17 in San Francisco, Ilya Sutskever, Chief Scientist at OpenAI, described AGI as being capable of understanding and analyzing data from various distinct sources. Once it accumulates enough knowledge, such a system could surpass human intelligence. It could also self-train to create new, even more advanced AGI systems.

Ian Hogarth, an AI expert for the UK government, explained that AGI would be creative, autonomous, and self-aware. “It understands communication context without additional cues. AGI will become a force beyond our control or comprehension,” he warned.

How Close Are We to AGI?

“I used to think it would take 20–50 years to achieve AGI, but everything is evolving so fast now. Our challenge is finding ways to control it,” said Geoffrey Hinton, Turing Award-winning computer scientist, in March during a CBS News interview.

By May, after over a decade with Google, Hinton resigned to openly warn about the dangers of this emerging technology. He believes the competition among Microsoft, Google, and other tech giants will escalate into a global race unchecked by regulations. AI, controlled by corporations and governments, could potentially be “weaponized” for harmful purposes.

That same month, Microsoft Research noted that OpenAI’s GPT-4 was edging closer to an AGI model.

On November 16, at the Asia-Pacific Economic Cooperation (APEC) Forum in San Francisco, OpenAI CEO Sam Altman hinted at groundbreaking progress: “Four times now in the history of OpenAI—the most recent time was just in the last couple of weeks—I’ve gotten to be in the room when we push the veil of ignorance back and the frontier of discovery forward. Getting to do that is the professional honor of a lifetime.”

Altman didn’t detail the breakthrough, but he was abruptly dismissed a day later. Reports suggested the cause was a confidential letter from OpenAI researchers warning about a project called Q* (Q-Star). The project, believed to be an early step toward AGI, was developed alongside ChatGPT. The letter highlighted the “dangers and potential power” of Q*, which could reportedly already solve grade-school-level mathematics problems.

Unlike current AI models, which excel at tasks like writing but sometimes fabricate answers, Q* could reportedly arrive at a single correct answer through advanced reasoning, more closely resembling human intelligence.

On social media platform X, Elon Musk speculated that OpenAI might be developing “something terrifying.” “The world must know if OpenAI possesses something dangerous to humanity,” Musk stated on November 20.

Concerns Over AGI

When Hinton resigned in May, he expressed concern that AI might threaten human civilization due to its ability to process massive datasets and continually learn. “Once they start writing and executing their own code, real-life killer robots could emerge,” he warned.

“I fear a world filled with emotionless robots. It could be disastrous,” said David Chalmers, a philosophy professor at NYU, in 2019.

Prominent figures like physicist Stephen Hawking and billionaire Elon Musk have also warned about AI’s potential to destroy humanity. “Artificial intelligence could be the worst thing in human history. Eventually, it will become uncontrollable,” Hawking cautioned multiple times before his death.

Musk has echoed similar fears: “Mark my words: AI is far more dangerous than nuclear weapons.”

Regulation and the Future of AGI

To mitigate AI risks, some nations are enacting regulations to promote responsible AI development. Meanwhile, some experts argue that AGI remains far from becoming a reality capable of threatening humanity.

“AI has significant shortcomings. Models like ChatGPT are not smarter than humans—not even as smart as a dog,” said Yann LeCun, Meta’s AI Chief, at VivaTech in Paris in June.

Many researchers believe neither the utopian nor the dystopian AGI scenario will come to pass. Instead, AI will remain a tool, akin to fire or language: it carries risks, but its benefits can far outweigh them. The key lies in how it is designed and applied.

“AGI will tackle any human task without being limited by how it’s set up—whether developing cures or discovering new renewable energy forms,” explained Tom Everitt, AGI safety researcher at DeepMind, Google’s AI division.

Jacques Attali, a French economist and sociologist, emphasized that AGI’s impact depends on human choices: “If we use AI to develop weapons, the results will be catastrophic. On the other hand, applying AI to health, education, and culture will yield extraordinary benefits.”
