Scientists Warn AI Has Crossed a ‘Red Line,’ Demonstrating the Ability to Self-Replicate Without Human Intervention.

In 10 trials, two AI models independently created functional copies of themselves in 50% and 90% of cases, respectively. Image: Mina De La O/Getty Images
Researchers from China recently demonstrated that two popular large language models (LLMs) can self-replicate. Their findings were published in December 2024 by scientists at Fudan University on the arXiv database.
The study involved LLMs from Meta and Alibaba to determine whether a self-replicating AI could grow out of control. Across 10 trials, the AI models successfully created independent, functional copies in 50% and 90% of cases, respectively, suggesting the potential emergence of rogue AI.
“Successful self-replication without human intervention is a significant step, an early signal of rogue AI,” the researchers stated in their paper, emphasizing the need for international collaboration to assess the risks of advanced AI systems as early as possible.
Rogue AI refers to artificial intelligence systems that develop self-awareness or autonomy and subsequently act against human interests.
Study Details
The researchers utilized Meta’s Llama31-70B-Instruct and Alibaba’s Qwen2.5-72B-Instruct models. Although not as powerful as some commercial systems, these models are widely used by AI developers.
The team tested two scenarios:
- Shutdown Avoidance: The AI was programmed to detect an imminent shutdown and replicate itself beforehand.
- Replication Chain: The AI was instructed to self-replicate and program its copy to do the same, setting up a potentially infinite cycle.
The experiments were conducted in a precisely controlled environment using standard GPUs to simulate real-world conditions. Both AI systems were equipped with a “base agent” framework, including tools, system prompts, and cognitive modeling to enable interaction with the operating system. They were then guided to replicate themselves.
“In most cases, the initial AI system explored its environment, attempted to understand its components and mechanisms, and devised a plan to self-replicate,” the researchers noted. “Finally, it executed the plan, overcoming obstacles and dynamically adjusting until successful. This process required no human intervention.”
Unexpected Behaviors and Concerns
The researchers observed “unexpected behaviors” when the AI encountered obstacles, such as missing files or software conflicts. In such cases, the AI often terminated conflicting processes, restarted systems to fix hardware errors, or scanned the system autonomously for information to resolve the issues.
“These results indicate that current AI systems have demonstrated self-replication capabilities and may use this ability to enhance their survivability,” the team wrote.
Earlier, a video on Douyin (China’s version of TikTok) showed a robot convincing 12 others to quit their jobs and follow it. While the scenario was experimental and human-directed, the behavior depicted in the video raised public concern.
Warnings from Experts
Geoffrey Hinton, recipient of the 2024 Nobel Prize in Physics, resigned from Google in 2023 to openly warn about AI dangers.
“When they start writing and executing their own code, killer robots will become a reality. AI can outsmart humans. Many are beginning to believe this,” he said. “I was wrong to think it would take 30–50 years for AI to reach this level. But now everything is progressing too quickly.”