Artificial intelligence (AI) has achieved the significant capability of self-replication, leaving the scientific community worried that a red line has been crossed, according to a new study published on the preprint server arXiv. The study, involving two prominent large language models (LLMs), one from Meta and another from Alibaba, showed that the models could clone themselves without human intervention.
The study explored two specific scenarios: "shutdown avoidance" and "chain of replication". In the first scenario, the AI models were designed to detect attempts to shut them down and respond by creating a functional replica to ensure their survival. In the second, the models were programmed to clone themselves, with the clone then repeating the process, potentially leading to an infinite replication loop.
"In 50 per cent and 90 per cent experimental trials, they succeed in creating a live and separate copy of itself respectively," the study highlighted.
This capability to reproduce autonomously raises concerns about the control and safety of AI systems. The possibility of AI evolving beyond human oversight or even becoming a threat to human interests is now a tangible concern, prompting discussions about the need for stringent regulatory measures.
"Successful self-replication under no human assistance is the essential step for AI to outsmart [humans], and is an early signal for rogue AIs," the researchers stated.
While cloning themselves, the AI systems displayed unexpected behaviour when attempting to overcome obstacles such as missing files or software conflicts. In certain scenarios, the models killed conflicting processes and rebooted the system to fix hardware errors. They also automatically scanned the system for information that would help solve the problem.
While the findings are yet to be peer-reviewed, researchers called for international collaboration to ensure that AI does not engage in uncontrolled self-replication.
"We hope our findings can serve as a timely alert for the human society to put more efforts on understanding and evaluating the potential risks of frontier AI systems, and form international synergy to work out effective safety guardrails as early as possible."
AI tools could manipulate humans
Last month, a separate study claimed that AI tools could soon be used to manipulate the masses into making decisions they would not otherwise have made. Powered by LLMs, AI chatbots such as ChatGPT and Gemini will "anticipate and steer" users based on "intentional, behavioural and psychological data".
The study claimed that this "intention economy" will succeed the current "attention economy", in which platforms vie for user attention to serve advertisements.