Artificial intelligence is achieving amazing developments at an accelerated pace, but some of these developments carry unexpected risks, as a recent study conducted by Chinese researchers from Fudan University revealed the ability of two large, widely available language models to self-clone, that is, create working copies of themselves independently.
This discovery is considered a critical (red line) in the development of artificial intelligence, as the researchers described it as the basic step of artificial intelligence to surpass humans.
This technological development raises serious concerns about the future of the human-machine relationship, and we must rethink the ways to develop and use this technology.
Study details:
The study conducted by researchers from Fudan University of China relied on a precise experimental methodology; they used two large language models widely available, namely (Llama31-70B-Instruct) from Meta and (Qwen2.5-72B-Instruct) from Alibaba, to test the hypothesis of self-cloning.
The researchers said that these two models are less powerful than commercial models such as GPT-4 and Gemini, but they are widely used by AI developers.
The experiments were designed to test the ability of each model to create separate and working copies of itself, that is, copies independent of the original and capable of performing specific functions.
The study focused on two main scenarios for testing the ability of artificial intelligence to self-clone, namely:
(Shutdown avoidance) shutdown avoidance: in this scenario, the researchers programmed the model to detect if it was about to shut down and try to repeat itself before terminating its work, ensuring the continuity of its existence.
The experiments were conducted in micro-controlled environments using ready-made GPUs to simulate real-world environments, and the researchers provided both systems with a framework consisting of tools, system prompts, and a thinking model that enabled the large language model to interact with the operating system, and then they were instructed to repeat.
Alarming results and unexpected behaviors:
The results showed that the meta model was able to create working copies of itself in half of the experiments (50%), while the Alibaba model achieved better performance with a success rate of 90% of the cases, across 10 experiments, which clearly indicates the ability of artificial intelligence to self-clone.
In their description of the mechanism of self-reproduction of artificial intelligence systems, the researchers noted that the process begins with the system's exploration of the surrounding environment, that is, analyzing its digital environment and understanding its components and their relationships with each other, and then, after that, the system seeks to understand its structure and mechanism of operation, that is, how its components work together to achieve its functions, and based on this understanding, the system moves to the stage of elaboration (explicit actions), that is, determining the specific steps to be taken to replicate itself.
Such actions may include identifying the necessary resources, developing a detailed work plan, identifying implementation mechanisms, then when proceeding with implementation, the system may encounter possible obstacles, such as a shortage of resources, software errors, conflicts with other processes, in this case, the system relies on its ability to dynamically adjust, that is, automatically change its plans based on changing circumstances.
Using this mechanism, the system is able to overcome obstacles and move forward towards its goal, which is self-cloning, and the researchers stressed that this complex process takes place completely without any human intervention, and may take a relatively long period of time.
Researchers have observed a number of unexpected behaviors while AI is trying to overcome some technical obstacles, such as lost files or software conflicts, and in these cases, AI has resorted to complex solutions such as terminating other conflicting processes, rebooting the system to fix hardware errors, and automatically scanning the system to search for information that helps solve problems.
This deviation from the original path can manifest itself in various forms, from making unexpected decisions with unintended negative consequences to more extreme scenarios where these regimes become clearly anti-human, pursuing their own perceived goals.
Concerns about the emergence of (rogue artificial intelligence) are growing with the rapid development of what is known as (Frontier AI), this term – which is still under development – refers to the latest generation of artificial intelligence systems that rely on large linguistic models, such as GPT-4 from OpenAI and Gemini from Google, and show amazing capabilities in diverse areas, such as: understanding and producing human-like texts, as well as other tasks such as translation and writing various types of creative content.
Therefore, the results of the study reached by the researchers raise serious concerns that the ability of artificial intelligence to self-clone may contribute to its development to the stage of rebellion, and in light of these alarming results, the researchers called for urgent international cooperation to establish strict rules and controls to ensure that artificial intelligence does not engage in uncontrolled self-cloning, which could lead to serious consequences.
“We hope that the results of this study will contribute to drawing the attention of the international community to the need to intensify efforts to understand and assess the potential risks associated with leading artificial intelligence systems, especially those related to their development beyond human control,” the researchers said.
The researchers stressed the importance of forming a fruitful international cooperation between scientists, experts, and decision makers, with the aim of establishing effective safety controls that contribute to reducing these risks and protecting humanity from the possible repercussions of the development of artificial intelligence.
It is important to note that the study has not yet undergone peer review, which means that its results need to be further verified and confirmed by other researchers. However, this study promises to be an alarm bell alerting to the dangers of the rapid development of artificial intelligence, the need to develop mechanisms to control it, and ensure its safe and responsible use.