When might artificial intelligence overtake us?



The American political scientist Francis Fukuyama predicted in 1989 that we were on the cusp of the end of history, believing that liberal democratic values would prevail across the world. More than three decades later, his prediction has not been fully realized. It is clear, however, that history is heading toward another, completely different endpoint, one centered on artificial intelligence.

Many scenarios could bring about the end of humanity: a devastating global pandemic, a giant meteorite crashing into the Earth, or a climate disaster spiraling out of control. All of these are existential threats that could destroy human civilization. But another threat is looming on the horizon: artificial intelligence. This sweeping technological development, initially seen as a tool to improve human lives, is now viewed with concern and fear.

The threat of artificial intelligence resembles the threat of climate change: both creep up on us slowly, yet both carry existential risk. Many experts and scientists fear that artificial intelligence may surpass human intelligence and become capable of making fateful decisions without human oversight or control. This scenario, once dismissed as science fiction, is now the subject of serious discussion among scientists, thinkers, and policymakers.

In this report, we therefore highlight the opinions of the most prominent scientists and thinkers on the future of artificial intelligence: how machines could surpass human intelligence, how worried we should be, and what we can do to prevent it.

Artificial intelligence: between cautious optimism and existential danger

In 2022, Sam Altman, CEO of OpenAI, made a striking statement about artificial intelligence, pointing to both the bright and the potentially frightening sides of this technological development. Altman described the positive possibilities as so “incredibly good” that “you sound like a really crazy person when you start talking about it,” referring to the technology's potential to drive huge advances across many fields.

On the other hand, he warned that the dark side of artificial intelligence could mean “turning off the lights on all of us,” that is, it could pose an existential threat to all of humanity.

Geoffrey Hinton, the computer scientist widely regarded as one of the “godfathers” of artificial intelligence, has voiced fears about the technology's rapid, unsupervised development, estimating a 10% to 20% probability that artificial intelligence will lead to the extinction of the human race within the next thirty years. Such an assessment carries particular weight coming from a prominent scientist intimately familiar with the details of the field.

Early warnings of the dangers of artificial intelligence:

Sam Altman and Geoffrey Hinton were not the first to voice concern about the potential dangers of artificial intelligence. From the field's very beginnings, there has been a fear that artificial intelligence would surpass human capabilities and slip beyond our control.

In 1950, Alan Turing published the scientific paper widely considered the first academic work on artificial intelligence, and just a year later he made a prediction that still resonates today: once machines can learn from experience as humans do, it will not take long for them to overtake us, so we should expect machines to take control.

In 1970, Marvin Minsky, one of the founders of the field of artificial intelligence, predicted in an interview with LIFE magazine that the limited human mind might not be able to control a super-intelligent AI. Minsky feared that artificial intelligence would exceed the human capacity to understand it, ultimately leading to a loss of control over it.

But how can machines surpass human intelligence?

Irving John Good, a mathematician who worked with Alan Turing at Bletchley Park during World War II, predicted how this would happen. Good called the process the “intelligence explosion”: the stage at which machines become smart enough to start improving themselves on their own.

This concept is now more commonly known as the Singularity. Good predicted that the singularity would lead to the emergence of an ultra-intelligent machine, which he famously described as “the last invention that man need ever make.”

When machine intelligence may surpass human intelligence

The central question remains: when exactly will machine intelligence surpass human intelligence? This is highly uncertain. But given recent advances in large language models such as GPT and Gemini, which have now reached the stage of models that “think” before answering, many worry that the moment may be very close. To make matters worse, we are speeding up the process with our enormous investments in the field.

What is surprising about the development of artificial intelligence today is the speed and scale of change: almost a billion US dollars are invested daily in the field by giant companies such as Google, Microsoft, Meta, and Amazon, representing about a quarter of total R&D spending worldwide.

We have never seen such huge bets placed on a single technology, and as a result the timelines many predict for machines matching human intelligence, and then surpassing it soon after, are rapidly shrinking.

However, not everyone agrees that the Singularity is imminent and that machines will soon overtake human intelligence. Opposing voices believe it will take much longer. Yann LeCun, the French computer scientist who is one of the three “godfathers” of artificial intelligence and now chief scientist at Meta, believes it will take years, if not decades. Others, including Gary Marcus, professor emeritus at New York University, predict it could happen anywhere from 10 to 100 years from now.

But once artificial intelligence reaches the level of human intelligence, it would be an illusion to think it will not surpass it; after all, human intelligence is just an evolutionary accident. Just as we can design systems that outdo nature, such as aircraft that surpass birds in speed and flying ability, there are many reasons why artificial intelligence could end up better than biological intelligence, including:

Speed of computation: computers can perform calculations orders of magnitude faster than humans.

Memory: computers have a vast memory that cannot be compared to human memory, and nothing in it is ever forgotten.

Specialization: in specific domains such as playing chess, analyzing X-rays, and predicting protein folding, computers already outperform humans.

This raises the question: how could super-intelligent computers eliminate us?


The answer is unsettlingly vague. Those who fear super-intelligent AI often argue that we cannot know precisely how it would threaten our existence: how can we predict the plans of something smarter than us? It is like asking a dog to imagine the end of the world in a thermonuclear war.

However, some possible scenarios have been put forward for how machines might take control over us, potentially leading to the end of humanity, including:

Infrastructure destruction: artificial intelligence could identify vulnerabilities in critical infrastructure, such as energy grids or financial systems, and attack them to bring society down.

Designing deadly epidemics: Artificial intelligence may design new, highly lethal, and rapidly spreading pathogens, leading to a pandemic that wipes out humanity.

Self-reproducing nanomachines: Eliezer Yudkowsky proposes a more fantastical scenario, in which artificial intelligence creates self-reproducing nanomachines that infiltrate the human bloodstream and release deadly toxins.

These scenarios require AI systems to be given the ability to act in the real world, and that is exactly what companies like OpenAI are doing: actively developing AI agents capable of performing various tasks, such as answering emails and helping with recruitment.

Giving AI control over critical infrastructure opens the door to serious risks: a hacked or mis-programmed artificial intelligence could cause large-scale disasters, such as nationwide power outages, or jeopardize the safety of water systems.

The design of biological weapons using artificial intelligence also poses an existential threat to humanity: a maliciously directed artificial intelligence could exploit its capabilities in big-data analysis and biology to design deadly, infectious microorganisms that spread rapidly and cause global epidemics wiping out millions of people.

Governments have begun to respond. For example, the Australian government requires critical infrastructure operators to take concrete steps to reduce security risks, and international controls exist to prevent the proliferation of biological weapons.

However, these safeguards and controls may not be enough, as artificial intelligence systems have advanced learning and adaptation capabilities that could allow them to detect and overcome vulnerabilities in security systems.

The European Union is therefore leading efforts to regulate artificial intelligence, but the recent AI Action Summit in Paris highlighted the growing gap between those calling for stricter regulation and those, like the United States, who want to accelerate the deployment of artificial intelligence. The financial and geopolitical incentives to win the AI race may lead some parties to ignore the potential risks, which is deeply worrying.

Hence, scientists, researchers, and thinkers believe we must prepare for these challenges by placing strict controls on the development and use of artificial intelligence and by strengthening international cooperation in this field; otherwise the technology may lead to unexpected disasters, up to the point of threatening our existence on this planet.
