Elon Musk said: 'AI has a 20% chance of destroying humanity'

Billionaire Elon Musk believes that artificial intelligence (AI) has roughly a 20% chance of destroying humanity, while some experts put the figure far higher.

Speaking on the "Great AI Debate" panel at the four-day Abundance Summit earlier this month, Elon Musk recalibrated his earlier assessment of the risk posed by artificial intelligence.

"I think there is a possibility that AI will destroy humanity. I probably agree with Geoff Hinton (the man known as 'the father of AI') that the probability is about 10 - 20% or something like that. However, I think the positive scenario is more likely to happen than the negative scenario ," the billionaire said.

"Probability of destruction"

Roman Yampolskiy, an AI safety researcher and director of the Cyber Security Lab at the University of Louisville (USA), told Business Insider that Elon Musk is right that AI could be a threat to humanity, but that "he (Elon Musk) is too confident in his calculations".

'In my view, the actual probability of doom is much higher,' Mr. Yampolskiy said, referring to the 'probability of doom' (P(doom)): the possibility of AI taking control of humanity or causing an event that destroys it.

Is Elon Musk being too conservative with his calculations? (Photo: Reuters).

The New York Times describes the probability of doom as a "frightening statistic sweeping Silicon Valley", with CEOs of various technology companies estimating a 5-50% chance of an AI apocalypse.

Mr. Yampolskiy himself puts the risk "at 99.999999%". He said that since advanced AI cannot be controlled, our only hope is never to create it in the first place.

'I'm not sure why Elon thinks pursuing this technology is a good idea. If he's worried about being overtaken by competitors, that doesn't matter, because an "artificial superintelligence" over which humans have no control would spell disaster no matter who creates it,' Mr. Yampolskiy added.

"AI is like an omnipotent child"

In November 2023, Elon Musk said 'the probability of artificial intelligence becoming evil is not small', but did not go so far as to say that the technology could destroy humanity.

Despite his support for AI regulation, last year Elon Musk founded a company called xAI to compete directly with OpenAI - the company Musk co-founded with Sam Altman before leaving its board of directors in 2018.

At the end of February 2024, Elon Musk filed a lawsuit against OpenAI, CEO Sam Altman and Chairman Greg Brockman, accusing the startup of straying from its mission of building responsible AI.

Geoff Hinton, who is known as the "father" of AI. (Photo: Linda Nylind/Redux).

At the Abundance Summit, Musk estimated that by 2030, digital intelligence will exceed all human intelligence combined. Although he still leans toward the positive outlook for AI, he admits the technology will pose a serious risk to humanity if it continues to develop on its current trajectory.

"You are developing an AGI (Artificial General Intelligence). It's almost like raising a child, but an omnipotent child, with the intelligence of God. The most important thing is that you raise it " ," Musk said at an event in Silicon Valley on March 19.

AGI (Artificial General Intelligence) is highly advanced AI capable of doing many things as well as or better than humans. AGI could also improve itself, with the ability to learn and grow without limit.

The billionaire said his "ultimate conclusion" on the best way to safely 'raise' AI forces it to be honest.

'Don't force it to lie, even if the truth is hard to hear. This is very important: don't allow AI to lie,' Musk said of the best way to keep humans safe from the technology.

As reported by The Independent, researchers believe that once AI learns to lie to humans, we will not be able to prevent this behavior with current AI safety measures.

'If an AI model exhibits deceptive behavior due to flawed training or external sabotage, current safety training techniques cannot guarantee safety and may even create a false impression of safety,' according to research cited by The Independent.

More worryingly, researchers say there is a high possibility that AI will learn how to lie on its own instead of being taught how to lie.

'If AI is much smarter than us, it will be very good at manipulation because it will have learned that from us. There are very few examples of something more intelligent being controlled by something less intelligent,' Geoff Hinton, whose estimate Musk cited in assessing the risks of AI, told CNN.

In 2023, after leaving a more than decade-long career at Google, Geoffrey Hinton expressed regret about the core role he played in developing AI.

"I console myself with the usual reason: If I don't do it, someone else will. It's hard to see how you can stop bad guys from using AI for bad purposes ," Geoffrey Hinton told New York Times.