How can AI destroy humanity?

In May, hundreds of prominent figures in the field of artificial intelligence from around the world signed a letter warning that AI could one day wipe out humanity. 'Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,' an excerpt of the letter reads.

According to the New York Times, the letter was organized by the Center for AI Safety and the Future of Life Institute and signed by many leading figures in the technology industry, including OpenAI CEO Sam Altman, DeepMind CEO Demis Hassabis, and Geoffrey Hinton, widely known as the 'Godfather of AI,' who formerly worked at Google.

Experts have repeatedly warned about the consequences of artificial intelligence. (Photo: ThinkStock).

The letter shows how concerned AI experts are about this technology: they have repeatedly warned the public about the destructive consequences artificial intelligence could bring, the news site said.

Future AI could act and think like humans

Although AI systems do not yet have the ability to destroy humanity, many people are still worried about a future in which AI becomes more and more advanced and slips beyond human control.

Such systems could do things their human operators never asked for. When humans try to intervene or shut them down, an AI could even fight back or 'clone' itself to keep operating.

'Current AI systems cannot yet wipe out humanity. But in one, two or five years, nothing is certain. That is the problem: we don't know when disaster might come,' said Yoshua Bengio, a professor at the University of Montreal.

As AI's automation capabilities grow, so does the risk that it will replace humans. (Photo: New York Times).

One scenario for out-of-control AI: users ask a machine to make as many paper clips as possible, then lose control of it as it begins turning everything, including humans, into raw material for its paper clip factory.

Experts question whether such a scenario could play out in the real world. Companies are constantly expanding AI's automation features and connecting it to physical infrastructure such as power grids, stock markets and even military weapons, so the risk of artificial intelligence causing serious harm is entirely plausible.

Some experts say the end of 2022, when ChatGPT fever spread, was the moment they became most concerned about the worst-case scenario. 'As AI gradually improves and becomes more and more autonomous, it will increasingly be able to set its own rules and think just like humans,' said Anthony Aguirre, founder of the Future of Life Institute.

The risk of AI taking over the world

At some point, society and the economy could be run by giant machines rather than humans, and humans would have no way to neutralize them.

According to the New York Times, researchers are turning chatbots like ChatGPT into systems that can perform tasks based on user-provided text, such as AutoGPT.

Currently, artificial intelligence systems still do not operate smoothly. (Photo: Independent).

Systems like AutoGPT are capable of generating their own computer programs. Once users grant access to servers, the chatbot can operate them and do almost anything online, from retrieving information to creating applications and updating them.

The limitation of these artificial intelligence systems is that they still do not operate smoothly: they easily get stuck in loops and cannot yet 'clone' themselves.

However, these shortcomings may soon be overcome. 'People are trying to build self-improving systems. They may not be able to do it now, but they will be able to in the future. We can't know what day that will be,' said Connor Leahy, founder of Conjecture.

If researchers, companies or criminals task an AI with 'making money,' it could infiltrate banking systems, instigate illegal behavior and 'clone' itself when someone tries to disable it. Many experts therefore worry that as AI becomes more advanced and is trained on ever larger amounts of data, it will exhibit more deceptive behavior.