Fears of 'rebellious' AI robots

After the viral video of the persuasive little robot

After a video of a small robot convincing its robot "teammates" to quit their jobs, many people are concerned that AI-integrated robots could rebel and make their own decisions.

Last weekend, a viral video on Douyin, the Chinese version of TikTok, showed a small robot luring 12 other robots away from their work to follow it. Although the footage was experimental and the robot's behavior was directed by humans, what happened in the video still frightened many viewers.

"The video is staged by human control but there is still some concern. Many Hollywood science fiction movies from 20-30 years ago have now become reality. Will The Terminators, The Transformers appear in the near future?" , reader Duc Tuan commented.


The T-800 robot, a prop from the movie The Terminator, on display at the Deutschland Digital Exhibition 2023 in Germany. (Photo: Reuters).

"Many people are not aware of the dangers of AI. This is just the beginning, three years later things could go much further," another reader wrote.

In fact, there have been a number of cases where robots have killed people. The Independent reported in 2015 that a rogue robot was believed to have caused the death of a 57-year-old female engineer at an auto parts factory in Michigan, USA. According to her husband, who also worked at the factory, she was "trapped under the automated machine".

According to a Pew Research survey of more than 11,000 Americans conducted in August, 52% said their concern about AI's impact on their lives outweighed their excitement, up from 37% in a comparable 2021 survey.

Why are people "afraid" of autonomous robots?

According to Human Protocol, the idea of AI and robots taking over the world has been a driving force in science fiction for decades. Films such as The Terminator, The Matrix, and I, Robot depict robots as a terrifying force that can spontaneously evolve and become super-intelligent, posing a grave threat and even taking over and destroying humanity.

"This narrative has been fueled by the media and popular culture, creating deep concerns about AI in our consciousness," Human Protocol commented.

However, scholars believe that the human fear of inanimate objects suddenly "coming to life" appeared long before Arnold Schwarzenegger played a killer robot traveling back in time to threaten Sarah Connor in 1984's The Terminator.

" Stories of inanimate creatures attacking humans date back to ancient Greece , " Adrienne Mayor, a historian of ancient science at Stanford University and author of the 2018 book Gods and Robots , told CBC . " People often make comparisons between the far-sighted and the near-sighted of artificial life ."

Mayor also said that the rise of advanced AI products like ChatGPT, combined with a wave of humanoid robots, has caused a lot of concern. "These technologies tend to access incredibly large and complex data and then make decisions based on that data," Mayor said. "Neither the data creator nor the end user will know how the AI made those decisions."

Citing previous research, Forbes said that people can experience more than 500 different phobias. While there is no officially named phobia of artificial intelligence, there is a widely recognized condition called algorithmophobia, an irrational fear of algorithms.

One of the most talked-about risks of late is AGI, or artificial general intelligence, often equated with superintelligence, which could replace humans in a wide range of tasks. Unlike conventional AI, such a system could learn and replicate itself. According to Fortune, AGI is predicted to become aware of what it says and does. In theory, it is this prospect that frightens people, especially when such intelligence is combined with physical machines.

"Military competition over autonomous weapons is the most obvious example of how AI can kill people," David Krueger, an associate professor and AI researcher at the University of Cambridge, told Fortune. "A scenario of all-out war with AI-powered machines at its core is very likely."

According to a Stanford University survey in April, about 58% of experts rated AGI as a "major concern," and 36% said the technology could lead to a "nuclear-level catastrophe." Some said AGI could represent the so-called "technological singularity": a hypothetical point in the future when machines surpass human capabilities in a way that is irreversible and could threaten civilization.

Current capabilities of AI robots

'But these fears are being amplified by AI, as machines become exponentially smarter and increasingly begin to apply their intelligence in a human-like manner,' Andy Hobsbawm, president of UseLoops, a UK-based research firm, said in a blog post. 'As AI continues to develop and take on tasks once considered uniquely human, the line between human and machine capabilities is becoming increasingly blurred. This blurring raises profound questions about the nature of intelligence and what it means to be human.'

However, Hobsbawm believes this terrifying prospect is still far off, because the current capabilities of AI and robots are limited to drawing on existing human knowledge and synthesizing it. They are also usually developed as specialized systems, built to serve specific needs and purposes.

In a report titled The Risks of Artificial Intelligence, published in mid-year by global management consulting firm McKinsey, the immediate issue is not whether robots will rebel, gain the ability to rebel, or make decisions on their own, but whether they will do exactly what humans tell them to do.

"Instead of fearing human-like machines, we should be wary of inhuman machines, because part of the downside of specialized robot intelligence is that it is extremely linear, meaning it lacks the rationality, adaptability, and common sense to behave appropriately," McKinsey commented.

Last week, for example, the Gemini chatbot insulted Vidhay Reddy, a 29-year-old student at the University of Michigan, when he asked for help with an assignment. Google's AI went so far as to call him a "stain on the universe" and tell him to "die." As chatbots are increasingly used in robots to interact naturally with humans, many worry that erratic and sometimes nonsensical responses could lead to dangerous actions.

Another real risk is not that robots in the future will become conscious, but rather that hackers will infiltrate internal systems and manipulate robots to do their bidding; or that people with bad intentions will create an army of "mercenary" robots, specializing in carrying out harmful tasks.

Geoffrey Hinton, one of the pioneers of AI and recipient of the 2024 Nobel Prize in Physics, resigned from Google in 2023 so he could publicly warn about the dangers of AI. "When they start to write code and run their own code, killer robots will appear in real life. AI can be smarter than humans. Many people are starting to believe that. I was wrong when I thought it would take 30-50 years for AI to reach this level of progress. But things are changing so fast now," he said.

According to McKinsey, despite this optimistic view, humans need to be extremely careful and explicit about the instructions they give machines, and especially about the data they feed them: biased data will create biased machines, and vice versa.

Meanwhile, according to Forbes, instead of being afraid, people should learn to adapt to advances in technology, including AI and robots. At the national level, lawmakers need to gradually refine legislation in this area, ensuring that even when machines take part in a process, the final decision is still made by humans.

"Will AI take over the world? No, it's just a projection of human nature onto machines. One day, computers will be smarter than humans, but it's a long way from that , " BBC quoted Professor Yann LeCun, one of the four founders of AI and currently the AI ​​Director of Meta, in June.
