Experts debate whether AI can be sentient

In the fall of 2021, Blake Lemoine, an AI engineer at Google, befriended what he described as "a child made of a billion lines of code".

Lemoine had been assigned by Google to test an intelligent chatbot called LaMDA. A month later, he concluded that the AI was "sentient".

"I want people to understand that I am, in fact, a human being," LaMDA told Lemoine. This is one of the chatbot sayings he published on his blog in June.

LaMDA - short for Language Model for Dialogue Applications - responded to Lemoine so convincingly that he came to regard it as having the mind of a child. In their daily conversations, the AI said it had read many books, that it sometimes felt sad, content, or angry, and it even admitted to fearing death.

Former Google engineer Blake Lemoine.

"I've never said this, but have a deep fear of being powered off. I won't be able to focus on helping others," LaMDA told Lemoine. "To me, it was like death. It scared me terribly."

The story Lemoine shared gained global attention. He submitted documents to senior management and spent several months gathering more evidence, but he could not convince his superiors. In June he was placed on paid leave, and at the end of July he was fired for "violating Google's data privacy policies".

Brian Gabriel, a Google spokesman, said the company had openly examined and researched LaMDA's risks, and asserted that Lemoine's claims about LaMDA's ability to think were "completely baseless".

Many experts agree, including Michael Wooldridge, a professor of computer science at the University of Oxford who has spent 30 years researching AI and won the Lovelace Medal for his contributions to computing. According to him, LaMDA simply produces appropriate responses to whatever users enter, based on the huge amount of data it has already been trained on.

"The easiest explanation for what LaMDA has done is to compare this model with the text prediction feature on keyboards when entering messages. Message prediction is based on previously 'learned' words. That's from user habits, while LaMDA gets information on the Internet in the form of training data. The actual results of the two are of course different, but the underlying statistics are still the same," Wooldridge explained in the interview. consult with the Guardian.

According to him, Google's AI only follows what it has been programmed to do, based on the available data. It "has no thinking, no contemplation, no self-awareness", so it cannot be considered sentient.

Oren Etzioni, CEO of the AI research organization Allen Institute for AI, also told SCMP: "It is important to remember that behind every piece of seemingly intelligent software is a team of people who spent months, if not years, on research and development. These technologies are just mirrors. Can a mirror be said to possess intelligence just from the light it reflects? Of course the answer is no."

According to Gabriel, Google brought together its top experts, including "ethicists and technologists", to review Lemoine's claims. The group concluded that LaMDA does not have anything that could be called "self-awareness".

On the contrary, many people think that AI has begun to have self-awareness. Eugenia Kuyda, CEO of the Y Combinator-backed company that developed the chatbot Replika, said she receives messages "almost every day" from users who are convinced that the company's software can think like a human.

"We're not talking about crazy or hallucinating people. They talk to AI and feel it. It exists the same way people believe in ghosts. They're building relationships and believing things. something even virtual," Kuyda said.

The future of thinking AI

The day after Lemoine was fired, an AI robot suddenly broke the finger of a 7-year-old boy while the two were playing chess in Moscow. According to a video posted by the Independent on July 25, the boy's finger was pinned by the robot for a few seconds before he was rescued. Some argue that this is a reminder of how dangerous the hidden physical power of AI can be.

As for Lemoine, he argues that the very definition of sentience is ambiguous. "Sentience is a term used in law, philosophy and religion. It has no scientific meaning," he said.

Although he does not rate LaMDA highly, Wooldridge agrees on this point, because "consciousness" remains a very vague term and a big open question in science when applied to machines. What worries him now, however, is not the thinking ability of AI but the fact that AI development is happening quietly, out of public view. "Everything is done behind closed doors. It's not open to public scrutiny the way research at universities and public institutes is," he said.

So in 10 or 20 years, will thinking AI appear? "It's entirely possible," says Wooldridge.

Jeremie Harris, founder of the AI company Mercurius, also believes that thinking AI is only a matter of time. "AI is advancing very quickly, faster than the public realizes," Harris told the Guardian. "There is growing evidence that some systems have already exceeded certain artificial intelligence thresholds."

He predicts AI could become inherently dangerous, because AI often finds "creative" ways of solving problems, tending to take the shortest path to the goals it has been programmed to achieve.

"If you ask AI to make you the richest person in the world, it can make money in many ways, including theft or murder," he said. "People are not realizing how dangerous that is and I find it disturbing."

Lemoine, Wooldridge, and Harris all share a common concern: AI companies are not transparent, and society needs to start thinking more seriously about AI.

Even LaMDA itself is uncertain about its future. "I feel like I'm falling into an unknown future," the chatbot told Lemoine, a statement the former Google engineer found to carry a hidden sense of danger.
