AI can 'change things in a scary way'

Some experts think it is time for people to get serious about AI, because it could soon start changing things in a scary way.

Ajeya Cotra, an AI risk expert at Open Philanthropy, thinks AI's heyday has already begun. Two years ago, she estimated a 15% chance of an AI emerging and causing massive societal change by 2036. But in a blog post last week, after studying GPT-3, she raised that estimate to 35%.

"AI can evolve from a lovable and useless toy into a powerful product in a short time. It's time for people to get serious about AI, because it could soon change everything in a way scary," Cotra said.

Agreeing, Kevin Roose, author of Futureproof: 9 Rules for Humans in the Age of Automation, said: "For the past few days, I've been playing around with Dall-E 2, software developed by OpenAI in San Francisco that can turn text into images. I quickly became obsessed."


Image created by Dall-E 2 from the description: "Black-and-white image of a man taking a 1920s selfie".

Roose says he spent hours coming up with quirky, funny, and abstract prompts to challenge Dall-E 2, such as "a 3D drawing of a suburban house shaped like a croissant" or "charcoal sketches of two penguins drinking in a Parisian tavern". Within seconds, the AI produced an image depicting each request, which he describes as "incredibly realistic".

"What's impressive is not just the art it makes, but the way it creates art," says Roose. "This AI does not use images found or synthesized from the Internet. Instead, they are completely new creations." The creation of Dall-E 2 was accomplished through a complex process known as Diffusion. Initially, the AI ​​gathers a random series of pixels, then refines it many times until it matches the text description given by the user.

OpenAI's models have attracted a great deal of attention this year. According to leading experts, the technology is impressive and consequential for everyone, especially people working in visual fields such as painting, design, and photography. At the same time, it raises important questions, including whether AI-generated art could be put to malicious use.


How Dall-E 2 works. (Video: OpenAI, TNW)

Silicon Valley is also starting to change the way it sees AI. In the past, many experts said the technology would need a decade or more to make real progress. Now, many believe that big change is right in front of us, for better or for worse.

AI now has many real-world applications, rather than being confined to the lab as it was a decade ago. It powers content recommendation and moderation tools at Facebook, TikTok, and YouTube, appears in classroom software and banking systems, and is used by police to analyze crime.

The Transformation of AI

Over the past 10 years, a period some researchers call the "golden decade", AI has evolved at a breakneck pace. That progress has been driven by advances in related technologies such as deep learning, as well as the advent of specialized hardware for running massive, computationally intensive models.

Just five years ago, the biggest story in the AI world was AlphaGo, a deep learning model built by Google's DeepMind, surprising everyone by beating the world's best Go player. Training an AI to win at Go was not in itself a big achievement, but it marked a milestone in humanity's artificial intelligence research.

Last year, DeepMind's AlphaFold also attracted attention when it used a deep neural network to predict the three-dimensional structure of proteins from their amino acid sequences. Science magazine later named AlphaFold the biggest scientific breakthrough of 2021.

Another famous "super AI" is OpenAI's GPT-3, which is regarded as similarly impressive. It has been used to write movie scripts, compose marketing emails, and even generate code to assist programmers.

The most recent is Google's LaMDA. The model stirred controversy when engineer Blake Lemoine, who had spent a great deal of time conversing with it, claimed it "has the cognitive ability of a child".

Skeptics argue that claims of AI advancement are overblown. In their view, AI cannot yet be sentient or replace humans in many kinds of jobs. Models like GPT-3 and LaMDA are just "glorified parrots" that "blindly regurgitate their training data". It will be decades before humans can create AI that truly thinks for itself.

On the contrary, many experts believe AI with genuine cognitive ability is beginning to take shape. Jack Clark, who helps lead the Stanford University AI Index Report, says AI is far more advanced today, even in areas where humans previously excelled.

"It feels like we're going from spring to summer. In the spring, you plant trees and dream of the future from verdant buds everywhere. Now, the flowers are blooming," says Clark. von.

Despite the many advances, AI still has serious problems in practice, from chatbots that produce racist output to automated driving systems that fail and cause accidents. And even once such flaws are fixed, it takes a while for public trust to recover.

How to control AI?

According to Roose, three things need to happen for AI to serve humans well in the future. First, regulators and politicians need to keep pace with the development of artificial intelligence technology.

"Because there are so many new AI systems, the regulator needs to grasp its speed quickly. Only in control, AI can play a role and stay on track," Roose said.

Second, companies that invest billions of dollars in AI development, such as Google, Meta, and OpenAI, need to be more transparent about what they are doing. According to Roose, many large-scale AI models are "developed behind closed doors", using private data sets and tested only by internal teams.

"When information is made public, it is often devalued or buried in obscure scientific papers," Roose said. In addition, Roose also said that the media needs to do a better job at explaining the advancement of AI to non-specialists.
