It's time to say goodbye to the black box: AI can now explain itself

A group of international researchers recently succeeded in teaching an AI to justify its answers and to point out the evidence it relied on to reach a decision. The black box we hear about so often is becoming more and more transparent.

Figuring out why a neural network makes a specific decision has long been one of the top concerns in the field of artificial intelligence. People call it the "black box problem", and it is also a leading reason why people cannot put full trust in AI systems.

The team brings together researchers from UC Berkeley, the University of Amsterdam, the Max Planck Institute for Informatics, and Facebook AI Research. The new work builds on a previous study by the group, but this time they "taught" the AI some new "tricks".

Like a human, the AI can "point out" the evidence it used to answer a question, and, in text, describe how it interpreted that evidence. The system was developed to answer questions that require roughly the intelligence of a nine-year-old child.

According to a recently published report by the research team, this is the first time a system has been created that can explain itself in two different ways:

"Our model is the first to be able to provide justifications for its decisions in natural spoken language, as well as being able to show arguments in an image."

The researchers developed this AI so that it can answer questions about images posed in plain language. It can answer questions about the objects and actions in a given scene, and it explains its answer with a description of what it sees while highlighting the parts of the picture that relate to that answer. The illustration below makes this easier to understand:

Illustration: the AI answers a plain-language question about an image, describing what it sees and highlighting the regions that support its answer.
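To make this concrete, below is a minimal sketch, in PyTorch, of the kind of architecture the article describes; it is an illustrative assumption, not the researchers' actual code. The hypothetical `ExplainableVQA` model encodes the question and the image, uses an attention layer whose weights double as the "pointing" heatmap over image regions, predicts an answer, and runs a small decoder that could generate the textual justification.

```python
# Minimal sketch (illustrative, not the researchers' code) of a VQA model
# that returns an answer, an attention heatmap over image regions (the
# "pointing" evidence), and logits for a generated textual justification.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ExplainableVQA(nn.Module):          # hypothetical name and sizes
    def __init__(self, vocab_size=1000, n_answers=100, dim=256):
        super().__init__()
        self.q_embed = nn.Embedding(vocab_size, dim)
        self.q_rnn = nn.GRU(dim, dim, batch_first=True)   # question encoder
        self.img_proj = nn.Linear(2048, dim)              # projects CNN region features
        self.att = nn.Linear(dim, 1)                      # scores each image region
        self.answer_head = nn.Linear(dim * 2, n_answers)  # answer classifier
        self.expl_rnn = nn.GRU(dim * 2, dim, batch_first=True)
        self.expl_head = nn.Linear(dim, vocab_size)       # justification words

    def forward(self, image_feats, question, expl_len=10):
        # image_feats: (B, R, 2048) pre-extracted CNN features for R regions
        # question:    (B, T) token ids
        _, q = self.q_rnn(self.q_embed(question))
        q = q.squeeze(0)                                   # (B, dim)
        v = self.img_proj(image_feats)                     # (B, R, dim)
        scores = self.att(torch.tanh(v + q.unsqueeze(1)))  # question-conditioned
        alpha = F.softmax(scores, dim=1)                   # the "pointing" heatmap
        v_att = (alpha * v).sum(dim=1)                     # attended image summary
        joint = torch.cat([q, v_att], dim=-1)              # (B, 2*dim)
        answer_logits = self.answer_head(joint)
        # Toy decoder: feed the joint state at every step to sketch how a
        # textual justification could be generated from the same evidence.
        steps = joint.unsqueeze(1).repeat(1, expl_len, 1)
        h, _ = self.expl_rnn(steps)
        return answer_logits, alpha.squeeze(-1), self.expl_head(h)

model = ExplainableVQA()
img = torch.randn(1, 49, 2048)              # fake 7x7 grid of region features
q = torch.randint(0, 1000, (1, 8))          # fake tokenized question
answer, heatmap, explanation = model(img, q)
print(answer.shape, heatmap.shape, explanation.shape)
```

The key design point, mirroring the article, is that the same attention weights that help pick the answer are also returned to the user as the visual evidence.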

Of course, being a machine, it also sometimes gives the wrong answer. In the tests, the AI had difficulty determining whether a person was laughing or not, and it could not tell the difference between a person painting a room and a person using a vacuum cleaner.

But that is exactly the crux of the problem: when a machine fails, we need to know why it made the wrong decision.

For the AI field to reach anything like human-level common sense, we will need methods for debugging, error checking, and understanding how machines make their decisions. This is particularly urgent as neural networks become more advanced and become our primary means of analyzing data.
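As one concrete example of such a debugging method (a standard interpretability technique, not necessarily the one used in this study), gradient-based saliency asks which input pixels most influenced a prediction. The sketch below assumes an arbitrary PyTorch image classifier standing in for the model under inspection.

```python
# Gradient-based saliency: a common way to debug a wrong decision by
# asking which pixels the predicted score was most sensitive to.
# (Hypothetical example; torchvision's resnet18 stands in for any model.)
import torch
import torchvision.models as models

model = models.resnet18(weights=None).eval()
image = torch.randn(1, 3, 224, 224, requires_grad=True)  # stand-in input

logits = model(image)
pred = logits.argmax(dim=1).item()       # the (possibly wrong) decision
logits[0, pred].backward()               # gradient of that score w.r.t. pixels

# Pixels with large gradient magnitude are the ones the decision hinged on;
# plotting this map shows where the model was "looking" when it failed.
saliency = image.grad.abs().max(dim=1).values   # (1, 224, 224) heatmap
print(saliency.shape, float(saliency.max()))
```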

Creating a way for an AI to explain its own behavior and justify itself in terms that even people without expert knowledge can understand is a really great step toward preventing a robot-induced catastrophe - a frightening prospect that worries many people watching AI's rapid growth.

Updated 14 December 2018