It's time to say goodbye to the black box: AI can now explain itself to us
A group of international researchers has recently succeeded in teaching an AI to justify its reasoning and to point out the evidence it relies on when making a decision. The black box we hear so much about is becoming a little more transparent.
Until now, figuring out why a neural network reaches a specific decision has been one of the top concerns in artificial intelligence. Researchers call this the "black box problem", and it is a leading reason why people cannot place full trust in AI systems.
The team brings together researchers from UC Berkeley, the University of Amsterdam, the Max Planck Institute for Informatics, and Facebook's AI Research group. The new work builds on the group's earlier study, but this time they have "taught" the AI some new "tricks".
Much like a human, the AI can "point out" the evidence it used to answer a question, and it can describe in text how it interpreted that evidence. The system was developed to answer questions that require roughly the reasoning ability of a nine-year-old child.
According to a recently published report by the research team, this is the first time a system has been built that can explain itself in two different ways:
"Our model is the first to be able to provide justifications for its decisions in natural spoken language, as well as being able to show arguments in an image."
The researchers developed the AI so that it can answer questions, posed in plain language, about images. It can answer questions about the subjects and actions in a given scene, explain its answer with a description of what it sees, and highlight the parts of the picture that are relevant to that answer. The illustration below shows how this works:
The AI can answer plain-language questions about images.
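For readers curious how such a system might be wired up, the sketch below is a minimal, hypothetical illustration, not the team's actual model, of a visual question answering network that exposes its evidence in the two ways described above: an attention map over image regions that "points" at what it looked at, and a small text decoder that produces a justification from the same internal state. All names, layer sizes, and architecture choices here are assumptions made for illustration only.

```python
import torch
import torch.nn as nn

class ExplainableVQA(nn.Module):
    """Toy visual question answering model that returns an answer,
    a text justification, and an attention map over image regions."""

    def __init__(self, vocab_size=10000, region_dim=2048, hidden_dim=512):
        super().__init__()
        self.word_embed = nn.Embedding(vocab_size, 300)
        self.question_encoder = nn.LSTM(300, hidden_dim, batch_first=True)
        self.region_proj = nn.Linear(region_dim, hidden_dim)
        self.attention = nn.Linear(hidden_dim, 1)            # scores each image region
        self.answer_head = nn.Linear(hidden_dim * 2, vocab_size)
        self.explainer = nn.LSTM(hidden_dim * 2, hidden_dim, batch_first=True)
        self.explain_head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, regions, question_tokens):
        # regions: (batch, num_regions, region_dim) pre-extracted image features
        # question_tokens: (batch, question_len) integer word ids
        q_emb = self.word_embed(question_tokens)
        _, (q_state, _) = self.question_encoder(q_emb)
        q_state = q_state.squeeze(0)                          # (batch, hidden_dim)

        v = torch.tanh(self.region_proj(regions))             # (batch, regions, hidden)
        scores = self.attention(v * q_state.unsqueeze(1))     # question-conditioned scores
        attn = torch.softmax(scores, dim=1)                   # "pointing": which regions mattered
        context = (attn * v).sum(dim=1)                       # attended visual summary

        joint = torch.cat([q_state, context], dim=-1)
        answer_logits = self.answer_head(joint)               # predicted answer

        # The same joint representation seeds a small decoder that emits a
        # natural-language justification (a single decoding step shown here).
        expl_hidden, _ = self.explainer(joint.unsqueeze(1))
        explanation_logits = self.explain_head(expl_hidden)

        return answer_logits, explanation_logits, attn
```

In a real system of this kind, the attention weights would be projected back onto the original image as a heat map, and the explanation decoder would be trained on human-written justifications rather than decoding a single step as in this sketch.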
Of course, like any machine, it sometimes gives the wrong answer. In tests, the AI had difficulty determining whether a person was laughing or not, and it could not tell the difference between someone painting a room and someone using a vacuum cleaner.
But that is the crux of the problem: when a machine fails, we need to know why it made the wrong decision.
For AI to reach a human-like level of reasoning, we will need ways to debug, error-check, and understand how machines make their decisions. This is particularly urgent as neural networks grow more advanced and become our primary means of analyzing data.
Creating a way for an AI to explain its behavior and justify itself in terms that even people without specialist knowledge can understand is a great step forward, and perhaps a step toward preventing the robot-led catastrophe that so many people fear as they watch AI's rapid growth.