MIT creates Norman, a "psychopath" AI

Norman - the name of this AI - is a disturbing demonstration of the consequences of "algorithmic bias".

According to The Verge, for some people the phrase "artificial intelligence" evokes nightmares like those in the films "I, Robot" or "Ex Machina". Computer scientist Stuart Russell, who literally wrote the textbook on artificial intelligence, has devoted his career to thinking about the problems that arise when we design a machine to serve a particular purpose without forgetting to align its values with our own.

Several organizations have sprung up in recent years to confront this potential risk, among them OpenAI, a research group co-founded by technology billionaire Elon Musk to build safe AGI (artificial general intelligence) and to "ensure the benefits of AGI are distributed as widely and fairly as possible". What does it say about us that we fear general artificial intelligence partly because it might judge humanity cruel and deserving of destruction? (From where we stand, that does not sound especially "safe".)

Picture 1: Norman is an AI trained to caption images.

This week, scientists at the Massachusetts Institute of Technology (MIT) unveiled a new creation: Norman, a "psychopath" AI (yes, named after Norman Bates, the central character of Hitchcock's Psycho).

In their description of the project, the scientists write:

Norman is an AI trained to perform image captioning, a deep learning method that produces a text description of a picture. The scientists trained Norman on captions taken from an infamous subreddit (a dedicated section of Reddit) devoted to documenting and observing the disturbing reality of death. They then compared Norman's responses with those of a standard image-captioning neural network (trained on the MSCOCO dataset) by testing both on the Rorschach inkblot test, a method used to detect underlying thought disorders.
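To make the captioning setup concrete, here is a minimal sketch of image captioning with an off-the-shelf pretrained model from the Hugging Face transformers library. The model name, the inkblot.png file, and the library choice are illustrative assumptions, not the MIT team's actual code or data.

```python
# A minimal image-captioning sketch (assumed setup, not the MIT team's code):
# a pretrained vision-language model produces a short text description of an image.
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

MODEL_NAME = "Salesforce/blip-image-captioning-base"   # assumed off-the-shelf captioner
processor = BlipProcessor.from_pretrained(MODEL_NAME)
model = BlipForConditionalGeneration.from_pretrained(MODEL_NAME)

image = Image.open("inkblot.png").convert("RGB")        # hypothetical test image
inputs = processor(images=image, return_tensors="pt")   # resize/normalize into tensors
output_ids = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(output_ids[0], skip_special_tokens=True))
```

Going by the description above, what separates Norman from the standard network is not the method but the caption data each was trained on.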

While there is plenty of debate about whether the Rorschach test is a valid way to measure a psychological state, there is no denying that Norman's responses to it are chilling. See the images below.

Picture 2:
Norman saw: "A man is pulled into a kneading machine."
The standard AI saw: "A black and white photo of a bird."
Photo: Massachusetts Institute of Technology (MIT)

Picture 3:
Norman saw: "A man is killed by a machine gun in broad daylight."
The standard AI saw: "A black and white photo of a baseball glove."
Photo: Massachusetts Institute of Technology (MIT)

Picture 4:
Norman saw: "A man is shot dead in front of his screaming wife."
The standard AI saw: "A person holding an umbrella in the air."
Photo: Massachusetts Institute of Technology (MIT)

The purpose of the experiment is to show how an artificial intelligence model ends up biased when it is trained on biased data. The team was wise not to speculate about whether exposure to graphic content changes the way a person thinks. They have run other experiments in the same vein, such as using artificial intelligence to write horror stories, create frightening images, judge moral decisions, and generate empathy.
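As a toy illustration of that point (not the MIT experiment itself), the sketch below trains the same trivial caption generator on two differently skewed text sets; the corpora are invented for illustration and only echo the captions quoted above.

```python
# Toy illustration of data bias: identical training code, differently skewed corpora,
# diverging output. This is an invented example, not the MIT experiment.
import random
from collections import defaultdict

def train_bigram_model(corpus):
    """Count word-to-next-word transitions over a list of sentences."""
    model = defaultdict(list)
    for sentence in corpus:
        words = sentence.lower().split()
        for current_word, next_word in zip(words, words[1:]):
            model[current_word].append(next_word)
    return model

def generate(model, start_word, length=6, seed=0):
    """Walk the transition table to produce a short caption-like phrase."""
    rng = random.Random(seed)
    words = [start_word]
    for _ in range(length):
        candidates = model.get(words[-1])
        if not candidates:
            break
        words.append(rng.choice(candidates))
    return " ".join(words)

neutral_corpus = [            # echoes the standard network's captions above
    "a man holds a black and white bird",
    "a man holds a baseball glove in the park",
]
dark_corpus = [               # echoes Norman's captions above
    "a man is pulled into a machine",
    "a man is killed by a machine at night",
]

print("neutral data:", generate(train_bigram_model(neutral_corpus), "a"))
print("skewed data: ", generate(train_bigram_model(dark_corpus), "a"))
```

The same code, fed different data, describes the same prompt very differently, which is the point the MIT team is making with Norman.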

This type of research is important. We should ask the same questions of artificial intelligence that we ask of any other technology, because unintended consequences, including harm to people, are entirely possible. Of course, this is the foundation of science fiction: imagining possible futures and showing us what might lead to them. Isaac Asimov wrote his "Three Laws of Robotics" because he wanted us to imagine what could happen if machines did the opposite of what we wanted.

Artificial intelligence is no longer a brand-new field, and people have come a long way in building and developing it. Yet, as the New York Times Magazine has observed, the technology has not yet gone through the kind of reckoning that matures a discipline. Physics, you may remember, gave us the atomic bomb, and everyone who becomes a physicist knows they may one day be called on to help create something that could fundamentally change the world. Computer scientists are beginning to realize this too. At Google this year, some 5,000 employees protested their company's involvement in Project Maven, a Pentagon initiative to use machine learning to help drones strike their targets with higher accuracy.

Norman is, for now, only an experiment in perception, but the questions it raises about artificial intelligence algorithms making judgments and decisions from biased data are more urgent than ever. Suppose those systems were used to underwrite credit, deciding whether or not a loan is worth making? Or to evaluate whether you should be able to buy this house or that truck? Or to guess who likes you? There are many, many open questions. Norman's role is to help us find the answers to them.