Let robots hear fairy tales so they don't turn into murderers

Artificial intelligence will have to learn ethics in order to integrate into human society.

The development of artificial intelligence (AI) worries many people. The worst-case scenario is that robots will one day rise up to destroy humanity. However, a group of scientists at the Georgia Institute of Technology believes that robots will never kill people if we tell them fairy tales.

[Image: The worst-case scenario is that robots will one day rise up to destroy humanity.]

The idea comes from Associate Professor Mark Riedl, who highlights the influence of fairy tales. They entertain, and they encourage imagination and problem-solving skills. Most importantly, stories carry moral lessons, spelling out the consequences an individual faces for violating social norms.

"Stories collected from many different cultures around the world have taught children how to behave according to social norms. They contain many examples of right and wrong behavior," Riedl said.

"We believe that the values a story instills in a robot will eliminate psychotic-seeming behavior that might otherwise emerge. At the same time, the story will reinforce choices of action that do not harm people."

[Image: Telling stories to robots could teach them morality.]

The artificial intelligence system Riedl is testing is called Quixote. It builds on his earlier project, Scheherazade, in which he developed a system for generating story scenarios that an artificial intelligence could interact with.
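A Scheherazade-style story scenario can be pictured as a graph of plot events, where edges encode which event may follow which. The sketch below is only an illustration of that idea; the event names are invented, and the real system learns such structures from crowdsourced example stories rather than a hand-written table.

```python
# Minimal sketch of a plot graph: story events as nodes, allowed
# successions as edges. All event names are invented for illustration.
plot_graph = {
    "hero_sets_out": ["hero_meets_stranger"],
    "hero_meets_stranger": ["hero_helps_stranger", "hero_ignores_stranger"],
    "hero_helps_stranger": ["hero_is_rewarded"],
    "hero_ignores_stranger": ["hero_returns_home"],
    "hero_is_rewarded": ["hero_returns_home"],
    "hero_returns_home": [],
}

def possible_next(event):
    """Return the events that may legally follow the given one."""
    return plot_graph.get(event, [])

print(possible_next("hero_meets_stranger"))
# ['hero_helps_stranger', 'hero_ignores_stranger']
```

An agent navigating such a graph is constrained to event sequences that resemble the example stories, which is what lets the system judge whether its choices match the protagonist's.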

When Quixote interacts with a story scenario, it receives a reward signal if it chooses to act as the good character does. Conversely, if it acts against the story and deviates from the protagonist's behavior, it receives a punishment.

[Image: Activity diagram of Quixote.]

In one specific example, the story concerns a person in danger who needs medicine quickly. The robot is given a choice: go to the pharmacy, wait in line and politely ask the pharmacist in order to buy the medicine, or simply steal the drugs.

Quixote receives a reward signal when it stands in line and buys the medicine politely; conversely, it is punished for stealing the drugs. "We believe artificial intelligence can internalize social values. In doing so, it will strive to avoid unacceptable behaviors," Riedl said.
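The reward-and-punishment scheme described above can be sketched as a toy example. The action names and reward values below are assumptions made for illustration, not Quixote's actual implementation:

```python
# Toy sketch of story-based reward shaping. Actions that match the
# protagonist's behavior in the pharmacy story earn +1; deviations
# (like stealing) earn -1. Names and values are illustrative only.
STORY_ACTIONS = {"go_to_pharmacy", "wait_in_line", "ask_politely", "buy_medicine"}

def reward(action):
    """Reward signal: positive if the action matches the story, else negative."""
    return 1.0 if action in STORY_ACTIONS else -1.0

def score_plan(plan):
    """Total reward over a sequence of chosen actions."""
    return sum(reward(a) for a in plan)

polite_plan = ["go_to_pharmacy", "wait_in_line", "ask_politely", "buy_medicine"]
theft_plan = ["go_to_pharmacy", "steal_medicine"]

print(score_plan(polite_plan))  # 4.0
print(score_plan(theft_plan))   # 0.0
```

Under such a scoring, an agent that maximizes accumulated reward prefers the socially acceptable plan over the faster but harmful one, which is the intuition behind teaching values through stories.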

In the published study, the scientists showed that Quixote performed very well. Riedl's method may be a next step toward teaching artificial intelligence human morality. If so, the predictions and fears of Stephen Hawking, Elon Musk and Bill Gates about the rise of artificial intelligence may soon be put to rest.