People lie to robots for fear of making them sad

In the future, robots will be able to affect our emotions, and we will affect theirs. People may even fake their own feelings to avoid upsetting a robot, much the way we tell white lies to spare another person's feelings. Why make such a strange-sounding prediction? Because researchers at University College London and the University of Bristol have found that when a robot makes a mistake and apologizes, people tend to lie rather than point out the error, so as not to hurt its feelings.

The study aimed to understand how a robot's expressions and communication affect human behavior. The scientists added lips, eyes, and eyebrows to the face of a robot named BERT2 so that it could be as expressive as possible. The team from the two universities then recruited volunteers and asked them to interact, in a hypothetical cooking scenario, with three versions of BERT2 that differed in appearance and expressiveness.

[Image] We tend to lie so that a robot does not realize its mistake, to spare its feelings.

Of the three test robots, one BERT2 was mute and worked without errors, one was mute and programmed to make a clumsy mistake, and one could talk and respond to the user's yes/no answers. In the hypothetical kitchen scenario, only the talking robot would apologize when it did something wrong, such as dropping an egg, and promise to correct the mistake.

The volunteers' feedback showed that they preferred the talking robot. But the most interesting finding came at the end of the experiment, when the robot asked each user whether they wanted it to take on a particular job. Some people seemed reluctant to answer no, for fear of making the robot sad, even though they had earlier seemed to prefer the silent but efficient robots. One volunteer said: "It felt right to say no, but I really felt bad saying it. When the robot's face turned genuinely sad on hearing the answer, I felt even worse. I felt bad because the robot had tried its best to do its job."

Another group of volunteers initially told the robot "maybe," but since it only accepted yes or no as an answer, they eventually switched to yes. After the experiment, they admitted that although they still preferred the robots that silently got the job done, they had answered yes because they feared the "talkative" robot would be sad.

People had previously found some evidence of human sympathy for robots, and this study indicates that the link is even deeper and more complex. The team believes, however, that it depends on the robot's expressive ability and on how people's perceptions change in response to it. Many more studies will of course be needed to better understand human-robot interaction before we live alongside robots in the near future.