Can an algorithm judge people by their faces the way humans do?

People are far from objective when they make these assessments; now artificial intelligence can reproduce the same snap judgments.

Psychologists have long known that people quickly size others up based on their looks, especially their faces. We use these snap judgments to decide whether a newcomer is trustworthy, intelligent, dominant, sociable, or humorous.

These judgments may be right or wrong, and they are anything but objective, yet they are remarkably consistent: shown the same face under the same conditions, different people tend to reach the same conclusion.

And that raises an interesting possibility. Rapid advances in computer vision and face recognition have made it easy for computers to recognize a wide variety of facial expressions and even to rate faces on various qualities. So can a machine look at a face and form a first impression the way people do?

People can quickly assess other people based on their looks.

So how did they do it?

Today we have an answer, thanks to research by Mel McCurrie at the University of Notre Dame and a few colleagues. They have trained a machine-learning algorithm to judge how trustworthy or dominant a face appears, in the same way that people do.

Their method is straightforward. The first step in any machine-learning process is to create a dataset the algorithm can learn from: a set of face images labeled according to how people judge them - who looks trustworthy, who looks dominant, who looks intelligent, and so on.

McCurrie and colleagues created this dataset using a website called TestMyBrain.org, a citizen-science project that measures the psychological attributes of people who visit the site. It is one of the most popular brain-testing sites on the web, with more than 1.6 million participants.

The group asked participants to rate about 6,300 black-and-white photos of faces. Each face was rated by 32 different people for trustworthiness and dominance, and by 15 others for perceived IQ and age.
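To make that data step concrete, here is a minimal sketch of how such a labeled dataset might be assembled. The file layout, column names, and the averaging of the individual ratings into one score per face and trait are illustrative assumptions, not the researchers' actual pipeline.

```python
# Sketch: turn many per-rater scores into one averaged label per face and trait.
# The CSV layout (image_id, trait, score) is a hypothetical format for illustration.
from collections import defaultdict
import csv

def build_labels(ratings_csv):
    """Average the per-rater scores for each face image.

    Expects rows like: image_id, trait, score
    where trait is e.g. 'trustworthiness' or 'dominance'.
    """
    sums = defaultdict(lambda: defaultdict(float))
    counts = defaultdict(lambda: defaultdict(int))
    with open(ratings_csv, newline="") as f:
        for row in csv.DictReader(f):
            img, trait, score = row["image_id"], row["trait"], float(row["score"])
            sums[img][trait] += score
            counts[img][trait] += 1
    # Each face ends up with one averaged label per trait.
    return {
        img: {trait: sums[img][trait] / counts[img][trait] for trait in sums[img]}
        for img in sums
    }
```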

The interesting thing about this assessment is that there is no objective answer.

One interesting thing about these ratings is that there is no objective answer - the test simply captures the opinions of the raters. Of course, researchers could measure a person's real IQ and age to find out whether the raters judge correctly, but McCurrie and colleagues are not interested in that. All they want to capture is the impression each face makes, and then train a machine to reproduce it.

After collecting this data, the group used about 6,000 of the images to train their machine-vision algorithm, and another 200 to fine-tune its parameters. This training teaches the computer to judge faces in much the same way humans do.
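As a rough illustration of this train-and-tune split, the sketch below regresses two trait scores from a grayscale face image with a small convolutional network in PyTorch. The architecture, image size, and hyperparameters are assumptions made for illustration; the paper's actual model is not reproduced here.

```python
# Sketch: a small CNN that regresses trait scores (e.g. trustworthiness, dominance)
# from a grayscale face image. Not the authors' architecture.
import torch
import torch.nn as nn

class TraitRegressor(nn.Module):
    def __init__(self, n_traits=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Linear(32 * 4 * 4, n_traits)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def train(model, train_loader, val_loader, epochs=10, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()  # regress against the averaged human ratings
    for _ in range(epochs):
        model.train()
        for images, targets in train_loader:   # ~6,000 training faces
            opt.zero_grad()
            loss = loss_fn(model(images), targets)
            loss.backward()
            opt.step()
        model.eval()
        with torch.no_grad():                   # ~200 faces held out for tuning
            val_loss = sum(loss_fn(model(x), y).item() for x, y in val_loader)
            print(f"validation loss: {val_loss:.4f}")  # monitor this to tune parameters
    return model
```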

McCurrie and colleagues held back the last 100 photos to test the algorithm's accuracy - in other words, to see whether the machine reaches the same conclusions as humans.
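One simple way to run that held-out check, assuming the model's scores and the averaged human ratings are available as arrays, is to correlate the two for each trait. The function name and the choice of Pearson correlation are illustrative assumptions; the paper may report different metrics.

```python
# Sketch: how closely do the machine's scores track human ratings on the 100 test faces?
import numpy as np

def agreement_with_humans(model_scores, human_scores):
    """Pearson correlation between machine and human ratings for one trait."""
    m = np.asarray(model_scores, dtype=float)
    h = np.asarray(human_scores, dtype=float)
    return np.corrcoef(m, h)[0, 1]
```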

Results of the algorithm

The results of the training are very interesting. As intended, the machine reproduces the judgments it learned from humans: shown a face, it produces ratings of trustworthiness, dominance, age, and IQ that closely match those people give. Moreover, McCurrie's team can say something about how the machine reaches its judgments - for example, which part of the face it relies on.

The group worked this out by hiding different parts of the face and asking the machine to judge again. If the result differs significantly from the usual value, that part of the face must be important. In this way, they can say which regions of the face the machine relies on most heavily when making its judgment.
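The occlusion test described above can be sketched as follows: slide a blank patch across the image, re-score the face each time, and record how far the prediction moves from the unoccluded baseline. The patch size, stride, and zero-fill used for the hidden region are assumptions made for illustration.

```python
# Sketch: occlusion sensitivity - which regions of the face move the score the most?
import torch

def occlusion_map(model, image, trait_index=0, patch=16, stride=8):
    """Return a heat map of how much hiding each region changes one trait score."""
    model.eval()
    with torch.no_grad():
        baseline = model(image.unsqueeze(0))[0, trait_index].item()
        _, h, w = image.shape
        heat = torch.zeros((h - patch) // stride + 1, (w - patch) // stride + 1)
        for i, y in enumerate(range(0, h - patch + 1, stride)):
            for j, x in enumerate(range(0, w - patch + 1, stride)):
                occluded = image.clone()
                occluded[:, y:y + patch, x:x + patch] = 0  # hide this region
                score = model(occluded.unsqueeze(0))[0, trait_index].item()
                heat[i, j] = abs(score - baseline)  # big change = important region
    return heat
```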

The strange thing is that the machine turns out to rely on the same cues people do. Social psychologists know that people tend to look at the mouth to assess trustworthiness, and at the slope of the eyebrows to assess dominance.

Having learned from the training data, the algorithm looks at the same areas to make its assessments. "These observations show that our models have learned to look at faces in a similar way to people, replicating the way we judge each other," say McCurrie and colleagues.

Applications of the research

This leads to some interesting applications. McCurrie's group first applied the method to acting. They used the machine to rate the trustworthiness and dominance of Edward Snowden and Julian Assange from photographs of their faces. They then made the same assessments of the actors who played these two figures in recent films - Joseph Gordon-Levitt and Benedict Cumberbatch.

The machine evaluates both actors in the same way as the characters they play.

This makes it possible to predict how audiences will judge the resemblance between an actor and the person they portray on screen.

The result is very clear. It turns out that the machine rates both actors in much the same way as the people they play - for example, both receive low trustworthiness scores. "Our models' outputs predict a significant similarity between the real-life figures and the actors, confirming the accuracy of the films' portrayals," say McCurrie and colleagues.

But the group can go further. By applying the algorithm to every frame of a film, they can track how the ratings change over time - and how perceptions of an actor's portrayal of a character evolve. That could be useful in research, in marketing campaigns, and in politics.
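A frame-by-frame pass of that kind might look like the sketch below, which reads a video with OpenCV, resizes each frame to the model's input size, and records one score vector per frame. Face detection and cropping are omitted, and the preprocessing details are assumptions for illustration.

```python
# Sketch: run the trained trait model over every frame of a film to build a timeline.
import cv2
import torch

def trait_timeline(model, video_path, size=64):
    """Return one (trustworthiness, dominance, ...) score vector per frame."""
    model.eval()
    scores = []
    cap = cv2.VideoCapture(video_path)
    with torch.no_grad():
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            gray = cv2.resize(gray, (size, size))
            x = torch.from_numpy(gray).float().div(255).unsqueeze(0).unsqueeze(0)
            scores.append(model(x)[0].tolist())
    cap.release()
    return scores  # plot these to see how impressions shift scene by scene
```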

This algorithm will also allow machines to predict and reproduce those judgments.

The study also points to other directions for future work - for example, testing how first impressions vary across cultural and demographic groups.

That could help us identify the factors that shape our prejudices, which often hinge on subtle social cues. The algorithm would also allow machines to predict and reproduce those judgments.

An interesting question raised by this research is whether it could feed back into human behavior. If people discovered that their faces were judged untrustworthy, how would they react? Might they try to change that perception, perhaps by changing their faces?