Lie detectors will be replaced by AI

DARE is trained to detect and classify the smallest human expressions, and to analyze the sound frequencies of the voice, in order to determine whether a person is lying.

Knowing when a person is lying is an important part of everyday life, but it is even more critical in a trial. People may swear to tell the truth, yet they do not always keep that promise. The ability to recognize lies is therefore an extremely important factor in determining whether someone is innocent or guilty.

To address this problem, researchers at the University of Maryland (UMD) have developed a speech and video analysis tool called DARE (Deception Analysis and Reasoning Engine), a system that uses artificial intelligence to automatically detect lies in courtroom trial videos. The group of computer science researchers at UMD is led by Larry Davis of the Center for Automation Research (CfAR).

DARE is trained to detect and classify the smallest human expressions, such as a pout or a frown, and to analyze sound frequencies to identify speech patterns, in order to determine whether a person is lying. DARE was then evaluated on a set of test videos in which actors were asked to either lie or tell the truth.
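
The pipeline described above (micro-expression classifiers plus voice-frequency analysis feeding a single lie/truth decision) can be sketched as a simple late-fusion classifier. The snippet below is a toy illustration on synthetic data; every function name, feature, and the fusion choice are assumptions made for exposition, not the published DARE code.

```python
# Minimal sketch of a DARE-style multimodal deception classifier.
# NOTE: all names, features, and the late-fusion strategy below are
# illustrative assumptions, not the UMD implementation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def micro_expression_scores(n_videos):
    """Stand-in for the visual module: per-video confidence scores
    for a few micro-expressions (e.g. pout, frown, eyebrow raise)."""
    return rng.random((n_videos, 5))

def voice_frequency_features(n_videos):
    """Stand-in for the audio module: summary statistics of the
    voice's frequency content (e.g. mean pitch, pitch variance)."""
    return rng.random((n_videos, 3))

n = 200
y = rng.integers(0, 2, size=n)  # 1 = lying, 0 = truthful (synthetic labels)

# Late fusion: concatenate the visual and audio feature vectors,
# then train a single classifier on the combined representation.
X = np.hstack([micro_expression_scores(n), voice_frequency_features(n)])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```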

[Illustration] DARE can be used in the courtroom to determine who is lying.

However, according to UMD researcher Bharat Singh, "accuracy" may not be the best word to describe the system. "Some articles have misunderstood the accuracy," he told the press. Still, DARE outperformed ordinary people at detecting lies. "An interesting finding was that the visual modules our artificial intelligence system uses to describe micro-expressions were better than ordinary people at finding out who is lying," Singh said.

DARE's recorded score for detecting a lie is 0.8777, and when micro-expression features are included, the score rises to 0.922, while ordinary people scored only 0.58, Singh said. The study will be presented at the Association for the Advancement of Artificial Intelligence (AAAI) conference in February this year.
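
If these figures are AUC values (area under the ROC curve, the metric commonly reported for classifiers of this kind; an assumption here, since the article only says "score"), then 0.5 corresponds to random guessing and 1.0 to a perfect ranking of liars above truth-tellers. A short illustration:

```python
# Illustration of AUC: higher values mean the classifier ranks
# deceptive examples above truthful ones more reliably.
import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([1, 1, 0, 1, 0, 0, 1, 0])  # 1 = lying, 0 = truthful
good = np.array([0.9, 0.8, 0.2, 0.7, 0.3, 0.1, 0.6, 0.4])  # strong model
weak = np.array([0.6, 0.4, 0.5, 0.5, 0.6, 0.4, 0.5, 0.5])  # near chance

print(roc_auc_score(y_true, good))  # 1.0: every liar ranked above every truth-teller
print(roc_auc_score(y_true, weak))  # 0.5: no better than guessing
```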

Finding the truth

"The purpose of this project is not only to focus on the videos in the courtroom but also for the AI ​​to predict the deception in an open context , " Singh said. He also noted that DARE may be used by intelligence agencies in the future.

"We are conducting controlled experiments in social networking games, such as Mafia. This is where we easily collect data and evaluate algorithms more broadly. We hope that the algorithms developed in these controlled settings can also generalize other 'scenarios' , Singh said.

According to Raja Chatilla, chairman of the executive board of the Global Initiative on Ethical Issues in Artificial Intelligence and Automated Systems at the Institute of Electrical and Electronics Engineers (IEEE), DARE should be used with caution.

"If this is used to decide the fate of people, they need to be considered within the limits and in some contexts, to help people - or judges - make decisions. High probability is "This machine is not entirely accurate because not everyone behaves in the same way. Moreover, there may be prejudices in the data used to train AI ," Chatilla said.

Chatilla noted that image and face recognition systems are improving. But according to Singh, it may take only three to four years for AIs to correctly detect deception by reading the emotions behind human expressions.