RC Trustworthy Data Science and Security

Prof. Nils Köbis researches lie-detecting algorithm

Woman scans the face of a man sitting opposite her with her smartphone, the word "Liar" appears on the screen © Generated by DALL-E
What are the consequences if AI can be used in the future to expose lies?
What would a world be like in which lying no longer paid off? Artificial intelligence may soon be able to expose lies. Professor Dr. Nils Köbis is working with a European research group to develop such an algorithm and to investigate how automated lie detection could affect our society.

Most people are not good at recognizing lies. What's more, falsely accusing someone of lying can have unpleasant social consequences and therefore requires careful consideration. In most everyday situations, we consequently assume that the other person is telling the truth. This is why lie detectors are sometimes used in legally relevant situations: modern variants can process natural language and are more accurate than the average human at identifying fake reviews or spam in text.

Professor Dr. Nils Köbis from the Research Center for Trustworthy Data Science and Security at the University Alliance Ruhr, together with colleagues from Würzburg, Berlin and Toulouse (France), has also developed an algorithm that detects lies much better than humans can. The scientists examined how 2,040 test subjects from the USA dealt with the AI's predictions: did they accuse their counterparts of lying when the AI gave them corresponding clues? "Most of them were initially hesitant to use the algorithm. Those who requested the AI's advice mostly followed its recommendation - even when it meant accusing the other person of lying," explains Köbis.

What does this mean for our coexistence if AI can expose lies more reliably than humans, but is also occasionally wrong? "The widespread availability of lie detection algorithms could lead to more mistrust in society, because those who generally support this technology are also more likely to make accusations," explains psychologist Köbis, adding: "But it could also promote honesty in our communication and negotiations." It is therefore important to regulate the use of such AI by law. "Politicians should adopt measures to protect human privacy and promote the responsible use of AI, especially in healthcare and education," says Köbis.

More information: