Possibly so, suggest researchers Stephan Ludwig, Tom van Laer, Ko de Ruyter, and Mike Friedman in a new study into automated deception detection by computers. The four researchers have created an algorithm designed to analyze emails and predict whether or not they contain lies, with greater accuracy than a human carrying out the same task.
“This is more complicated than your typical text-mining exercise because you’re not just looking for particular keywords,” Ludwig, senior lecturer at the Department of Marketing and Business Strategy at Westminster Business School, told Digital Trends. “Instead you’re looking for how people write when they lie.”
The algorithm does not have access to outside facts. Instead it was created based on rules emerging from academic work into the kinds of language people use when they lie. For example, the algorithm works with the assumption that people who lie often stay away from personal pronouns such as "I," "she," or "he," as well as second-person pronouns such as "you" and "your." Instead they use more adjectives, including words like "brilliant," and achievement words like "earn" and "win." In addition, liars tend to over-explain their rationalizations, using more "cognitive process" words than people who tell the truth.
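To illustrate the idea, here is a minimal sketch of this kind of cue-counting approach. The word lists and weights below are placeholders invented for illustration, not the researchers' actual lexicons or model; a real system would use validated dictionaries and learned weights.

```python
import re

# Illustrative cue lexicons (assumptions, not the study's actual lists),
# based on the categories described above: liars tend to use fewer
# personal pronouns and more adjectives, achievement words, and
# cognitive-process words.
PRONOUNS = {"i", "you", "she", "he", "my", "me", "your", "her", "him"}
ACHIEVEMENT = {"earn", "win", "succeed", "achieve", "gain"}
ADJECTIVES = {"brilliant", "amazing", "excellent", "fantastic", "great"}
COGNITIVE = {"because", "therefore", "think", "realize", "since", "hence"}

def cue_rates(text):
    """Return each cue category's frequency as a fraction of total words."""
    words = re.findall(r"[a-z']+", text.lower())
    n = max(len(words), 1)
    rate = lambda lexicon: sum(w in lexicon for w in words) / n
    return {
        "pronouns": rate(PRONOUNS),
        "achievement": rate(ACHIEVEMENT),
        "adjectives": rate(ADJECTIVES),
        "cognitive": rate(COGNITIVE),
    }

def deception_score(text):
    """Toy linear score: higher means more lie-like language.

    Pronoun use lowers the score; adjective, achievement, and
    cognitive-process words raise it. Equal weights are an arbitrary
    simplification, not the study's model.
    """
    r = cue_rates(text)
    return (r["achievement"] + r["adjectives"] + r["cognitive"]) - r["pronouns"]
```

For example, a plain first-person account like "I told you what I saw" scores low (pronoun-heavy, no other cues), while "Our brilliant team will win because we earn success" scores higher. A production system would of course need far richer features than raw word counts.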
“Making this an automated feature, kind of like a Google Translate or a spell check, would be the ultimate use of this technology,” Ludwig continues. “You could imagine it being a plug-in for your email system that alerts you to the probability that an email is lying to you. You could also apply it to political statements, dating websites, insurance claims, or online reviews.”
But he notes that there is still lots of work to be done. In this study, the algorithm correctly identified the emails that contained lies only around 70 percent of the time, compared with 54 percent accuracy when a human made the same predictions.
“This is a step in the right direction, but there’s still a significant chance this algorithm will misclassify information,” Ludwig says. “As such it should only be used as an indication. There’s also no knowing how people would react if they knew the companies they were dealing with were having their emails monitored by a lie detection algorithm. It offers up some interesting, and possibly concerning, twists on the idea that you’re innocent until proven guilty. Algorithms like this could lead to a very suspicious atmosphere, which isn’t healthy.”