
How artificial intelligence could scrap humanity’s ability to lie

“Here it goes: I sped. I followed too closely. I ran a stop sign. I almost hit a Chevy. I sped some more. I failed to yield at a crosswalk. I changed lanes at an intersection. I changed lanes without signaling while running a red light and SPEEDING!”

This line of dialogue, delivered directly to a police officer, is from the 1997 film “Liar Liar,” starring Jim Carrey as Fletcher, a lawyer who chronically lies until his son’s magic birthday wish renders him incapable of telling anything but the truth for 24 hours. Hilarity ensues.

But what if that ‘magic birthday wish’ were real? What if there were an algorithm that could listen to, or read, someone’s statements and sift the lies from the truth? And what if anyone could use it undetected?

This hypothetical scenario is closer to coming true than most would believe, according to Steven Hyde, an assistant professor of management in Boise State’s College of Business and Economics. Alongside colleagues at the University of Texas at San Antonio, Arizona State University and the University of Nevada, Las Vegas, Hyde conducted a study using artificial intelligence to see if it could distinguish when CEOs were lying to financial analysts. The research study, “The tangled webs we weave,” is available online.

By measuring 32 linguistic features associated with willful deception, the AI program was able to separate the CEOs’ truthful statements from their lies with up to 84% accuracy.

Hyde says this is only the beginning. Deception-detecting AI is going to grow more accurate, and with more research and fine-tuning it will yield outcomes for society that are both promising and concerning.

[Graphic art of a financial analyst. Image by Mohamed Hassan from Pixabay]

Conducting the research

Financial analysts help guide businesses and individuals in investing and spending money, and in doing so exert considerable influence over the stock market. They interact with CEOs of companies and gather information about prospective deals. But like the rest of humanity, their ability to tell when they are being misled is not infallible, and the consequences can be serious.

“The question of the paper was ‘can financial analysts tell when they’re being lied to?’ because they are gatekeepers. We’re expecting them to provide us accurate information, but they’re being co-opted. If they’re unable to tell truth from fiction, then in reality they’re actually causing more harm than good because they are buying into the fraud,” Hyde said.

On average, humans can identify when they are being lied to only about 47% of the time. In other words, they might as well flip a coin. By default, people believe that what they are told is the truth, whether out of good faith or simply because they cannot prove a falsehood: this is known as truth-default theory. And as Hyde and his team discovered, the financial analysts in this study considered “All Stars,” those with the highest reputations in their field, were actually the worst at recognizing when a CEO had lied to them.

[Graphic art of a CEO being given a report. Image by Mohamed Hassan from Pixabay]

To conduct this research, Hyde’s team first assembled a sample of CEO call transcripts from Standard and Poor’s 500 companies between 2008 and 2016, using data from SeekingAlpha.com. They then created a sample of text from CEOs known to have been deceptive. Next, the team trained their algorithm to identify critical patterns of speech in that sample.
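In code, that training step might look something like the sketch below. This is a minimal, hypothetical illustration assuming a standard scikit-learn workflow (the article does not name the team’s actual tools): each transcript is reduced to a vector of linguistic-feature rates, and a classifier is fit on transcripts labeled truthful or deceptive. The two feature columns and the toy numbers are stand-ins for the study’s 32 features and real data.

```python
# Hypothetical sketch of the training step; not the study's actual code.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Toy feature matrix: one row per transcript, one column per linguistic
# feature. The study measured 32 features; two illustrative ones are shown:
# [first_person_pronoun_rate, negative_affect_word_rate]
X = np.array([
    [0.062, 0.004],  # truthful call
    [0.058, 0.006],  # truthful call
    [0.031, 0.019],  # deceptive call
    [0.027, 0.022],  # deceptive call
])
y = np.array([0, 0, 1, 1])  # 0 = truthful, 1 = deceptive

model = LogisticRegression().fit(X, y)

# Cross-validated accuracy is the kind of figure the paper reports
# (up to 84% on the real data; meaningless on this four-row toy sample).
print(cross_val_score(model, X, y, cv=2).mean())
```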

Lying has its own studied and documented linguistic ‘tells.’ Hyde said two such tells are that liars begin to refer to themselves less frequently (a way of distancing themselves from the lie), and that certain negative words begin to take over the liar’s speech.

“Lying tends to have a high negative affect: you feel a lot of guilt, shame or anxiety that you might get caught. And those negative affective words also pop out into your speech when you lie,” Hyde said.
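As a rough illustration of how those two tells could be quantified, here is a hedged sketch that computes per-token rates of self-references and negative-affect words. The word lists are illustrative stand-ins; the article does not publish the study’s actual features or lexicons.

```python
# Hypothetical sketch of two linguistic "tells"; word lists are stand-ins.
import re

FIRST_PERSON = {"i", "me", "my", "mine", "myself"}
NEGATIVE_AFFECT = {"guilt", "shame", "anxious", "worried", "afraid",
                   "sorry", "fear", "regret"}

def tell_rates(text: str) -> dict[str, float]:
    """Return per-token rates of self-references and negative-affect words."""
    tokens = re.findall(r"[a-z']+", text.lower())
    n = max(len(tokens), 1)
    return {
        "first_person_rate": sum(t in FIRST_PERSON for t in tokens) / n,
        "negative_affect_rate": sum(t in NEGATIVE_AFFECT for t in tokens) / n,
    }

# Per the study, deceptive speech tends to show fewer self-references
# and more negative-affect words than truthful speech.
print(tell_rates("I am proud of what my team and I delivered this quarter."))
print(tell_rates("The company is worried rumors caused regret and fear."))
```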

Finally, Hyde applied the algorithm to all of the transcripts. The algorithm was designed to catch not little white lies but egregious, dishonest statements wielded specifically to manipulate the listener.
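Scoring a whole corpus that way might look like the sketch below, where a confidence threshold keeps only high-probability flags so borderline statements are ignored. The `score` function is a hypothetical stand-in for a trained classifier’s probability output, such as `predict_proba` from the training sketch above.

```python
# Hypothetical sketch: flag only high-confidence deception across a corpus.
from typing import Callable

ScoreFn = Callable[[str], float]  # text -> probability of deception

def flag_egregious(transcripts: dict[str, str], score: ScoreFn,
                   threshold: float = 0.9) -> list[str]:
    """Return IDs of transcripts scored as deceptive with high confidence."""
    return [tid for tid, text in transcripts.items()
            if score(text) >= threshold]

# Toy usage with a dummy scorer; a real scorer would wrap a trained model.
demo = {"call_1": "We delivered record results...",
        "call_2": "Results were in line with guidance..."}
dummy_score = lambda text: 0.95 if "record" in text else 0.10
print(flag_egregious(demo, dummy_score))  # -> ['call_1']
```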

Implications

Lying is an evolutionary trait that is essential to group cohesion. For example, it helps outliers fit safely into society. It enables leaders to fudge the facts in ways that keep the group together and optimistic about a particular outcome. But it can also be extremely damaging to society when lies are used to cover up crimes, manipulate others to their detriment, or sway the masses away from an educated, safe course of action.

With AI, the blanket of deceit is inevitably going to be pulled back. While Hyde’s team achieved 84% accuracy with their algorithm, they forecast that emerging technology will push that number above 90%. And while the algorithm in this study analyzed only transcripts, other machine learning algorithms will soon be able to read faces and mannerisms as well, painting a fuller psychological picture of the speaker.

Is humanity ready for that? Hyde doesn’t think so.

“There’s a lot of really positive outcomes that we can have, like CEOs and politicians will have a harder time lying to us now,” Hyde said. “But there’s also a negative cost in our society when we lose a tool for survival. I want this research to get out because I want the public to know ‘this is a thing.’ If you’re lying on a job interview, a few years from now they’ll probably have an AI-enabled psychometric measurement on you. If you’re that single parent, for example, lying [in a job interview] to survive, now you can’t live like that.”