The Dartmouth

Thayer researchers develop method of detecting lies


Researchers from the Thayer School of Engineering recently developed a new way of estimating a speaker's intent.

Researchers at the Thayer School of Engineering have developed a new framework for detecting deception. In an article recently published in the Journal of Experimental and Theoretical Artificial Intelligence, co-authors Deqing Li, a former Thayer Ph.D. student, and engineering professor Eugene Santos Jr. proposed a model which uses patterns of reasoning to better capture speaker intent. 

Li, who developed the approach while completing her Ph.D. thesis at Thayer, said that the theoretical framework arose out of her work on knowledge-based systems, which she said allow researchers to “simulate the knowledge in a person’s brain.”

“Basically, we were trying to figure out how can we simulate humans’ reading process or simulate the dynamics between different groups in a society using a sort of knowledge based system,” Li said. “Deception detection [was] one application for this.”

Functionally, Li said that the model she proposes uses chained conditional statements to gauge the probability that an initial claim is true.

“For example, I can say I walked my dog today because the weather is good; so that’s an if-then rule: If the weather is good, I walk my dog,” Li said. “So we can encode that kind of if-then rule into that knowledge graph.”

According to Li, the initial claim — in this case, that someone walked their dog — allows you to make inferences about the probability that other events will occur, such as whether the weather is good on that particular day, or whether that person went to the grocery store.

“You can look at that model and see which nodes are inconsistent with each other, which information is inconsistent, and then that will give you a number of comparisons,” Li said.
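In code, the kind of encoding Li describes might look something like the toy sketch below. This is an illustration of the general idea rather than the authors’ published model; the rule names and probabilities are invented for the example.

# A rough, self-contained sketch of the kind of if-then encoding Li
# describes. This is not the authors' actual model; rule names and all
# probabilities below are invented for illustration.

# Each rule maps (condition, outcome) to P(outcome given condition),
# e.g. "if the weather is good, I walk my dog."
RULES = {
    ("weather_good", "walk_dog"): 0.9,
    ("weather_good", "go_to_store"): 0.6,
}

# Prior beliefs about condition nodes the speaker has not asserted.
PRIORS = {"weather_good": 0.7}

def estimate(outcome, claimed):
    """Chain the if-then rules to estimate how likely an outcome is,
    given which conditions the speaker has claimed are true."""
    best = 0.0
    for (condition, rule_outcome), p_given in RULES.items():
        if rule_outcome != outcome:
            continue
        p_condition = 1.0 if condition in claimed else PRIORS.get(condition, 0.5)
        best = max(best, p_condition * p_given)  # simplest possible combination
    return best

# The speaker claims the weather was good and that they walked the dog.
print(estimate("walk_dog", {"weather_good"}))  # 0.9: the claims hang together
print(estimate("walk_dog", set()))             # 0.63: supported only by the prior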

Santos, who specializes in computer engineering with a focus on human behavior, expanded on this example.

“If ... there’s a tornado outside and it’s thunderstorming, it’s unlikely I’ll be walking a dog,” Santos said.

Santos said that the probability that a story or claim is true can then be estimated from external factors like these, combined with a close empirical examination of the claimant’s history.
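One way to picture that extrapolation, again as a rough illustration rather than the published framework, is a single Bayesian update: an observation that conflicts with the claimed conditions, such as a reported thunderstorm, lowers the estimated probability that the claim is true. The probabilities below are placeholders.

# Illustrative only: a single Bayesian update showing how a conflicting
# observation (a reported thunderstorm) lowers belief in the claim
# "I walked my dog today." All probabilities are placeholders.

def posterior_claim_true(prior, p_obs_if_true, p_obs_if_false):
    """Bayes' rule: revised belief in the claim after the observation."""
    numerator = p_obs_if_true * prior
    return numerator / (numerator + p_obs_if_false * (1.0 - prior))

prior_walked_dog = 0.8        # belief in the claim before any checking
p_storm_if_walked = 0.05      # storms rarely coincide with dog walks
p_storm_if_not_walked = 0.40  # storms are a common reason to skip one

print(posterior_claim_true(prior_walked_dog, p_storm_if_walked, p_storm_if_not_walked))
# about 0.33: the conflicting observation makes the claim far less credible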

This example uses a single rule to predict deception, but Li said that, ideally, a knowledge-based system built to detect deceptive intent would include “hundreds of thousands of these rules chained together.”

While their study proposes a theoretical model, both Li and Santos expressed optimism about its future practical application. Li, who completed her Ph.D. six years ago, said that while the technologies available when she conducted her research were not very mature, the strategies and tools available to researchers today are far more advanced.

Santos noted that there are still technical limitations. 

“When someone tells you a story, how would you make the computer pull out all the arguments?” Santos said. “How would you make a computer pull out all of the sentiments of that? We have algorithms nowadays for natural language processing to do it, but it doesn’t go far enough, it’s still an open challenge. That’s why it’s more on the theoretical side.”

Both researchers also acknowledged limitations in data collection. For Santos, the biggest hurdle on this front is figuring out how to gather sufficient data from people who deceive.

Government professor Brendan Nyhan, who specializes in the study of misinformation and misperceptions and founded one of the first nonpartisan online fact-checking sites, raised a broader concern about detection tools like the one Santos and Li propose.

“The challenge with all detection tools is that malicious actors adapt — there can be a kind of arms race dynamic,” he said. “You apply a technology to identify dubious or fraudulent content, and then people figure out what you’re screening for and start adapting and changing their approach. So there’s a kind of a whack-a-mole element to this that’s challenging to overcome.”

While Nyhan did express some concerns, he was also optimistic about the research.

“It’s exciting seeing technologists working to address these problems,” Nyhan said. “The problem of misinformation spans multiple disciplines, and we need engineers and computer scientists working alongside social scientists and other kinds of researchers to address the problem.”

One of the primary challenges Nyhan said he sees in combating misinformation and deception is the quantity of unverified information available in the digital age. He believes that the best approach for tackling this issue lies in a symbiotic relationship between humans and detection technologies.

“It’s that combination of computational screening and human intelligence that is likely to be the most effective approach,” Nyhan said.


Coalter Palmer
Coalter (‘22) is a news writer for The Dartmouth. As a government major also pursuing minors in computer science and psychology, he enjoys writing about technology, on-campus research and politics. On campus, Coalter sings for the Dartmouth Brovertones, has taught drill in the German department and does research in the government department.