The Dartmouth

Julia Dressel '17's thesis receives national attention for crime-prediction research

Software used to predict whether a defendant will reoffend may be less accurate than previously believed, according to senior thesis research by Julia Dressel ’17 that has recently received national attention.

Dressel and computer science professor Hany Farid studied risk-assessment software known as COMPAS — short for Correctional Offender Management Profiling for Alternative Sanctions — and found that humans were as accurate as the algorithm in predicting defendants’ risk of committing future crimes.

The algorithm is used in courts throughout the nation to predict whether a defendant will commit another crime within two years, Farid said.

He added that the resulting predictions are then used by the courts to determine prison sentences, bail amounts and parole eligibility.

“It’s pretty significant in ways that affect people’s livelihood,” Farid said. “This is not recommending music or movies to you; this is deciding whether you’re going to be incarcerated or not.”

Dressel said she found herself thinking about pursuing a thesis in computer science during her junior spring.

She added that she was looking to integrate her two majors, computer science and women’s, gender and sexuality studies, in order to research technology and bias. After reading a ProPublica report exposing racial bias in the COMPAS software, she was inspired to work with algorithms.

Dressel said she reached out to Farid, who agreed to work with her on her senior thesis. They began testing algorithms, comparing their accuracy with COMPAS and with human predictions, she said.

Farid added that they trained their algorithms on varying numbers of factors to understand how the algorithms made their predictions.

Meanwhile, according to Farid, the pair asked people to make predictions about defendants via Amazon Mechanical Turk, an online crowdsourcing platform where users are paid to complete short tasks. Participants were given a paragraph with seven pieces of information about each defendant, not including race, and were asked to predict whether the defendant was at high risk of committing a crime within two years.

What they found, Farid and Dressel both said, was surprising. The human predictions were about 67 percent accurate, roughly the same as Dressel and Farid’s algorithms and COMPAS, both of which were accurate about 65 percent of the time, Farid said.

According to Equivant, the company that developed and owns COMPAS, the commercial algorithm uses six factors to determine risk, including age, sex and prior convictions, but like Dressel and Farid’s work, does not include race.

Farid and Dressel then decided to test their algorithm using fewer factors to determine risk. They found that by using only two — age and prior convictions — the accuracy of their algorithm remained the same as COMPAS’ and the human predictions.

From the simplified model and the human predictions, they concluded that “if you are young and have committed a lot of crimes, you are high-risk,” Farid explained.

He added that the opposite is also true — older individuals with fewer convictions are considered low-risk.
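
The article does not spell out the exact model behind the two-factor result, but a rough sketch of a classifier built on only age and number of prior convictions might look like the following. The synthetic data, feature names and choice of logistic regression below are illustrative assumptions, not the researchers’ actual code or the COMPAS data.

    # Illustrative sketch only: a simple two-feature risk classifier of the
    # kind described above. The synthetic data, feature names, and choice of
    # logistic regression are assumptions for illustration; this is not
    # Dressel and Farid's actual model or the COMPAS dataset.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)
    n = 5000

    # Two features: defendant age and number of prior convictions (synthetic).
    age = rng.integers(18, 70, size=n)
    priors = rng.poisson(2.0, size=n)

    # Synthetic labels loosely following the pattern the article describes:
    # younger defendants with more priors are more likely to reoffend.
    logit = 0.25 * priors - 0.05 * (age - 40)
    p = 1.0 / (1.0 + np.exp(-logit))
    recidivated = rng.random(n) < p

    X = np.column_stack([age, priors])
    X_train, X_test, y_train, y_test = train_test_split(
        X, recidivated, test_size=0.3, random_state=0
    )

    # A two-feature linear classifier, analogous in spirit to the simplified
    # model the article says matched COMPAS's roughly 65 percent accuracy.
    clf = LogisticRegression().fit(X_train, y_train)
    print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))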

Because people with no training or familiarity with criminal justice predicted defendants’ risk of committing crimes with the same accuracy as COMPAS, the results led Farid and Dressel to express concern that courts may not have enough information to know how much weight to give the algorithm’s predictions.

Despite the findings, Dressel said people may still favor the algorithms because of the perception that they are automatically better than human prediction. She added that people unfamiliar with how the system works are quick to assume that a machine is more accurate and objective than a person.

Farid added that whether one uses algorithms or human judgment, predicting the future is very hard, and whether such predictions should determine outcomes like sentences or parole is a bigger issue that needs to be addressed.

“Should you be penalized because an algorithm thinks you may commit a crime in the future?” he said.

Farid also added that the research he conducted with Dressel is not meant to tell courts to stop using algorithms, objective measures and data in sentencing. Rather, he said, he and Dressel believe the state has a responsibility of transparency: if it intends to use algorithms to determine how much time a defendant serves, it should be able to explain to the defendant how the court reached its decision.

Dressel said that her research with Farid is meant to show that the accuracy of algorithms should not be taken for granted.

“We need to test them and regulate them and make sure they are actually accurate before implementing them into the criminal justice system,” she said.


Gabriel Onate
