
Oxford professor Ruth Chang discusses AI and human values

Chang’s lecture, titled “Does AI Design Rest on a Mistake?”, proposed a framework for resolving ethical dilemmas raised by artificial intelligence.


On April 4, the philosophy department and the Neukom Institute for Computational Science hosted University of Oxford professor of jurisprudence Ruth Chang for an event titled “Does AI Design Rest on a Mistake?” Chang spoke about the alignment problem in artificial intelligence and discussed a possible framework for orienting machine behavior more closely toward human values. The event took place in Haldeman Hall, and approximately 50 community members attended.

According to Chang, the alignment problem refers to the challenge of keeping the development of AI in line with human values. To address this challenge, Chang offered recommendations about the tasks AI is given and the agency humans retain over the outcomes of AI processes.

Chang said her suggestions for reimagining AI were aimed particularly at machine learning, a branch of artificial intelligence that enables computers to learn without being explicitly programmed. She distinguished between AI used as a tool for human decision-making and AI used as a substitute for human intelligence.

“Technology that operates as a substitute for human decision-making will continue to be built and indeed proliferate,” Chang said. “We should see our task going forward as figuring out how to make decision-making technology as safe and useful as it can be.”

Chang argued for “chang[ing] the fundamentals of AI design” rather than relying on regulation. She said correcting two “fundamental mistakes” that AI makes about human values would make artificial intelligence more “safe and effective.”

According to Chang, the first mistake in AI design is that current algorithms for machine learning are “systematically designed to achieve evaluative ends through non-evaluative criteria.” 

Chang said that “evaluative ends” — such as determining the best candidate for a job or a “reasonable” prison sentence for a crime — cannot be obtained through non-evaluative criteria, which do not necessarily involve calculating or making judgments about value. She compared this phenomenon to the myth of King Midas, whose wish for a golden touch brings not prosperity but destruction. According to Chang, AI design yields a similar result.

“You get what you ask for, not necessarily what you want,” Chang said.
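In machine learning terms, Chang’s first mistake resembles the familiar problem of optimizing a proxy objective. As a rough illustration, invented for this article rather than drawn from the lecture, the keyword screener below pursues an evaluative end (finding the best candidate) through a non-evaluative criterion (keyword counts), and it maximizes exactly what it was asked for:

```python
# Hypothetical sketch, not an example from Chang's lecture. The keyword
# set, scoring function and resume strings are all invented.

KEYWORDS = {"python", "leadership", "ml"}  # assumed proxy criteria

def keyword_score(resume_text: str) -> int:
    """Non-evaluative criterion: count occurrences of keywords."""
    words = resume_text.lower().split()
    return sum(words.count(k) for k in KEYWORDS)

resumes = {
    "candidate_a": "python python python ml ml leadership",  # keyword-stuffed
    "candidate_b": "led a team shipping production ml systems in python",
}

# The keyword-stuffed resume wins the ranking: the system delivers what
# it was asked for, not necessarily what was wanted.
best = max(resumes, key=lambda name: keyword_score(resumes[name]))
print(best)  # candidate_a
```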

According to Chang, the second flaw in current AI design is the “trichotomy,” or division into three, of possible AI outputs. Contemporary AI, she said, always determines that one option is better than, worse than or equally as good as another. She argued that there should be a fourth option, in which the AI defers decision-making to a human, a framework she called “the parity model of AI.”

Chang explained that AI deferring to human judgment permits a “machine-human hybrid intelligence.” In this model, a human user could step in to make the decision to either “commit” or “drift” to their preferred option, she said. A choice of “commit” would allow the human’s preferred outcome to override the AI, while also influencing the AI to make similar decisions in future situations. A “drift” decision, on the other hand, would allow humans to override the AI choice without a long-term effect.
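The structure Chang described lends itself to a short sketch. The Python below is one possible reading of the parity model as reported here; the Verdict enum, the parity threshold, the feedback list and the function names are invented for illustration and are not an implementation from the talk:

```python
# Hypothetical sketch of the parity model as described in the lecture.
# The enum, threshold, feedback list and hooks are invented here.
from enum import Enum

class Verdict(Enum):
    BETTER = "better"
    WORSE = "worse"
    EQUAL = "equal"
    ON_A_PAR = "on a par"  # the proposed fourth output

committed_preferences = []  # "commit" choices shape future decisions

def compare(option_a, option_b, score) -> Verdict:
    """Trichotomous comparison plus a parity band: options whose
    scores are too close to rank are declared on a par."""
    gap = score(option_a) - score(option_b)
    if gap == 0:
        return Verdict.EQUAL
    if abs(gap) < 0.05:  # assumed parity threshold
        return Verdict.ON_A_PAR
    return Verdict.BETTER if gap > 0 else Verdict.WORSE

def decide(option_a, option_b, score, ask_human):
    """Return a choice, deferring to the human when options are on a par."""
    verdict = compare(option_a, option_b, score)
    if verdict is Verdict.BETTER:
        return option_a
    if verdict is Verdict.WORSE:
        return option_b
    if verdict is Verdict.EQUAL:
        return option_a  # either will do when exactly equal
    # On a par: the human steps in and either "commits" or "drifts".
    choice, mode = ask_human(option_a, option_b)
    if mode == "commit":
        committed_preferences.append(choice)  # long-term influence
    # "drift" overrides this decision only, with no lasting effect
    return choice

# Example: scores land inside the parity band, so the human decides.
pick = decide("option A", "option B",
              score=lambda o: 0.50 if o == "option A" else 0.52,
              ask_human=lambda a, b: (b, "drift"))
print(pick)  # option B
```

On this reading, a “commit” leaves a trace that future decisions can draw on, while a “drift” is a one-off override, matching the long-term versus short-term distinction Chang drew.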

Widespread implementation of the parity model would prevent AI from becoming fully autonomous, Chang said. Although she said a lack of autonomy might seem like a “bug,” she thinks it is “really a feature.”

Philosophy professor John Kulvicki, who attended the event, said he found it “very provocative.”

“When I heard the title and the abstract, I was skeptical,” Kulvicki said. “[But] when I heard the talk I thought, ‘Oh, this is really great and very interesting.’”

Kulvicki added that he was “deeply worried” about AI since “it’s changing so quickly.”

“I just don’t know what it’s going to look like in five months, let alone five years,” Kulvicki said.

Amanda Sun ’23, who is majoring in computer science and has completed a research internship in machine learning, said she found the lecture “very informative.” Sun said AI is “definitely getting hyped up right now,” adding that she thinks general intelligence tools will become similar to internet search engines in the future. 

William Ren, who works in computing at the Thayer School of Engineering, said the lecture introduced “a lot of thought-provoking questions about AI, its use and its creation.”

“I think [AI] is definitely going to become more ubiquitous in people’s lives,” he said.