Yale ethicist warns that AI-generated content threatens human creation

The Montgomery Fellows Program kicked off a new series on artificial intelligence with a lecture from Yale ethicist Luciano Floridi.


On Feb. 27, the Montgomery Fellows Program hosted Yale University Digital Ethics Center founding director Luciano Floridi for a talk titled “AI and the Future of Content.” Floridi’s lecture focused on the importance of maintaining human-made content in a world that is becoming increasingly reliant on artificial intelligence.

The talk kicked off the program’s “Agency, Speech and Ethics in the AI Era” series, which aims to bring experts to campus for AI-related discussions on “language, freedom of expression and agency,” according to the Montgomery Fellows Program website. The Montgomery Fellows Program operates with the mission “to bring outstanding luminaries from the academic world as well as from non-academic spheres to campus.”

At the start of the event, Montgomery Fellows Program director Steve Swayne introduced both Floridi and James Dobson, special advisor to the Provost for AI, who served as the talk’s moderator.

Floridi began by discussing the “post-Vitruvian world,” a reference to Leonardo da Vinci’s drawing “Vitruvian Man,” which depicts a “human-centric way of understanding the universe.” Floridi explained that da Vinci’s drawing is a strong analogy for the world we used to inhabit, since the current world is no longer human-centered.

“[It] has been inconceivable for millennia that if you find any content anywhere, that content had not been produced by anything else but a human being,” Floridi said. 

Floridi then shifted focus, discussing how all manner of content has evolved in light of AI’s rise. 

“Content is no longer necessarily human product,” he said. “[Humans] have not been at the center of anything for some time.”

To illustrate his point, Floridi showcased a collection of AI-generated content, including an image of a cup of coffee being poured, a voice presentation of his slides and a gospel song. The exercise was meant to demonstrate that there was “no human being” behind any of the examples, he explained.

Floridi identified a variety of consequences stemming from the decline of human-created content, including copyright questions about training data (input), intellectual property questions about produced content (output), plagiarism, reliability and uncertainty about “the future of content and the uniqueness of human output.”

Floridi added that some people favor the success of AI and technology over that of humans by “enveloping the world around the machines.” 

This shift in focus toward machines is “perfectly fine as long as you don’t forget all that is for human purposes,” Floridi said. He then urged the audience to reconsider the value of human work, particularly when compared to AI-generated output.

“If you are assessing yourself in terms of the output, you are quite useless,” Floridi said. “It’s the process that matters.”

Floridi concluded his lecture by calling for less consumerism, fewer monopolies on what he termed “semantic capital,” meaning those who produce content, and less synchronization, which he defined as when, where and by whom content is consumed.

“We need to understand [content] more profoundly … because we don’t at the moment,” he said. “The study of content studies is fragmented across a million different disciplines… we need to step back and say is there a unified picture here? Is it worth having a unified picture?” 

In an interview after the event, Floridi said he believes content production represents “probably the biggest change that AI is going to generate for [humanity].” 

“This is a turning point,” he said. “We’ve never seen in our history machines producing content that you cannot distinguish from human content.” 

Floridi added that, as a result of AI content creation, companies are “no longer interested” in content moderation. He explained that this loss of interest can lead to an oversaturated infosphere, or information environment, which may disrupt the infosphere’s “ecology.”

“There’s a lot of bad content around, and we should keep our infosphere clean,” Floridi said. “If you poison this content, if you let it go down the wrong road, if we are not ecological and careful about it, it will destroy our society.”

In an interview after the event, Dobson explained that he wanted to find speakers for the series who were “focused on these major concerns about what ability we have to make decisions, agency questions and what it means to use tools that might limit speech or shape it in some way.”

“It’s very clear that we’re all focused on the university and what this means for the university and Dartmouth,” Dobson said. “These questions [of how and what professors should be teaching] are going to come up in every lecture of ‘What should we be teaching?’ [and] ‘How should we be teaching?’” 

Italian professor Tania Convertini said she was drawn to attend the event because she follows Floridi on social media. 

“When I heard he would be a Montgomery Fellow I was extremely excited,” she said. Convertini also arranged for Floridi to visit and teach her language pedagogy class, FRIT 31, “How Languages are Learned.”

Convertini added that the talk stressed “the uniqueness of the human process” and that “we need to really put an emphasis on” the input rather than the output. She said she strongly believes in the “idea of the process” and applies it to everything she does in her life and in the classroom.

The series is set to continue in the spring with Cornell University president emerita Martha Pollack ’79, philosopher and legal theorist Sylvie Delacroix and media scholar and science historian Orit Halpern ’94.