The Dartmouth
May 10, 2025 | Latest Issue

Former Cornell president Martha Pollack ’79 urges universities to embrace artificial intelligence

Pollack, who is currently a Montgomery Fellow, spoke about the societal impacts, risks and opportunities of AI for universities at an April 21 talk hosted by the Montgomery Fellows.


Former Cornell University president Martha Pollack ’79 argued that universities should embrace developments in artificial intelligence in order to win back trust in educational institutions. Pollack spoke about the impact of AI on higher education at an April 21 talk hosted by the Montgomery Fellows program. 

Approximately 60 people attended the event in the Loew Auditorium as part of the Montgomery Fellows’ “Agency, Speech and Ethics in the AI Era” series. Pollack served as president of Cornell from 2017 to 2024 and is currently a Montgomery Fellow. The talk was moderated by former College President Phil Hanlon ’77, who described her as “one of the deepest thinkers about higher education that I know.”

The conversation, introduced by special advisor to the provost for AI James Dobson, spanned topics from the origins of AI to its implications for university teaching, research and policy-making.

Pollack offered a three-part framework for introducing AI to pedagogy: AI literacy, integrating AI into classroom practices and increasing institutional efficiency.  

“We need to teach students how to use AI well, but also when not to use it,” she said. “Changing pedagogy is really hard but necessary.”

Pollack gave examples of how faculty across disciplines are experimenting with AI, from law professors prompting chatbots to simulate jury reactions to using large language models for feedback generation. She emphasized that while automation may reduce some faculty workload, the student-professor relationship remains central.

“We’re social animals,” Pollack said. “You don’t go to Red Hawk for the beer — you go for the people. What’s true at the bar is true on campus.”

Pollack also expressed concern over the rising cost of higher education and declining public trust in universities, noting that AI might offer tools to help institutions remain accessible and relevant.

“If the AI education costs 50 cents, and the Dartmouth education costs $50, we risk pricing ourselves out of the market,” she said.

Pollack, a computer scientist, also reflected on her time as an undergraduate at Dartmouth when computer science offerings were limited. She later earned a Ph.D. in computer science and became a prominent voice in natural language processing and cognitive assistance.

“By the time I was a student here in the late ’70s, there was no computer science department,” Pollack said. “There were very, very few computer science courses. I took both of them.”  

Pollack highlighted the critical role of computing scale and data in recent AI breakthroughs, pointing to developments such as better sensors for collecting data.

“What really changed was scale,” she said. “We have so much more computational power, storage and data — and the internet itself acts as a massive sensor.”

Pollack also warned against simplistic comparisons between AI and the human mind. Citing computer scientist Richard Sutton’s work, Pollack explained that most advances in AI come from harnessing raw data and computing power rather than mimicking human reasoning.

“We should stop trying to find simple ways to think about the contents of minds,” Pollack said. “These things are just stochastic parrots — and yet, they’re astonishingly good at it.”

In the Q&A session, Pollack was asked what most concerned her about AI’s rise. She cited the potential for authoritarian surveillance enabled by machine learning.

“AI brings the emperor closer,” Pollack said, referencing the erosion of privacy. “That’s what keeps me up at night.”

Pollack closed with a call to action for universities — particularly faculty — to lead conversations on AI’s ethical and cultural implications.

“What does it mean to be human or to be humane?” she said. “This is the moment for the humanities to shine — but we need deep, rigorous thought, not just critique.”

Attendee Addison Park ’28 said the event prompted her to think about AI in a “much bigger context.”

“She didn’t pretend to have all the answers, which made her even more compelling,” Park said.

The Montgomery Fellows program will continue its AI-themed series this spring, bringing a range of experts to campus to explore technology’s evolving place in society.