The Dartmouth
November 21, 2024 | Latest Issue

Q&A with inaugural special advisor to the Provost for AI James Dobson

Dobson, who began his role on July 1, will advise Provost David Kotz on artificial intelligence at the College.


On July 1, English professor and Writing Program director James Dobson was named special advisor to the Provost for artificial intelligence. The digital humanities expert — whose knowledge spans both STEM fields and the humanities and social sciences — will advise Provost David Kotz throughout the 2024-2025 academic year. The Dartmouth sat down with Dobson to discuss his background, his vision for artificial intelligence at the College and his responsibilities within his new position. 

Can you tell me a little bit about what your new position entails?

JD: It’s a one-year position to advise the Provost on what Dartmouth should do with AI for the entire institution. I have a charge to talk to people across the entire institution. I’ve been speaking with folks at the Geisel School of Medicine, at the Tuck School of Business and at the Thayer School of Engineering — which is somewhat unusual for me as someone in Arts and Sciences, in the humanities — but that’s been great. So far, I have been trying to develop a vision of what AI might mean at Dartmouth and how we can position ourselves as a leader in this space.

At Dartmouth, you work as a professor in the English and creative writing department — not in a STEM-related discipline. What is it about your expertise that made you the best person for the role? 

JD: I work on the cultural and historical critique of technology. It’s what I’ve been doing for a while. I’ve long been researching machine learning, which is pretty much another name for AI. I’ve written books on the subject. I teach classes. 

I think I was picked as someone who could speak both to the engineering and science members of our community, as well as to those in the humanities and social sciences — especially those who might be skeptical about AI. I myself am. I don’t think it’s useful everywhere. I think there are issues with it. I’m a critic of it. So, I think I was picked as someone who can speak broadly to a number of different positions that people might have about AI.

What is the future of AI at Dartmouth? Is the technology a positive or negative contribution to education? 

JD: I think that’s always going to be situational. That situational aspect of it presents a really confusing and big problem for students who are moving between classes where the policies on AI use differ. In one class, you might be encouraged to use generative AI. You might even have activities and assignments that incorporate AI. You might be in another classroom where the lines are a little unclear: You can use it for some things, but this essay needs to be written entirely by you, with no generated text. You might be in another class that allows absolutely no usage, and if you’re suspected of using AI, you find yourself in a Committee on Standards hearing. That presents a problem for trying to understand where AI use is appropriate and where it isn’t. 

Cornell University has come up with a scheme for helping clarify this — that Yale University has recently suggested they also adopt — that I think makes a lot of sense. The policy involves a very clear statement on every syllabus that says what the AI policy is in that class. Cornell and Yale recommend a sort of traffic light — a red, yellow and green scheme that dictates where AI use is not allowed and where it’s allowable, either with proper documentation on assignments or fully usable wherever you want. That makes a lot of sense to me. 

What do you think will be the most fundamental changes to education at Dartmouth as a result of AI? 

JD: The changes that we’re going to see, potentially, might not have to do with AI itself, but rather with the return to some different practices that we’ve had in the past. If you have an instructor who is still very invested in the essay — like in the humanities — you may start writing more and more frequently. You may be revising papers more frequently with your peers. You may have some classes, even in the humanities, where you’re not producing just a written text anymore but presenting material. 

How do potential uses of AI vary across disciplines?

JD: Surprisingly, in some of the early discussions about AI, I found myself sitting next to some computer scientists and engineers, and they were even more concerned than folks in the humanities about AI. If you’re teaching someone how to program, it turns out that really structured language — like a programming language — is far easier to generate than natural-language text. In general, I do think there will be more use of generative AI, or more openness to it, in the STEM disciplines.

Your position only lasts for one year — the 2024-2025 academic year. Why is the role structured this way, and what is next for AI at Dartmouth after your tenure concludes?

JD: The idea is to generate some momentum, develop a vision that can be put into place and connect people across the campus. I fully anticipate continuing to participate in those kinds of conversations — as well as organizing workshops and symposiums and teaching classes.

There’s been a ton of groundwork laid before I was appointed. Lots of people in the Office of the Provost and elsewhere have been doing work. They handed off reports, proposals and budgets to me. I’m going to look at all of that and put together a vision, and hopefully that can guide us into the next period — which, if we look at the past couple of years, is going to be quite a challenge. 

This interview has been edited for clarity and length.