Professor Martin Puchner is the Byron and Anita Wien Chair of Drama and of English and Comparative Literature. This interview has been edited for length and clarity.
FM: When did you realize that literature was something you wanted to research?
MP: Pretty late. I was not a great reader in middle or high school. I read some literature, but I was preoccupied with other things — sports and whatnot, being a teenager.
When I went off to college, I majored in philosophy — which also involves reading, but of a very particular kind. I think it was only in maybe the last year of high school and then in college that I started to read much more widely and deeply. I came across some of these sort of crazy modernist texts like “Ulysses” that really compelled me.
After college and going into grad school, I started to drift from philosophy to literature.
FM: On your website, you have these AI models that allow users to converse with famous artists and philosophers and changemakers. Out of all the historical figures you could have chosen, how did you decide who to include?
MP: It all started with Socrates. Socrates was the first. The reason is that many years ago, I wrote a book about the history of the philosophical dialogue. At some point, figures like Socrates — but also, in other traditions, Confucius and the Buddha — these transformative philosophers, or religious figures in the case of the Buddha, decided not to write treatises. In fact, they refused to write entirely. They only developed their style of thinking through a particular form of conversation. That’s what I wrote about in that book.
Then when ChatGPT emerged, I realized that this is a particular form of conversation. I remembered this earlier interest in these dialogic philosophers, and I thought, ‘Wow, this format really would lend itself to this kind of interaction.’ And then when it became possible to customize GPTs, I thought, ‘All right, I’ll give it a try,’ and it worked amazingly.
FM: Have you ever considered making custom GPTs for someone who’s still alive?
MP: I have not, in part because it has to be based on texts in the public domain.
FM: Maybe you could train it on your own books.
MP: Okay, all right, I’ll admit I did create one. It’s not publicly available, but I did feed it a couple of my books — I don’t own the copyright to many of my books, so I used early, unedited versions of a couple of them. Though I have to say, I have not spent a lot of time in conversation with myself. I tried it, and it works well enough.
FM: Did you really make the Machiavelli chatbot to give leadership advice to President Garber?
MP: It was sort of tongue in cheek, but I did send it to him, and he said he liked it — but no. I actually think Machiavelli gets a bad rap. He really invented politics as we know it. His reputation is as the scheming, backroom kind of figure, but in “The Prince,” he creates basically a theory of modern politics. So I thought, ‘Yes, it would be very interesting to have someone like that.’ So it was more for general purposes, but I’m very glad that our university is in good hands.
FM: I’ve heard a lot of concerns about AI, especially among academics. Was it natural for you to respond to these technologies by experimenting with them, or were you also initially afraid?
MP: I think both. I completely get how people react to this crazy new technology by being afraid. I grew up on the same “Terminator,” “Skynet,” “Frankenstein” stories.
I get the fear, but I’ve become very skeptical of apocalyptic scenarios in general. For me, it started when I was thinking about storytelling and climate change, where I became skeptical of a certain kind of apocalyptic narrative. I think that’s also true of technology. We live at a time, I feel like, where the default is almost apocalyptic thinking. That can, in certain contexts, maybe have limited use — I can see that it might activate a certain kind of person — but on the whole, I think it does more damage than good. In any case, this is maybe one reason why I was relatively quick to check my own apocalyptic impulses. And I’ve always been interested in technology. I’ve written a lot about the deep history of technology and culture, from the invention of writing to the printing press. I felt like, ‘Here, this is just happening around me. I should give it a try.’
FM: How often do you use ChatGPT day-to-day?
MP: ChatGPT in general, several times a day — not necessarily my customized GPTs, but yeah, I definitely use it. Mostly as a kind of research assistant, I would say; that’s the main use.
FM: In a speech you gave, you described AI as something that will transform us as we incorporate it into our lives. I’m curious, as you’ve used AI for recent projects, how you’ve personally been transformed.
MP: I would say that I’ve become more confident to wade into areas I don’t know very much about, but that’s just an effect of basically having an army of research assistants at my disposal.
FM: What do you think writing courses should look like at Harvard in 15 years?
MP: I’m working on a writing course right now.
I have a job at the Provost’s Office at VPAL [the Office of the Vice Provost for Advances in Learning]. In that capacity, I’ve been working for almost two years on an online writing course that’s trying to do what doesn’t really exist yet — namely, to have a scalable writing course. Scalable in the online lingo means that hundreds of thousands of people can take it. It doesn’t rely, as writing instruction usually does, on this incredibly expensive, very high-touch model where you have a small class and a teacher who reads draft after draft after draft — as happens in Expos.
I think it will really help teach potentially a lot of people how to write. And we did it by breaking it into many different steps and trying to figure out in each step how to teach each component and also how to integrate AI — so both how to use AI and when not to use AI. That’s the whole approach.
FM: What advice would you give to current students who are grappling with how to use AI in their writing classes this semester?
MP: I think there is only one use of AI, especially if you’re trying to learn how to write, that’s not good. And that is to just produce a couple of prompts and let it write the first draft. I think everything else is great. It’s great as a search engine. I think it’s really great as a sparring partner; a lot of students have trouble incorporating counter-arguments and counter-evidence into their writing.
So there are actually lots of uses, and I’m all for them. The one use where you just push a button and use the first answer it gives — I think that’s the one use where I feel like you would actually cheat yourself because you wouldn’t learn good writing.
FM: Speaking of writing, you have published three books in the last five years. Do you have any book ideas you’ve been holding onto that you hope to write someday?
MP: I’m just shopping around a proposal for a book on AI and culture, because it’s been very much on my mind, because I experimented with it, and because I had written on technology and culture before. That’s my next book project. I think AI raises such interesting questions: what it means to be human; our reliance on technology, especially in the arts; creativity; what it means to rely on tools; what it means to grow up with training data and to what extent we imitate what we already know; where novelty and creativity come from. It just raises fascinating questions.
FM: If you were to use one of your philosophers to have a dialogue as you’re writing your book, would you list the philosopher as a co-author?
MP: In fact, for my book proposal, the title right now is ‘Artificially Intelligent: What AI Teaches Us About Human Creativity, by Martin Puchner, written in collaboration with customized chatbots.’ So yes, exactly. Conversations with these chatbots have informed the book — they have formed part of the book — and they are very interesting.
Part of the argument I’m trying to make has to do with the fact that, in some sense, through our reliance on language and writing and math and other technologies, we have been artificially intelligent for a long time. Not necessarily in the same way as AI, but in a way that actually intersects with AI — which is why we were able to create it, and why we’re able to interact with it in the first place. In trying to push that argument and see where its limits are, I’ve had conversations about that topic with several of my chatbots.
FM: In your most recent book, “Culture: The Story of Us, From Cave Art to K-Pop,” you write about how the humanities are a central driving force to the development of human civilization. Does AI count as a driving force?
MP: It’s definitely one. In this book, “Culture,” I don’t talk as much about technology. I do think about the broader question of how culture gets preserved and transmitted from one generation to the next. I start with cave art and think of cave art as an institution and storage system for culture that allows culture to be passed down. So I think about technology in this very broad sense, mostly in terms of storage and access.
In a previous book, “The Written World,” I had written more specifically about writing technologies. When AI emerged, and when I, as I just confessed, created a bot based on my books, I started to ask that bot the question of what these books would say about AI. What this Martin-Puchner-bot responded, rightly, was saying, ‘Yes, in some sense, AI relies on stored cultural knowledge in different technologies.’
So yes, I think it is absolutely emerging as an incredibly important tool for cultural production. In some sense, that’s what my book is going to be about. It’s definitely one, but it’s one used by humans. In the discussion about AI, thinking about what AI does as if it were this one agent is really a mistake. I think AI and large language models will become a utility — something more like WiFi — that we access and that we customize and use in a million different ways. That’s why my emphasis is on customization of AI.
FM: You’ve talked before about how storytelling shapes the future. Is it difficult to write and publish your work knowing that your work is shaping the future?
MP: Oh, I don’t think about it very much. Of course, if you write — writing is hard, it takes a lot of time, a lot of pain — you do it because you hope it will have some effect. But at the same time, I’m very aware that with any one book or story you tell, who knows how it’s gonna affect the future?
I don’t think I have lots of fantasies about personally shaping the future. I think it’s more, for me, trying to figure out what is happening and drawing some conclusions from it, and giving people tools to think about it themselves.
FM: What works of art have taught you the most about the future?
MP: My mind first goes to the negative, and this goes back to the apocalyptic stories — the works I’ve become very skeptical about as guides to the future. This is everything from the Epic of Gilgamesh — the first appearance of this apocalyptic story of the flood, which then shows up in the Hebrew Bible, in the Christian Bible, in the Quran, and in many other texts since — to recent Hollywood films about AI, like “Mission: Impossible.” Those are really misleading. My campaign right now is a kind of negative one — getting away from those stories that have been captivating us.
I think that skeptical or critical impulse maybe comes first. I don’t know; I think it’s almost impossible for me to say which texts have taught me the most about the future, because I don’t know what the future will be like.
— Associate Magazine Editor Kate J. Kaufman can be reached at kate.kaufman@thecrimson.com.