Harvard Releases Guidance for AI Use in Classrooms

University Hall, located in Harvard Yard, houses the offices of top Faculty of Arts and Sciences administrators. By Zing Gee
By Elias J. Schisgall, Crimson Staff Writer


Less than a year ago, practically nobody was familiar with ChatGPT. Now, top Harvard academic officials are bracing for a world where artificial intelligence tools are ubiquitous throughout higher education.

The Faculty of Arts and Sciences, Harvard’s largest academic school, released its first public guidance this summer for professors on the use of generative AI in their courses.

Issued by the Office of Undergraduate Education, the guidance is broad, offering general information on how generative AI works and its potential academic applications. The guidance does not impose an AI policy across the FAS, instead suggesting draft language for three different approaches professors can take toward AI use in their courses: a “maximally restrictive” policy, a “fully-encouraging” policy, and a mixed approach.

Dean of Science Christopher W. Stubbs said in an interview that a “grounding” principle for the guidance was that “faculty have ownership over their courses.”

“I don’t think there is a one-size-fits-all course policy here,” Stubbs said. “What we’re asking the faculty is that they become informed, that they understand the impact this has on the learning objectives for their courses, and then importantly, that they communicate to students clearly and often what their course policy is.”

The FAS guidance also builds on University-wide AI guidelines issued in July, which focused on protecting non-public data. The FAS guidance instructs faculty not to enter student work into AI systems, and Stubbs noted that third-party AI platforms own both users’ prompts and computer-generated responses.

Instead, Harvard University Information Technology is developing an “AI Sandbox” tool in conjunction with third-party AI companies for Harvard affiliates to use. The tool will debut this month, according to the HUIT website.

“The AI Sandbox offers a single interface that enables access to several LLMs and provides a ‘walled-off,’ secure environment in which to experiment with generative AI, mitigating many security and privacy risks and ensuring the data entered will not be used to train any public AI tools,” Harvard spokesperson Jason A. Newton wrote in an email.

The school also hosted two informational sessions for faculty early last month on the impact of generative AI in STEM and writing courses. The sessions, recordings of which are publicly available, detail possible applications of AI as a learning tool, such as real-time information synthesis, code generation, and argument evaluation. They also suggest strategies to “AI-proof” coursework, such as written exams and multi-step writing processes.

Still, the FAS discouraged professors from using AI detection tools, which Stubbs said were too unreliable to be useful.

Stubbs said a short-term priority for the FAS is to “verify that the [syllabi] for courses have a clear articulation of course policy” around generative AI.

“I think it’s essential that we communicate class-by-class to students what the expectations are for how this gets incorporated into the learning goals of the course,” Stubbs added.

Last semester, 57 percent of faculty who responded to The Crimson’s 2023 Faculty Survey reported that they did not have an explicit policy on the use of AI. Despite the FAS’ insistence on the importance of clear AI policies, many courses across the school’s divisions still lack them this semester.

Among 51 available syllabi for fall semester classes in the Government Department reviewed by The Crimson, 29 lacked any mention of AI use, including 24 undergraduate courses. Among 47 syllabi in the English Department, 20 lacked an AI policy, including 15 undergraduate courses.

In the Molecular and Cellular Biology Department, six out of nine available syllabi for fall courses lacked an AI policy. Out of 27 Computer Science fall courses with available syllabi, six lacked an AI policy, including some where artificial intelligence was itself an object of study.

The AI policies that did appear in course syllabi varied widely: some courses restricted tools like ChatGPT entirely, while others permitted their use in full, provided it was appropriately acknowledged. Many courses detailed unacceptable uses of AI, such as answering homework questions, explaining concepts, or writing code, while others forbade AI use altogether except for specific course assignments.

—Staff writer Elias J. Schisgall can be reached at elias.schisgall@thecrimson.com. Follow him on X @eschisgall.

