This summer, Harvard did the unexpected.
It announced it would provide Harvard College students with access to OpenAI’s ChatGPT Edu, an artificial intelligence tool built for use in academic settings. To see an esteemed institution like Harvard promote ChatGPT as a learning tool seemed antithetical to the University’s values of veritas and purported academic rigor. But, counterintuitively, Harvard’s embrace of AI has the potential to better equip students with the knowledge requisite for success in a changing world.
With a free-use, no-shame ChatGPT system comes a myriad of questions: Can AI write my essays, do my problem sets, or help me with my job interviews? And if so, to what extent?
But it seems that Harvard is doing AI right. Allowing access to AI within an ethical use framework addresses these questions much better than a free-range AI policy might, and it encourages students to be more transparent about their AI use.
Take a recent addition to Harvard Medical School’s course catalog as an example. The course, which is required for all students in the Health Sciences and Technology track, focuses on the emergence of AI in healthcare settings, emphasizing medical skills not found in a textbook.
Harvard College should take a page from the Medical School and begin thinking about how to introduce AI and its relevance into the undergraduate experience.
Harvard can approach this in two ways.
First, the College could require freshmen to take a semester-long course on AI in our world, with small class sizes and a pass-fail grading scheme, much like a freshman seminar. Multiple variants of the class could cover a wide range of focuses, including AI in healthcare, government, and economics, so that students can take the version most applicable to their career goals.
As AI penetrates writing, science, and math, a solid understanding of how it is implemented can help prepare students for these changes. It is therefore the University’s responsibility to incentivize, if not require, students to learn about AI and its role in changing the way we think about higher education, our careers, and our personal lives.
Second, we should incorporate AI modules into many of our current course offerings. Although AI has already been integrated into popular introductory courses through chatbots, the College could enhance many other courses with AI resources. A life sciences class could discuss acceptable AI use when writing research manuscripts, while economics courses could explain AI use in stock trading and market analysis.
By incorporating AI education into the curriculum, Harvard can ensure most — if not all — freshman students are aware of AI’s benefits in academic and professional settings.
No matter what approach Harvard takes to AI education, AI ethics must be emphasized. With Administrative Board cases nearly doubling in the 2022-2023 school year, it is clear that students need reminders about the importance of academic integrity. Focusing on how to ethically use AI might help mitigate disciplinary problems and provide transparency about the school’s expectations for AI use.
As AI changes our world, we must educate ourselves on its benefits and drawbacks. Rather than wallow in fear and confusion, let us harness AI as one of our greatest aids.
Dalevyon L.J. Knight ’27, a Crimson Editorial editor, lives in Adams House.