Editorials

AI Ethics: Not Eventually, but Now

By The Crimson Editorial Board

The name “Governance of Emerging Technology and Technology Innovations for Next-Generation Governance” sounds a bit like someone asked ChatGPT to come up with a phrase that spells out GETTING.

However, apart from the somewhat entertaining acronym, the launch of GETTING-Plurality by Harvard’s Edmond and Lily Safra Center for Ethics is nothing to laugh at. Together with the Harvard AI Safety Team, a student organization founded last spring, these new initiatives represent a welcome enterprise undertaken by Harvard affiliates to tackle the myriad concerns arising from the recent boom in AI and adjacent technologies.

And what a boom it has been! The past couple of months' headlines have been saturated with news of AI development, from the launch of GPT-4 to Midjourney version 5 to Google and Microsoft announcing AI integration in their services.

The necessity of conversations around AI usage and how to ethically govern tech integration seems undeniable, and it’s heartening to see the ways in which the academic world is confronting this challenge. The interdisciplinary nature of groups like GETTING-Plurality — bringing together philosophers, legal scholars, computer scientists, and more — is especially commendable.

However, the moral quandaries raised by AI's advancement are far from fully addressed.

AI ethics is a problem here and now. GETTING-Plurality and HAIST seem to focus more on the long-term risks of AI's future deployment. Though these potential existential threats are formidable, immediate concerns are even more pressing.

At this very moment, AI is being used to police, surveil, and discriminate in biased ways that further harm already disadvantaged demographics. The philosophy of longtermism, with its ideological ties to eugenics, does not address this current violence. We call for a newcomer to the still-developing space of AI ethics at Harvard to work substantively on mitigating AI's destructive effects on marginalized communities right now.

We also find it alarming that much groundbreaking AI research is coming from private companies that are largely motivated by profit. Competition can foster a mentality of building quickly and recklessly, leaving less room for ethical consideration. This state of affairs is deeply worrisome, especially given the potential for malign abuses of AI technology. Any research into AI ethics must contend with the realities of this market-driven industry.

Finally, institutions must use their authority to shape the trajectory of AI's development, guiding it along an ethical path. Much as review boards approve research on human subjects, Harvard should create an institutional review board for AI research that uses a cost-benefit framework to evaluate concerns about bias and other harms. Additionally, we call on Congress to create a regulatory agency to ensure the morality of new AI technologies.

As Moore's law makes way for specially designed AI accelerators, ethics research is becoming increasingly important. With many technologies trained to learn from experience, such research is not only relevant in anticipation of some distant dystopian future; it is vital today, right now.

This staff editorial solely represents the majority view of The Crimson Editorial Board. It is the product of discussions at regular Editorial Board meetings. In order to ensure the impartiality of our journalism, Crimson editors who choose to opine and vote at these meetings are not involved in the reporting of articles on similar topics.
