Ethical Implications in the Development of AI

Published November 21, 2023

By Nick Fetty

Betty Li Hou, a Ph.D. student in computer science at the New York University Courant Institute of Mathematical Sciences, presented her lecture “AI Alignment Through a Societal Lens” on November 9 at The New York Academy of Sciences.

Seminar attendees included the 2023 cohort of the Academy’s AI and Society post-doctoral fellowship program (a collaboration with Arizona State University’s School for the Future of Innovation in Society), who asked questions and engaged in dialogue throughout the talk. Hou’s hour-long presentation examined the ethical impacts that AI systems can have on societies, and how machine learning, philosophy, sociology, and law should all come together in the development of these systems.

“AI doesn’t exist independently from these other disciplines and so AI research in many ways needs to consider these dimensions, otherwise we’re only looking at one piece of the picture,” said Hou.

Hou’s research aims to capture the broader societal dynamics and issues surrounding the so-called ‘alignment problem,’ a term popularized by author and researcher Brian Christian in his 2020 book of the same name. Solving the alignment problem means ensuring that AI systems pursue goals that match human values and interests, while avoiding unintended or undesirable outcomes.

Developing Ethical AI Systems

Because values and interests vary across (and even within) countries and cultures, researchers struggle to develop ethical AI systems that transcend these differences and serve societies in a beneficial way. Absent a clear guide for developing ethical AI systems, one of the key questions from Hou’s research becomes apparent: What values are implicitly or explicitly encoded in products?

“I think there are a lot of problems and risks that we need to sort through before extracting benefits from AI,” said Hou. “But I also see so many ways AI provides potential benefits, anything from helping with environmental issues to detecting harmful content online to helping businesses operate more efficiently. Even using AI for complex medical tasks like radiology.”

Social media content moderation is one area where AI algorithms have shown potential for serving society in a positive way. On YouTube, for example, 90% of videos that are reviewed are initially flagged by AI algorithms designed to spot copyrighted material or other content that violates the platform’s terms of service.

Hou, whose current work is also supported by a DeepMind Ph.D. Scholarship and an NSF Graduate Research Fellowship, previously served as a Hackworth Fellow at the Markkula Center for Applied Ethics as an undergraduate studying computer science and engineering at Santa Clara University. She closed her recent lecture by reemphasizing the importance of interdisciplinary research and collaboration in the development of AI systems that adequately serve society going forward.

“Computer scientists need to look beyond their field when answering certain ethical and societal issues around AI,” Hou said. “Interdisciplinary collaboration is absolutely necessary.”


Academy Communications Department
This article was written by a member of the Academy Communications team.