Making Machines Moral
Published March 27, 2019
Machine learning is already being used to make consequential decisions about people's lives. From home loans to healthcare, algorithms are being deployed to analyze massive data sets, identify patterns, and optimize outcomes with minimal human intervention. But algorithms are created by human beings, and are therefore subject not only to the preconceptions and knowledge gaps of the people who build them, but also to the biases of society as a whole. As a result, they can end up making decisions that reflect or reinforce those biases.
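To make the idea of biased decisions concrete, here is a minimal, hypothetical sketch (in Python; not from the talk) of one common fairness check, the demographic parity gap: the difference in positive-decision rates between demographic groups. The data, group labels, and function name are illustrative assumptions, and demographic parity is only one of many competing fairness criteria.

```python
# A minimal sketch (illustrative only) of one societal-bias check,
# demographic parity, applied to a classifier's binary decisions.
# All data here is synthetic; the function name is a hypothetical helper.

def demographic_parity_gap(decisions, groups):
    """Return the largest gap in positive-decision rates between groups,
    along with the per-group rates."""
    rates = {}
    for g in set(groups):
        members = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(members) / len(members)
    return max(rates.values()) - min(rates.values()), rates

# Synthetic loan decisions (1 = approved) for applicants in two groups.
decisions = [1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"]

gap, rates = demographic_parity_gap(decisions, groups)
print(f"Approval rates by group: {rates}")   # A: 0.67, B: 0.33
print(f"Demographic parity gap: {gap:.2f}")  # a large gap flags disparate outcomes
```

A check like this only surfaces a disparity; deciding whether the disparity is unjust, and what to do about it, is exactly the kind of question the talk addresses.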
In this video from our 13th Annual Machine Learning Symposium, the University of Pennsylvania's Aaron Roth provides a broad perspective on these challenges and presents case studies illustrating how we might overcome them. If we aspire to live in an ethical and just society, our values must ultimately be embedded in the algorithms that make decisions on our behalf.
Another key challenge for machine learning is processing language so that systems can better understand and interact with people. To learn more about this fascinating topic, register for our upcoming "Natural Language, Dialog and Speech (NDS) Symposium."