
Making Machines Moral

Published March 27, 2019

The Ethical Algorithm


Computer scientist Aaron Roth discusses what can be done to ensure that machine learning doesn't reinforce bias.

Machine learning is already being used to make important decisions that affect people's lives. From home loans to healthcare, algorithms are being deployed to analyze massive data sets, identify patterns, and optimize outcomes with minimal human intervention. But algorithms are created by human beings and are therefore subject not only to the preconceptions and knowledge gaps of the individuals who built them, but also to the biases of society as a whole. As a result, they can end up making decisions that reflect or reinforce societal bias.
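The mechanism is easy to demonstrate in miniature: a model that simply optimizes accuracy against historical decisions will reproduce whatever bias those decisions contain. The sketch below is a hypothetical illustration (the data, zip codes, and approval rates are invented, not drawn from the talk), in which the simplest accuracy-maximizing predictor inherits a disparity baked into its training data.

```python
from collections import Counter

# Hypothetical historical loan decisions (invented data): each record is
# (zip_code, approved). The zip code acts as a proxy for a protected group.
history = [("A", 1)] * 80 + [("A", 0)] * 20 + [("B", 1)] * 30 + [("B", 0)] * 70

def fit_majority(records):
    """'Train' the simplest accuracy-optimizing model: for each zip code,
    predict the majority outcome observed in the historical data."""
    votes = {}
    for zip_code, label in records:
        votes.setdefault(zip_code, Counter())[label] += 1
    return {z: c.most_common(1)[0][0] for z, c in votes.items()}

model = fit_majority(history)

# The model mirrors the historical disparity: every applicant in zip A
# is approved, every applicant in zip B is denied.
print(model)  # {'A': 1, 'B': 0}
```

Note that the model never sees a group label directly; the correlated proxy feature is enough for it to replicate the disparity, which is why simply deleting protected attributes from a data set does not remove the bias.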

In this video from our 13th Annual Machine Learning Symposium, the University of Pennsylvania’s Aaron Roth provides a broad perspective on these challenges, and some case studies to illustrate how we might overcome them. If we aspire to live in an ethical and just society, these values must ultimately be embedded in the algorithms that make decisions on our behalf.

Another key challenge for machine learning is how to process language in order to better understand and interact with people. To learn more about this fascinating topic, register for our upcoming "Natural Language, Dialog and Speech (NDS) Symposium."