How to Advance Commonsense Artificial Intelligence
Professor and AI researcher Yejin Choi wants to build machines with “commonsense intelligence.” What is commonsense intelligence and how is she doing this?
Published June 11, 2019
By Robert Birchard
Academy Contributor

Natural language processing is a branch of artificial intelligence (AI) that studies the interactions between computers and human languages. Yejin Choi, associate professor in the Paul G. Allen School of Computer Science & Engineering at the University of Washington and Senior Research Manager of the Allen Institute for Artificial Intelligence’s MOSAIC project, wants to build machines with commonsense intelligence. Dr. Choi recently spoke about her research and what she means when she says “common sense.”
What is the focus of your research?
My research addresses some of the fundamental limits of AI by modeling the common sense that humans have but today’s AI lacks: specifically, the inability of AI to navigate previously unseen situations or perform generalized tasks by relying on memory or external knowledge. Machine learning today is very task-specific and not very efficient; models work well for only one purpose because they lack general knowledge of the world.
How do you define common sense?
Common sense is the basic level of practical knowledge and reasoning capabilities concerning everyday situations and events that are commonly shared among most people. For example, if we forget to close the fridge door, then we can anticipate that the food inside will spoil. Common sense is essential for humans to live and interact with each other in a reasonable and safe way. As AI becomes increasingly important in human life, it is crucial for AI to understand and reason about this fundamental component of human intelligence.
What differentiates human intelligence from AI?
One of the fundamental differences between human intelligence and AI is our understanding of how the world works and our ability to reason based on that understanding.
AI excels at taxonomic knowledge, like whether a penguin is a bird, and at aspects of encyclopedic knowledge, like whether Washington is located in the United States. However, it struggles to reason about everyday commonsense situations: for example, if you need to break a window, it’s better to use a hard, heavy object like a bicycle lock than a soft, lightweight object like a teddy bear. This type of knowledge is difficult to process because people don’t explicitly state that bicycle locks are harder and heavier than teddy bears, and it’s difficult for machines to learn this just by reading text and processing language patterns.
The goal of my research is to acquire implicit knowledge for AI and construct a commonsense knowledge graph, which we will then use to build a deep learning representation that acts as external memory. This external memory can be used in other applications to enable faster learning from less data.
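As a rough illustration of what such a knowledge graph might look like as a data structure, here is a minimal sketch in Python. The (subject, relation, object) triple format, the relation names, and the facts themselves are assumptions made for this example; the interview does not specify the actual schema used in Dr. Choi’s work.

```python
# A toy commonsense knowledge graph stored as (subject, relation, object) triples.
# The relation names and the facts below are invented for illustration only.
from collections import defaultdict

triples = [
    ("leave fridge door open", "causes", "food spoils"),
    ("bicycle lock", "has_property", "hard and heavy"),
    ("teddy bear", "has_property", "soft and lightweight"),
    ("break a window", "requires", "hard and heavy object"),
]

# Index the triples so an application can look up what the graph "knows" about an event.
graph = defaultdict(list)
for subject, relation, obj in triples:
    graph[subject].append((relation, obj))

# Query the external memory: what follows from leaving the fridge door open?
for relation, obj in graph["leave fridge door open"]:
    print(f"leave fridge door open --{relation}--> {obj}")
```

An application could consult a structure like this at inference time, treating it as the external memory described above.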
How do you teach artificial intelligence to reason implicitly?
I’m most excited about a new deep learning model that transfers representations between language and knowledge. A lot of knowledge is embedded in language. Nobody says, ‘My house is bigger than me,’ but if I did say that, you would understand my meaning. Our research involves a new language written for common sense, which everybody can understand and evaluate. This isn’t the natural language read in textbooks or spoken in daily dialogue, but it’s still technically a natural language, albeit a bit outside the scope of typical language use.
Between natural language and commonsense language, there’s a significant overlap in the words and phrases that represent meanings, which allows us to perform transfer learning: we can use both typical language-model training data and our new machine commonsense data. That’s a big plus, because today’s deep-learning-based neural language models are trained on enormous datasets, and even though our machine commonsense dataset is very large in scale, it doesn’t match the scale of typical language-model training data.
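To make the transfer-learning idea concrete, here is a minimal sketch of continuing to train a pretrained neural language model on commonsense facts verbalized as ordinary sentences. The choice of the Hugging Face transformers library, the GPT-2 model, and the example sentences are assumptions for illustration; the interview does not name a specific model or toolkit.

```python
# Minimal sketch: fine-tune a pretrained language model on commonsense
# statements written out as plain sentences. GPT-2 and the example facts
# are illustrative assumptions, not the actual model or data from this research.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

# Commonsense knowledge verbalized in the same form as ordinary text,
# so the same language-model objective applies to both kinds of data.
commonsense_sentences = [
    "If you forget to close the fridge door, the food inside will spoil.",
    "A bicycle lock is harder and heavier than a teddy bear.",
]

model.train()
for sentence in commonsense_sentences:
    inputs = tokenizer(sentence, return_tensors="pt")
    # Standard next-word prediction loss, with the sentence as its own target.
    outputs = model(**inputs, labels=inputs["input_ids"])
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

Because the commonsense facts are written in the same natural-language form the model was pretrained on, the overlap in words and phrases is what makes this kind of transfer possible.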
What are the benefits of your research?
By addressing the lack of common sense in deep learning and AI systems, we can broaden the scope of what an AI system is able to do. With an understanding of implicit knowledge, we can teach new tasks with less data. Along with other researchers, our goals are to improve performance on benchmarks that measure common sense and to reduce the amount of training data needed to address a range of tasks.
Can AI be taught human characteristics like empathy or curiosity?
Understanding social commonsense knowledge and reasoning will improve a machine’s ability to simulate empathy, but it is only mimicking what humans feel. AI doesn’t have feelings. Improved common sense will help AI better understand how humans might feel about a given situation, but it won’t instill these characteristics in AI.