Machine Learning Distinguished Lecture

  • Gates Hillman Centers
  • Reddy Conference Room 4405

Friends Don’t Let Friends Deploy Black-Box Models: The Importance of Intelligibility in Machine Learning

In machine learning, a tradeoff must often be made between accuracy and intelligibility: the most accurate models (deep nets, boosted trees, and random forests) are usually not very intelligible, and the most intelligible models (logistic regression, small trees, and decision lists) are usually less accurate. This tradeoff limits the accuracy of models that can be safely deployed in mission-critical applications such as healthcare, where being able to understand, validate, edit, and ultimately trust a learned model is important.  We have developed a learning method that is often as accurate as full-complexity models, yet even more intelligible than linear models.  This makes it easy to understand what a model has learned, and also makes it easier to edit the model when it learns inappropriate things.  In this talk I'll present a healthcare case study where these high-accuracy models uncover surprising patterns in the data that would have made deploying a black-box model risky.  I'll also briefly show how we're using these models to detect bias in domains where fairness and transparency are paramount, and how these models can be used to understand what is learned by black-box models such as deep nets.
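The tradeoff the abstract describes can be made concrete with a minimal scikit-learn sketch (this illustrates the general point, not the speaker's own method): a logistic regression exposes a readable coefficient per feature, while a random forest's decision logic is spread across hundreds of trees.

```python
# Hedged illustration of the accuracy/intelligibility tradeoff using
# scikit-learn on a built-in dataset; not the method presented in the talk.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Intelligible model: each prediction is a weighted sum of features,
# so the learned coefficients can be read, validated, and edited.
glm = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
glm.fit(X_train, y_train)

# Black-box model: often competitive or better in accuracy, but its
# decision logic is distributed across many trees and hard to inspect.
forest = RandomForestClassifier(n_estimators=200, random_state=0)
forest.fit(X_train, y_train)

print(f"logistic regression accuracy: {glm.score(X_test, y_test):.3f}")
print(f"random forest accuracy:       {forest.score(X_test, y_test):.3f}")
```

On small tabular problems like this the two scores are often close; the gap widens on messier data, which is exactly the regime where choosing the black box sacrifices the ability to audit what was learned.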

Rich Caruana is a Senior Researcher at Microsoft Research. Before joining Microsoft, Rich was on the faculty in the Computer Science Department at Cornell University, at UCLA's Medical School, and at CMU's Center for Learning and Discovery.  Rich's Ph.D. is from Carnegie Mellon University, where he worked with Tom Mitchell and Herb Simon.  His thesis on Multi-Task Learning helped create interest in a new subfield of machine learning called Transfer Learning.  Rich received an NSF CAREER Award in 2004 (for Meta Clustering), best paper awards in 2005 (with Alex Niculescu-Mizil), 2007 (with Daria Sorokina), and 2014 (with Todd Kulesza, Saleema Amershi, Danyel Fisher, and Denis Charles), co-chaired KDD in 2007 (with Xindong Wu), and serves as area chair for NIPS, ICML, and KDD.  His current research focus is on learning for medical decision making, transparent modeling, deep learning, and computational ecology.

Faculty Host: Ameet Talwalkar
