Machine Learning Thesis Proposal

  • Remote Access - Zoom
  • Virtual Presentation - ET
  • Ph.D. Student
  • Machine Learning Department
  • Carnegie Mellon University

Towards an Application-based Pipeline for Explainability

As machine learning is used more frequently and in higher-risk applications, there is a growing desire to explain how a model makes its predictions rather than simply treating it as a predictive black box. The field of explainable machine learning has emerged to address this desire. However, there have been two consistent and related critiques of the field: (1) Specificity - "explainability" is not a monolithic concept, so we must define which specific aspect of a model's behavior an explanation captures and how capturing that behavior is useful for a specific application; and (2) Rigor - the field lacks rigorous protocols for evaluating explainability methods.

As a result, we aim to demonstrate how to design rigorous explainability pipelines for specific applications. To do so, we use three motivating applications for explainability: helping end users interact with a model, helping domain experts use a model to discover new knowledge, and helping model developers debug a model. Along the way, we demonstrate that Global Counterfactual Explanations are often a useful tool. In the proposed work, we plan to increase the usefulness of Global Counterfactual Explanations for model debugging.

Thesis Committee:
Ameet Talwalkar (Chair)
Zachary Lipton
Carlos Guestrin (University of Washington)
Been Kim (Google Brain)
Marco Tulio Ribeiro (Microsoft Research)
