Machine Learning Thesis Defense

  • Remote Access - Zoom
  • Virtual Presentation - ET
  • ADARSH PRASAD
  • Ph.D. Student
  • Machine Learning Department
  • Carnegie Mellon University

Towards Robust and Resilient Machine Learning

Some common assumptions when building machine learning pipelines are: (1) the training data is sufficiently “clean” and well-behaved, so that there are few or no outliers and the distribution of the data does not have very long tails; (2) the test data follows the same distribution as the training data; and (3) the data is generated from, or is close to, a known model class, such as a linear model. However, with easier access to computers, the internet, and various sensor-based technologies, modern data sets are no longer carefully curated and are often collected in a decentralized, distributed fashion. Consequently, they are plagued with the complexities of heterogeneity, adversarial manipulations, and outliers. As we enter this age of dirty data, the aforementioned assumptions underlying machine learning pipelines are increasingly indefensible. In this thesis, our goal is to modify state-of-the-art ML techniques and to design new algorithms that work even when these assumptions fail. To address this, we first introduce and define the 3Rs of Machine Learning: for the widespread adoption of machine learning, we believe it is imperative that any model have the following three basic properties:

  1. Robustness: The model can be trained even with noisy and corrupted data (see the sketch after this list).
  2. Reliability: After training, when deployed in the real world, the model should not break down under benign shifts of the data distribution.
  3. Resilience: The modeling procedure should work under model misspecification, i.e., even when the modeling assumptions break down.
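
To make the first of these concrete, consider the classical Huber contamination model, in which an adversary replaces an eps-fraction of the samples with arbitrary points. The following minimal NumPy sketch is our own toy illustration, not an estimator from the thesis; the helper trimmed_mean and all parameter choices (n, eps, trim) are illustrative assumptions:

```python
# Toy illustration of Huber contamination: an eps-fraction of samples is
# replaced by arbitrary outliers. The sample mean is hijacked; a trimmed
# mean (a classical robust estimator) is not. For intuition only -- not
# one of the estimators presented in the talk.
import numpy as np

rng = np.random.default_rng(0)

def trimmed_mean(x, trim=0.1):
    """Drop the lowest and highest `trim` fraction of samples, then average."""
    lo, hi = np.quantile(x, [trim, 1.0 - trim])
    return x[(x >= lo) & (x <= hi)].mean()

n, eps = 1000, 0.05                    # sample size and contamination level
x = rng.normal(loc=0.0, size=n)        # inliers: standard Gaussian, true mean 0
x[: int(eps * n)] = 1e3                # adversary plants an eps-fraction of outliers

print(f"sample mean : {x.mean():7.2f}")         # roughly eps * 1e3 = 50, far off
print(f"trimmed mean: {trimmed_mean(x):7.2f}")  # stays near the true mean 0
```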

In this talk, we will primarily present some of our results on robustness, where we provide a new class of statistically optimal estimators that are provably robust in a variety of settings, including arbitrary contamination and heavy-tailed data. We complement these statistically optimal estimators with a new class of computationally efficient estimators for robust risk minimization. These results provide some of the first computationally tractable and provably robust estimators for general statistical models such as linear regression and logistic regression. Finally, we provide a brief glimpse into what these results entail for modern machine learning.
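
As a caricature of robust risk minimization (only a caricature: the talk's computationally efficient estimators and their guarantees are not reproduced here), one recurring recipe in this literature is to run gradient descent but replace the empirical average of per-sample gradients with a robust aggregate. A hypothetical sketch for least-squares linear regression, with winsorized_mean and robust_gd as our own illustrative helpers:

```python
# Hedged sketch of robust risk minimization: gradient descent where the
# usual average of per-sample gradients is swapped for a coordinate-wise
# winsorized mean, bounding the influence of any small group of outliers.
import numpy as np

rng = np.random.default_rng(1)

def winsorized_mean(g, trim=0.1):
    """Clip each coordinate to its [trim, 1-trim] quantiles, then average."""
    lo, hi = np.quantile(g, [trim, 1.0 - trim], axis=0)
    return np.clip(g, lo, hi).mean(axis=0)

def robust_gd(X, y, steps=300, lr=0.1, trim=0.1):
    """Gradient descent for least squares with robust gradient aggregation."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        residuals = X @ w - y                    # shape (n,)
        grads = residuals[:, None] * X           # per-sample gradients, shape (n, d)
        w -= lr * winsorized_mean(grads, trim)   # robust step, not grads.mean(0)
    return w

# Synthetic check: a 5% fraction of labels is grossly corrupted.
n, d = 500, 3
X = rng.normal(size=(n, d))
w_star = np.array([1.0, -2.0, 0.5])
y = X @ w_star + 0.1 * rng.normal(size=n)
y[: int(0.05 * n)] += 100.0                      # adversarial label noise

print("robust GD estimate:", np.round(robust_gd(X, y), 2))  # close to w_star
```

A convenient feature of this recipe is that robustness is injected purely at the aggregation step, so the same wrapper extends to other losses, such as logistic regression, by swapping in their per-sample gradients.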

Additionally, if time permits, we will present our results on Resilience (or agnostic estimation), where we provide estimators that adapt to misspecification as measured by f-divergences.
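
For readers unfamiliar with the term, an f-divergence between distributions P and Q is defined, for a convex function f with f(1) = 0, by

    D_f(P || Q) = ∫ f(dP/dQ) dQ,

which recovers the total variation distance for f(t) = |t - 1| / 2 and the Kullback-Leibler divergence for f(t) = t log t.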

Thesis Committee
Pradeep Ravikumar (Co-chair)
Sivaraman Balakrishnan (Co-chair)
Larry Wasserman
Sujay Sanghavi (University of Texas at Austin)
