Research Theme:
|
All scientific and social disciplines are faced with an ever-increasing demand to analyze datasets that are unprecedented in scale (amount of data and its dimensionality) as well as degree of corruption (noise, outliers,
missing and indirect observations). Extraction of meaningful information from such big and dirty datasets requires achieving the competing goals of computational efficiency and
statistical optimality (optimal accuracy for a given amount of data).
My research goal is to understand the fundamental tradeoffs between these two quantities, and to design algorithms that learn and leverage the inherent structure of data, in the form of clusters, graphs, subspaces, and manifolds, to achieve such tradeoffs.
Additionally, I am investigating how these tradeoffs can be further improved by designing interactive algorithms that make judicious choices of
where, what, and how data are acquired, stored, and processed. The vision is to introduce a new paradigm of intelligent machine learning
algorithms that learn continually via feedback and make high-level decisions in collaboration with humans, thus
pushing the envelope of automated scientific and social discoveries.
My research has been supported by grants from NSF, AFOSR, and NIH, including NSF CAREER and the AFOSR Young Investigator Award.
I was also the recipient of the A. Nico Habermann Career Development Chair award from 2013 to 2016.
My publications are
available here.
|
Bio:
|
I received my B.E. in Electronics and Communication Engineering from the
University of Delhi in 2001,
and M.S. and Ph.D. degrees in Electrical Engineering from the University of Wisconsin-Madison in 2003 and 2008, respectively.
I was a Postdoctoral Research Associate at the Program in Applied and
Computational Mathematics at Princeton University from 2008 to 2009 before joining Carnegie Mellon.
Detailed CV (pdf) - Updated: 12/06/21
|
|