Peer review is the backbone of scientific research. It is, however, faced with a number of challenges that cause unfairness to authors and degrade the overall quality of the process. This talk will present principled and practical approaches to battle these demons in peer review:
(1) Subjectivity: How to ensure that all papers are judged by the same yardstick?
(2) Mis-calibration: How to use ratings in the presence of arbitrary or adversarial mis-calibration?
(3) Bias: How to rigorously test for the existence of (gender/fame/race/...) biases in peer review?
(4) Strategic behavior: How to insulate peer review from the strategic behavior of author-reviewers?
(5) Noise: How to assign reviewers to papers to ensure fair and accurate evaluations under review noise?
Nihar B. Shah is an Assistant Professor in the Machine Learning and Computer Science departments at Carnegie Mellon University (CMU). His research interests include statistics, machine learning, information theory, and game theory, with a focus on applications to learning from people. He is a recipient of an NSF CAREER Award (2020-25), the 2017 David J. Sakrison memorial prize from EECS Berkeley for a "truly outstanding and innovative PhD thesis", the Microsoft Research PhD Fellowship (2014-16), the Berkeley Fellowship (2011-13), the IEEE Data Storage Best Paper and Best Student Paper Awards for the years 2011/2012, and the SVC Aiya Medal (2010). He also supervised the Best Student Paper at AAMAS 2019.
Zoom Participation Enabled. See announcement.