Reviewers in peer review are often miscalibrated: some are strict, others lenient, extreme, or moderate. Various attempts have been made to calibrate reviews in conference peer review, but they are hampered by a critical bottleneck: the small number of samples (reviews) per reviewer. To increase the sample sizes, we consider using exogenously obtained information about reviewers' calibration, such as data from past conferences. The drawback of this approach is that it may compromise the privacy of which reviewer reviewed which paper. We formulate this problem as that of calibrating reviews while ensuring privacy. We undertake a theoretical study of this problem under a simplified yet challenging model involving two reviewers, two papers, and a MAP-computing adversary. Our main results establish the Pareto frontier of the tradeoff between privacy and utility (accepting the better papers), and we design computationally efficient algorithms that are Pareto optimal. Our work provides a foundation for future research to address the important problem of miscalibration on a larger scale.
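To make the setting concrete, here is a minimal simulation sketch of the two-reviewer, two-paper model. The score model (reported score = paper quality + reviewer bias + noise), the specific quality and bias values, and the debiasing rule are all illustrative assumptions, not the thesis's actual mechanism; the sketch only shows why exogenous calibration data helps, and why using it creates the privacy tension studied in the thesis.

```python
import random

# Assumed (illustrative) model: reported score = quality + reviewer bias + noise,
# with biases known exogenously (e.g. estimated from past conferences).

def noisy_score(quality, bias, rng, noise_sd=0.5):
    return quality + bias + rng.gauss(0.0, noise_sd)

def pick_winner(scores, biases=None):
    """Accept the paper with the higher score, optionally after debiasing."""
    if biases is not None:
        scores = [s - b for s, b in zip(scores, biases)]
    return 0 if scores[0] >= scores[1] else 1

rng = random.Random(0)
qualities = [1.0, 0.2]   # paper 0 is genuinely better (assumed values)
biases = [-1.5, 1.5]     # reviewer 0 is strict, reviewer 1 is lenient
# Fixed, secret assignment: reviewer i reviews paper i.

trials = 2000
raw_correct = cal_correct = 0
for _ in range(trials):
    scores = [noisy_score(q, b, rng) for q, b in zip(qualities, biases)]
    raw_correct += pick_winner(scores) == 0          # ignore calibration
    cal_correct += pick_winner(scores, biases) == 0  # use calibration data

raw_acc = raw_correct / trials
calibrated_acc = cal_correct / trials
print(raw_acc, calibrated_acc)
# Debiasing recovers the better paper far more often, but the accept
# decision is now correlated with the secret reviewer-paper assignment --
# exactly the leakage a MAP-computing adversary could exploit, which
# motivates the privacy-utility tradeoff studied here.
```

In this sketch the uncalibrated rule almost always accepts the worse paper (the lenient reviewer inflates its score), while the calibrated rule mostly accepts the better one; the thesis's question is how much of that utility gain survives once the acceptance decision must also hide the assignment from the adversary.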
Nihar Shah (Co-Chair)
Weina Wang (Co-Chair)
Zoom Participation. See announcement.