SIGMOD 2020 Tutorial:

Fairness and Bias in Peer Review and other Sociotechnical Intelligent Systems

June 18, 2020, 1.30-5pm PDT (including three breaks)

Presenters: Nihar B. Shah and Zachary Lipton

Zoom link

Slack link
Questions of fairness and bias abound in all socially consequential decision-making. Whether designing the protocols for peer review of research papers, setting hiring policies, designing highway systems, or framing research questions in genetics, any data-management decision with the potential to allocate benefits or confer harms raises concerns about who gains or loses that may fail to surface in naively chosen performance measures.

Data science interacts with these questions in two fundamentally different ways:
(i) as the technology driving the very systems responsible for certain social impacts, posing new questions about what it means for such systems to accord with ethical norms and the law; and
(ii) as a set of powerful tools for analyzing existing data management systems (even those that don't themselves depend on ML), e.g., for auditing existing systems for various biases.

This tutorial will tackle both angles on the interaction between technology and society vis-a-vis concerns over fairness and bias, particularly focusing on the collection and management of data. Our presentation will cover a wide range of disciplinary perspectives: the first part takes a deep dive into peer review and distributed human evaluations, exploring various forms of bias, such as those arising from subjectivity, miscalibration, and fraud; the second part focuses on the social impacts of technology and the formulations of fairness and bias defined via protected characteristics.

Outline

Part 1: Peer review
Presenter: Nihar B. Shah
Time: 1.30pm PDT
Topics:
  • Dishonest behavior
  • Noise
  • Miscalibration
  • Subjectivity
  • Biases
  • Norms and policies

Part 2: Fairness in sociotechnical systems
Presenter: Zachary Lipton
Time: 4pm PDT
Slides (draft)