TheWebConf 2020 Tutorial:
Fairness and Bias in Peer Review and other Sociotechnical Intelligent Systems

Nihar B. Shah and Zachary Lipton

China Standard Time (UTC/GMT +8): 9am Monday, April 20
Eastern time: 9pm on Sunday, April 19
Pacific time: 6pm on Sunday, April 19

Venue: Zoom meeting link
Zoom password and connection instructions: sent by TheWebConf to all registrants via email
Questions of fairness and bias abound in all socially-consequential decision-making. Whether designing the protocols for peer review of research papers, setting hiring policies, or framing research questions in genetics, any decision with the potential to allocate benefits or confer harms raises concerns about *who* gains or loses that may fail to surface in naively-chosen performance measures.

Data science interacts with these questions in two ways:
(i) as the technology driving the very systems responsible for certain social impacts, posing new questions about what it means for such systems to accord with ethical norms and the law; and
(ii) as a set of powerful tools for analyzing existing systems (even those that don’t themselves depend on ML), e.g., auditing them for various biases.

This tutorial will tackle both angles on the interaction between technology and society vis-a-vis concerns over fairness and bias. Our presentation will cover a wide range of disciplinary perspectives, with the first part focusing on the social impacts of technology and formulations of fairness and bias defined via protected characteristics, and the second part taking a deep dive into peer review to explore other forms of bias, such as those arising from subjectivity, miscalibration, and fraud.

Overview and references (pdf)

Outline

Part 1: Fairness in sociotechnical systems
Presenter: Zachary Lipton
Time: 9am CST
Slides

Part 2: Peer review:
  • Biases
  • Noise
  • Miscalibration
  • Dishonest behavior
  • Subjectivity
  • Norms and policies
Presenter: Nihar B. Shah
Time: 10am CST