Traditional notions of algorithmic fairness in machine learning typically demand approximate parity of some statistical property across a small number of pre-defined groups, such as equality of false negative rates across racial groups. In this talk we describe learning algorithms that satisfy stronger fairness guarantees, including fairness at the individual level in sequential decision-making, and fairness across a large or infinite class of structured subgroups. Algorithms satisfying this latter notion avoid what we call “fairness gerrymandering”, in which a model appears fair on each of a handful of attributes in isolation while badly violating fairness on structured combinations of them. The learning problem can be cast as a two-person, zero-sum game between a Learner and an Auditor, with provable convergence guarantees and promising empirical performance.
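The Learner–Auditor game can be sketched in a toy form. Everything below is a hypothetical illustration, not the algorithm from the talk: it assumes a finite hypothesis class of one-feature threshold rules, subgroups defined by conjunctions of two binary attributes, false-positive-rate disparity as the audit statistic, a multiplicative-weights Auditor, and a best-response Learner. The penalty weight `lam` and step size are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two features, binary labels driven (noisily) by the first feature.
n = 400
X = rng.normal(size=(n, 2))
y = (X[:, 0] + 0.3 * rng.normal(size=n) > 0).astype(int)
attrs = (X > 0).astype(int)  # two binary protected attributes (assumption)

# Finite hypothesis class: axis-aligned threshold classifiers (assumption).
hypotheses = [(j, t) for j in range(2) for t in np.linspace(-1, 1, 21)]

def predict(h, X):
    j, t = h
    return (X[:, j] > t).astype(int)

# Structured subgroups: all conjunctions of the two binary attributes.
subgroups = [(a, b) for a in (0, 1) for b in (0, 1)]

def members(g):
    a, b = g
    return (attrs[:, 0] == a) & (attrs[:, 1] == b)

def fp_rate(pred, mask):
    neg = mask & (y == 0)
    return pred[neg].mean() if neg.sum() else 0.0

def violation(pred, g):
    # Audit statistic: |subgroup FP rate - overall FP rate|.
    return abs(fp_rate(pred, members(g)) - fp_rate(pred, np.ones(n, bool)))

lam, T = 2.0, 50                 # penalty weight and number of rounds (arbitrary)
w = np.ones(len(subgroups))      # Auditor's multiplicative weights over subgroups
avg_pred = np.zeros(n)

for _ in range(T):
    p = w / w.sum()
    # Learner best-responds: minimize error plus the Auditor-weighted violation.
    def cost(h):
        pred = predict(h, X)
        return (pred != y).mean() + lam * sum(
            pi * violation(pred, g) for pi, g in zip(p, subgroups))
    pred_t = predict(min(hypotheses, key=cost), X)
    avg_pred += pred_t
    # Auditor upweights the subgroups where the violation is largest.
    for i, g in enumerate(subgroups):
        w[i] *= np.exp(0.5 * violation(pred_t, g))

# The average of the Learner's plays approximates an equilibrium classifier.
final = (avg_pred / T >= 0.5).astype(int)
worst = max(violation(final, g) for g in subgroups)
print(round(worst, 3))
```

In the full setting the Auditor's best response is itself a learning problem (finding the most-violated subgroup via cost-sensitive classification); here the subgroup class is small enough to enumerate, which is what makes this toy version so short.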

About the Speaker