I'm Emily and I'm a fourth-year PhD student in the Accountable Systems Lab at Carnegie Mellon University, advised by Matt Fredrikson. My research centers on understanding the impacts of machine learning and deep learning models on society. In particular, I focus on showing ways in which commonly used machine learning models may act unfairly; finding ways to pinpoint when models are behaving in a harmful manner in practice; and developing ways to mitigate harmful behavior when possible. Currently, I am especially interested in the intersection of fairness, model stability, and procedural justice.

GHC 7004 | emilybla [at] cs.cmu.edu | emblack

Research Interests

Increasingly, rules governing societal decisions are made not only in government but also at tech startups and in academia--anywhere predictive algorithms are created and reasoned about. Predictive algorithms now make decisions ranging from whether an applicant gets a loan to whether a person is forced to go to jail or allowed to pay bail. Over the past few years, it has been shown that these algorithms do not always work in an equitable fashion, and the headlines keep coming as algorithms are deployed in ever more sensitive settings: racially biased healthcare distribution systems, image recognition systems, and predictive policing programs, for example. For the most part, this is not a problem of the models alone--they act as a mirror to the biases already present in our world.

My research concentrates on finding ways to identify when a model is acting unfairly, curtailing this behavior when possible, and demonstrating that in some situations it is not possible. This research can take different shapes: it often concerns transparency and explainability in AI, since in order to understand how a model is unfair, we need to understand how it works in the first place. It can also consist of bringing attention to unexplored ways in which models can behave unfairly--for example, ways in which model instability can lead to unfairness. I also work on more practical methods of identifying discrimination, such as developing auditing techniques for machine learning models. Finally, I am interested in connecting my theoretical work with practices surrounding the use of machine learning systems.

Papers

Algorithmic Fairness and Vertical Equity: Income Fairness in Tax Audits [FAccT 2022]

This study examines issues of algorithmic fairness in the context of systems that inform tax audit selection by the United States Internal Revenue Service (IRS). While the field of algorithmic fairness has developed primarily around notions of treating like individuals alike, we instead explore the concept of vertical equity---appropriately accounting for relevant differences across individuals---which is a central component of fairness in many public policy settings. Applied to the design of the U.S. individual income tax system, vertical equity relates to the fair allocation of tax and enforcement burdens across taxpayers of different income levels. Through a unique collaboration with the Treasury Department and IRS, we use access to detailed, anonymized individual taxpayer microdata, risk-selected audits, and random audits from 2010-14 to study vertical equity in tax administration. In particular, we assess how the adoption of modern machine learning methods for selecting taxpayer audits may affect vertical equity. Our paper makes four contributions. First, we show how the adoption of more flexible machine learning (classification) methods---as opposed to simpler models---shapes vertical equity by shifting audit burdens from high- to middle-income taxpayers. Second, given concerns about high audit rates of low-income taxpayers, we investigate how existing algorithmic fairness techniques would change the audit distribution. We find that such methods can mitigate some disparities across income buckets, but that this mitigation comes at a steep cost to performance. Third, we show that the choice of whether to treat risk of underreporting as a classification or regression problem is highly consequential. Moving from a classification approach to a regression approach to predict the expected magnitude of underreporting shifts the audit burden substantially toward high-income individuals, while increasing revenue. Last, we investigate the role of differential audit cost in shaping the distribution of audits. Audits of lower-income taxpayers, for instance, are typically conducted by mail and hence pose much lower cost to the IRS. We show that a narrow focus on return-on-investment can undermine vertical equity. Our results have implications for ongoing policy debates and the design of algorithmic tools across the public sector.
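
To make the classification-versus-regression distinction concrete, here is a minimal sketch on purely synthetic data (not the IRS microdata or the models from the paper): ranking returns by the predicted probability of misreporting versus by the predicted dollar magnitude of underreporting can select noticeably different audit populations under a fixed audit budget.

    # Illustrative sketch (synthetic data, off-the-shelf models): audit selection by
    # classification score vs. predicted underreporting magnitude.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor

    rng = np.random.default_rng(0)
    n = 5000
    income = rng.lognormal(mean=10.5, sigma=1.0, size=n)              # synthetic reported income
    features = np.column_stack([np.log(income), rng.normal(size=(n, 3))])

    # Synthetic "truth": misreporting occurs with the same probability at every income
    # level, but the dollar magnitude of underreporting scales with income.
    underreport = np.where(rng.random(n) < 0.3, rng.exponential(0.05 * income), 0.0)
    misreport_flag = (underreport > 0).astype(int)

    clf = GradientBoostingClassifier().fit(features, misreport_flag)
    reg = GradientBoostingRegressor().fit(features, underreport)

    budget = 200  # number of audits we can afford
    audit_clf = np.argsort(-clf.predict_proba(features)[:, 1])[:budget]  # rank by P(misreport)
    audit_reg = np.argsort(-reg.predict(features))[:budget]              # rank by expected $ underreported

    print("median income, classification-selected audits:", np.median(income[audit_clf]))
    print("median income, regression-selected audits:   ", np.median(income[audit_reg]))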

Model Multiplicity: Opportunities, Concerns, Solutions [FAccT 2022]

Recent scholarship has brought attention to the fact that there often exist multiple models for a given prediction task with equal accuracy that differ in their individual-level predictions or aggregate properties. This phenomenon---which we call model multiplicity---can introduce a good deal of flexibility into the model selection process, creating a range of exciting opportunities. By demonstrating that there are many different ways of making equally accurate predictions, multiplicity gives practitioners the freedom to prioritize other values in their model selection process without having to abandon their commitment to maximizing accuracy. However, multiplicity also brings to light a concerning truth: model selection on the basis of accuracy alone---the default procedure in many deployment scenarios---fails to consider what might be meaningful differences between equally accurate models with respect to other criteria such as fairness, robustness, and interpretability. Unless these criteria are taken into account explicitly, developers might end up making unnecessary trade-offs or could even mask intentional discrimination. Furthermore, the prospect that there might exist another model of equal accuracy that flips a prediction for a particular individual may lead to a crisis in justifiability: why should an individual be subject to an adverse model outcome if there exists an equally accurate model that treats them more favorably? In this work, we investigate how to take advantage of the flexibility afforded by model multiplicity while addressing the concerns about justifiability that it might raise.
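
The core empirical observation is easy to reproduce in miniature. The sketch below (synthetic data and an off-the-shelf MLP, not any setup from the paper) trains several models that differ only in their random seed, confirms they reach similar test accuracy, and counts how often they disagree on individual test points.

    # Illustrative sketch: equally accurate models that disagree on individuals,
    # obtained by varying only the training seed.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    models = [MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=s).fit(X_tr, y_tr)
              for s in range(5)]
    accs = [m.score(X_te, y_te) for m in models]
    preds = np.array([m.predict(X_te) for m in models])

    # Fraction of test points on which at least two of these "equally good" models disagree.
    disagree = (preds.min(axis=0) != preds.max(axis=0)).mean()
    print("test accuracies:", np.round(accs, 3))
    print("fraction of test points with disagreement:", round(float(disagree), 3))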

Selective Ensembles for Consistent Predictions [ICLR 2022]

Recent work has shown that models trained to the same objective, and which achieve similar measures of accuracy on consistent test data, may nonetheless behave very differently on individual predictions. This inconsistency is undesirable in high-stakes contexts, such as medical diagnosis and finance. We show that this inconsistent behavior extends beyond predictions to feature attributions, which may likewise have negative implications for the intelligibility of a model, and one's ability to find recourse for subjects. We then introduce selective ensembles to mitigate such inconsistencies by applying hypothesis testing to the predictions of a set of models trained using randomly-selected starting conditions; importantly, selective ensembles can abstain in cases where a consistent outcome cannot be achieved up to a specified confidence level. We prove that prediction disagreement between selective ensembles is bounded, and empirically demonstrate that selective ensembles achieve consistent predictions and feature attributions while maintaining low abstention rates. On several benchmark datasets, selective ensembles reach zero inconsistently predicted points, with abstention rates as low as 1.5%.
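
A minimal sketch of the selective-ensemble idea for binary classification follows, assuming a pool of models that differ only in their random seed: predict with the majority vote only when a binomial test rejects the hypothesis that the vote is a coin flip at the chosen significance level, and abstain otherwise. The dataset, model, pool size, and significance level are all illustrative.

    # Sketch of a selective ensemble for binary classification (illustrative hyperparameters).
    import numpy as np
    from scipy.stats import binomtest
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    models = [MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=s).fit(X_tr, y_tr)
              for s in range(15)]
    votes = np.array([m.predict(X_te) for m in models])  # shape: (n_models, n_points)

    def selective_predict(point_votes, alpha=0.05):
        ones = int(point_votes.sum())
        majority = int(ones * 2 >= len(point_votes))
        k = max(ones, len(point_votes) - ones)
        # Abstain unless the majority vote is statistically distinguishable from chance.
        p = binomtest(k, n=len(point_votes), p=0.5, alternative="greater").pvalue
        return majority if p < alpha else None

    preds = [selective_predict(votes[:, i]) for i in range(votes.shape[1])]
    abstain_rate = np.mean([p is None for p in preds])
    print("abstention rate:", round(float(abstain_rate), 3))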

Consistent Counterfactuals for Deep Models [ICLR 2022]

Counterfactual examples are one of the most commonly-cited methods for explaining the predictions of machine learning models in key areas such as finance and medical diagnosis. Counterfactuals are often discussed under the assumption that the model on which they will be used is static, but in deployment models may be periodically retrained or fine-tuned. This paper studies the consistency of model predictions on counterfactual examples in deep networks under small changes to initial training conditions, such as weight initialization and leave-one-out variations in data, as often occurs during model deployment. We demonstrate experimentally that counterfactual examples for deep models are often inconsistent across such small changes, and that increasing the cost of the counterfactual, a stability-enhancing mitigation suggested by prior work in the context of simpler models, is not a reliable heuristic in deep networks. Rather, our analysis shows that a model's local Lipschitz continuity around the counterfactual is key to its consistency across related models. To this end, we propose Stable Neighbor Search as a way to generate more consistent counterfactual explanations, and illustrate the effectiveness of this approach on several benchmark datasets.
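
The consistency question can be illustrated with a toy experiment (a sanity check, not Stable Neighbor Search): find a crude counterfactual for one model by random search, then ask whether models retrained with different seeds still assign it the flipped label. The dataset, model, and search budget are all placeholder choices.

    # Illustrative check of counterfactual consistency across retrained models.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.neural_network import MLPClassifier

    X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

    def train(seed):
        return MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=seed).fit(X, y)

    base = train(0)
    x = X[0]
    orig = base.predict(x.reshape(1, -1))[0]

    # Crude counterfactual: the closest random perturbation (over a fixed budget of tries)
    # that flips the base model's prediction.
    rng = np.random.default_rng(1)
    for scale in (0.25, 0.5, 1.0, 2.0, 4.0, 8.0):
        candidates = x + rng.normal(scale=scale, size=(5000, x.size))
        flipped = candidates[base.predict(candidates) != orig]
        if len(flipped):
            break
    cf = flipped[np.linalg.norm(flipped - x, axis=1).argmin()]

    # Consistency: do models retrained with other seeds keep the counterfactual's flipped label?
    retrained = [train(s) for s in range(1, 6)]
    agree = np.mean([m.predict(cf.reshape(1, -1))[0] != orig for m in retrained])
    print("fraction of retrained models that keep the flipped label:", float(agree))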

Leave-one-out Unfairness [FAccT 2021]

We introduce leave-one-out unfairness, which characterizes how likely a model’s prediction for an individual will change due to the inclusion or removal of a single other person in the model’s training data. Leave-one-out unfairness appeals to the idea that fair decisions are not arbitrary: they should not be based on the chance event of any one person’s inclusion in the training data. Leave-one-out unfairness is closely related to algorithmic stability, but it focuses on the consistency of an individual point’s prediction outcome over unit changes to the training data, rather than the error of the model in aggregate. Beyond formalizing leave-one-out unfairness, we characterize the extent to which deep models behave leave-one-out unfairly on real data, including in cases where the generalization error is small. Further, we demonstrate that adversarial training and randomized smoothing techniques have opposite effects on leave-one-out fairness, which sheds light on the relationships between robustness, memorization, individual fairness, and leave-one-out fairness in deep models. Finally, we discuss salient practical applications that may be negatively affected by leave-one-out unfairness.
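
Measuring leave-one-out behavior directly is conceptually simple, if computationally expensive: retrain with a single training point removed and count the test predictions that flip. The sketch below does this on a small synthetic task with an illustrative model; it is not the experimental setup from the paper.

    # Sketch: how often does removing one training point flip a test prediction?
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = make_classification(n_samples=600, n_features=15, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    def fit(X_, y_):
        return MLPClassifier(hidden_layer_sizes=(16,), max_iter=400, random_state=0).fit(X_, y_)

    full_preds = fit(X_tr, y_tr).predict(X_te)

    flip_rates = []
    for i in range(20):  # drop each of the first 20 training points in turn
        mask = np.arange(len(X_tr)) != i
        loo_preds = fit(X_tr[mask], y_tr[mask]).predict(X_te)
        flip_rates.append((loo_preds != full_preds).mean())

    print("mean fraction of test predictions flipped by removing one training point:",
          round(float(np.mean(flip_rates)), 4))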

FlipTest: Fairness Testing Via Optimal Transport [FAT* 2020] (Adopted into Amazon's SageMaker Clarify!)

We present FlipTest, a black-box technique for uncovering discrimination in classifiers. FlipTest is motivated by the intuitive question: had an individual been of a different protected status, would the model have treated them differently? Rather than relying on causal information to answer this question, FlipTest leverages optimal transport to match individuals in different protected groups, creating similar pairs of in-distribution samples. We show how to use these instances to detect discrimination by constructing a flipset: the set of individuals whose classifier output changes post-translation, which corresponds to the set of people who may be harmed because of their group membership. To shed light on why the model treats a given subgroup differently, FlipTest produces a transparency report: a ranking of features that are most associated with the model’s behavior on the flipset. Evaluating the approach on three case studies, we show that this provides a computationally inexpensive way to identify subgroups that may be harmed by model discrimination, including in cases where the model satisfies group fairness criteria.
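
The sketch below illustrates the FlipTest recipe on synthetic data, with one simplification: the paper learns an approximate optimal transport map with a GAN, whereas here equal-size group samples are matched exactly with the Hungarian algorithm. The data, classifier, and group assignment are all made up for illustration.

    # Illustrative FlipTest-style flipset via exact optimal-transport matching.
    import numpy as np
    from scipy.optimize import linear_sum_assignment
    from scipy.spatial.distance import cdist
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
    group = (X[:, 0] + np.random.default_rng(0).normal(scale=0.5, size=len(X)) > 0).astype(int)
    clf = LogisticRegression(max_iter=1000).fit(X, y)

    # Equal-size samples from each group, matched by minimum total squared distance.
    n = min((group == 0).sum(), (group == 1).sum())
    A, B = X[group == 0][:n], X[group == 1][:n]
    row, col = linear_sum_assignment(cdist(A, B, metric="sqeuclidean"))

    # Flipset: group-0 individuals whose prediction changes under their group-1 counterpart.
    pred_A = clf.predict(A[row])
    pred_B = clf.predict(B[col])
    flipset = A[row][pred_A != pred_B]
    print("flipset size:", len(flipset), "out of", n)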

Feature-wise Bias Amplification [ICLR 2019]

We study the phenomenon of bias amplification in classifiers, wherein a machine learning model learns to predict classes with a greater disparity than the underlying ground truth. We demonstrate that bias amplification can arise via an inductive bias in gradient descent methods that results in the overestimation of the importance of moderately-predictive "weak" features if insufficient training data is available. This overestimation gives rise to feature-wise bias amplification — a previously unreported form of bias that can be traced back to the features of a trained model. Through analysis and experiments, we show that while some bias cannot be mitigated without sacrificing accuracy, feature-wise bias amplification can be mitigated through targeted feature selection. We present two new feature selection algorithms for mitigating bias amplification in linear models, and show how they can be adapted to convolutional neural networks efficiently. Our experiments on synthetic and real data demonstrate that these algorithms consistently lead to reduced bias without harming accuracy, in some cases eliminating predictive bias altogether while providing modest gains in accuracy.
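
Bias amplification itself is straightforward to measure: compare how skewed the label distribution is in the ground truth with how skewed it is in the model's predictions. The sketch below sets up a loosely analogous situation on synthetic data (a small training set and many weakly informative features); it is not the paper's experimental setup or its feature selection algorithms.

    # Sketch: measuring the gap between ground-truth and predicted class proportions.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=2000, n_features=100, n_informative=30,
                               weights=[0.7, 0.3], flip_y=0.1, random_state=0)
    # Deliberately small training set relative to the number of features.
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=200, random_state=0)

    clf = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)
    pred = clf.predict(X_te)

    true_rate = y_te.mean()   # minority-class rate in the ground truth
    pred_rate = pred.mean()   # minority-class rate in the model's predictions
    print(f"ground-truth positive rate: {true_rate:.3f}")
    print(f"predicted positive rate:    {pred_rate:.3f}")
    # A predicted rate more skewed than the ground truth indicates amplification.
    print(f"difference (pred - true):   {pred_rate - true_rate:+.3f}")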

Evaluating Facial Recognition Technology: A Protocol for Performance Assessment in New Domains [To appear in: 98 Denv. L. Rev. 753 (2021)]

Framework legislation concerning the regulation of facial recognition technology (FRT) has included general calls for evaluation. In this paper, we provide guidance on how to actually implement and realize such evaluation. First, we bring attention to the likely possibility that facial recognition systems do not achieve the accuracy circulated in their marketing materials when used in real-world deployment scenarios. We propose a framework for evaluating facial recognition accuracy in deployment, involving various stakeholders---FRT vendors, FRT users, policymakers, journalists, and civil society organizations---to promote a more reliable understanding of FRT performance.

A Call for Universities to Develop Requirements for Community Engagement in AI Research [Fair & Responsible AI Workshop @ CHI2020]

We call for universities to develop and implement requirements for community engagement in AI research. We propose that universities create these requirements so that: (1) university-based AI researchers will be incentivized to incorporate meaningful community engagement throughout the research lifecycle, (2) the resulting research is more effective at serving the needs and interests of impacted communities, not simply the stakeholders with greater influence, and (3) the AI field values the process and challenge of community engagement as an important contribution in its own right.
