I'm Emily and I'm a second-year PhD student in the Accountable Systems Lab at Carnegie Mellon University, advised by Matt Fredrikson. My research centers on making machine learning and deep learning systems fairer and more transparent. Currently, I am especially interested in understanding the many ways in which models can be unfair, and in finding reliable ways to audit models for these behaviors.

GHC 7004 | emilybla@cs.cmu.edu | emblack

Research Interests

Increasingly, the rules governing societal decisions are made not just in government but at tech startups and in academia: anywhere predictive algorithms are created and reasoned about. Predictive algorithms have permeated our society, unbeknownst to most, and they make decisions ranging from whether an applicant gets a loan to whether a person is held in jail or allowed to pay bail. Over the past few years it has become clear that these algorithms do not always work equitably, and the headlines multiply as algorithms are deployed in ever more sensitive settings: Amazon's gender-biased hiring AI, racially biased predictive policing programs, and an algorithm for predicting child neglect that oversamples the poor. For the most part, this is not a problem of the models alone: they act as a mirror to the biases already present in our world.

My research concentrates on understanding how we can identify when a model is acting unfairly. This work takes different shapes: it often intersects with transparency and explainability in AI, since in order to understand how a model is unfair, we first need to understand how it works. I also work on more practical methods of identifying discrimination, such as developing auditing techniques for machine learning models.


Publications

FlipTest: Fairness Testing via Optimal Transport [FAT* 2020]

We present FlipTest, a black-box technique for uncovering discrimination in classifiers. FlipTest is motivated by the intuitive question: had an individual been of a different protected status, would the model have treated them differently? Rather than relying on causal information to answer this question, FlipTest leverages optimal transport to match individuals in different protected groups, creating similar pairs of in-distribution samples. We show how to use these instances to detect discrimination by constructing a flipset: the set of individuals whose classifier output changes post-translation, which corresponds to the set of people who may be harmed because of their group membership. To shed light on why the model treats a given subgroup differently, FlipTest produces a transparency report: a ranking of features that are most associated with the model's behavior on the flipset. Evaluating the approach on three case studies, we show that FlipTest provides a computationally inexpensive way to identify subgroups that may be harmed by model discrimination, including in cases where the model satisfies group fairness criteria.
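The core idea can be sketched in a few lines, substituting an exact minimum-cost matching (via `scipy.optimize.linear_sum_assignment`) for the paper's optimal transport mapping. The threshold model, the feature shift, and the helper names below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def flipset(model, group_a, group_b):
    """Pair each member of group_a with a member of group_b via a
    minimum-cost one-to-one matching (squared Euclidean distance),
    then return the matched pairs whose predictions disagree."""
    # Pairwise squared-distance cost matrix between the two groups.
    cost = ((group_a[:, None, :] - group_b[None, :, :]) ** 2).sum(axis=-1)
    rows, cols = linear_sum_assignment(cost)
    preds_a = model(group_a[rows])
    preds_b = model(group_b[cols])
    flips = preds_a != preds_b
    return list(zip(rows[flips], cols[flips]))

def transparency_report(group_a, group_b, pairs):
    """Rank features by mean absolute displacement across flipped pairs,
    a crude stand-in for the paper's transparency report."""
    ia = [i for i, _ in pairs]
    ib = [j for _, j in pairs]
    diffs = np.abs(group_a[ia] - group_b[ib])
    return np.argsort(-diffs.mean(axis=0))

# Toy model: approve (1) when the first feature exceeds a threshold.
model = lambda X: (X[:, 0] > 0.5).astype(int)

rng = np.random.default_rng(0)
group_a = rng.uniform(0.0, 1.0, size=(50, 3))
group_b = group_a.copy()
group_b[:, 0] -= 0.1  # shift the decisive feature for the comparison group

pairs = flipset(model, group_a, group_b)
if pairs:
    print(len(pairs), "flipped; feature ranking:",
          transparency_report(group_a, group_b, pairs))
```

Individuals whose first feature sits just above the threshold land in the flipset, and the report correctly ranks that feature first, since it is the only one that moves between the matched pairs.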

Feature-wise Bias Amplification [ICLR 2019]

We study the phenomenon of bias amplification in classifiers, wherein a machine learning model learns to predict classes with a greater disparity than the underlying ground truth. We demonstrate that bias amplification can arise via an inductive bias in gradient descent methods that results in the overestimation of the importance of moderately-predictive "weak" features if insufficient training data is available. This overestimation gives rise to feature-wise bias amplification — a previously unreported form of bias that can be traced back to the features of a trained model. Through analysis and experiments, we show that while some bias cannot be mitigated without sacrificing accuracy, feature-wise bias amplification can be mitigated through targeted feature selection. We present two new feature selection algorithms for mitigating bias amplification in linear models, and show how they can be adapted to convolutional neural networks efficiently. Our experiments on synthetic and real data demonstrate that these algorithms consistently lead to reduced bias without harming accuracy, in some cases eliminating predictive bias altogether while providing modest gains in accuracy.
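As a rough illustration of the phenomenon described above (a simplified sketch, not the paper's formal definition or its mitigation algorithms), bias amplification on a binary task can be quantified as the gap between how often a model predicts the majority class and how often that class actually occurs in the labels:

```python
import numpy as np

def bias_amplification(y_true, y_pred):
    """How much more often the model predicts the majority class than
    that class actually occurs in the ground truth. Positive values
    mean the model exaggerates the existing class imbalance."""
    majority = 1 if y_true.mean() >= 0.5 else 0
    true_rate = (y_true == majority).mean()
    pred_rate = (y_pred == majority).mean()
    return pred_rate - true_rate

# Ground truth is 60% positive; a model predicting 80% positive
# amplifies the imbalance by 0.2 (up to float rounding).
y_true = np.array([1] * 6 + [0] * 4)
y_pred = np.array([1] * 8 + [0] * 2)
print(bias_amplification(y_true, y_pred))
```

A perfectly calibrated predictor scores zero under this measure; targeted feature selection, as proposed in the paper, aims to push a model's score back toward zero without sacrificing accuracy.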

A Call for Universities to Develop Requirements for Community Engagement in AI Research [Fair & Responsible AI Workshop @ CHI2020]

We call for universities to develop and implement requirements for community engagement in AI research. We propose that universities create these requirements so that: (1) university-based AI researchers will be incentivized to incorporate meaningful community engagement throughout the research lifecycle, (2) the resulting research is more effective at serving the needs and interests of impacted communities, not simply the stakeholders with greater influence, and (3) the AI field values the process and challenge of community engagement as an important contribution in its own right.
