Privacy Seminar

  • SOLON BAROCAS
  • Principal Researcher, Microsoft Research, New York City, and
  • Assistant Professor, Department of Information Science, Cornell University

Privacy Dependencies

This Article offers a comprehensive survey of privacy dependencies—the many ways that our privacy depends on the decisions and disclosures of other people. What we do and what we say can reveal as much about others as it does about ourselves, even when we don’t realize it or when we think we’re sharing information about ourselves alone.

We identify three bases upon which our privacy can depend: our social ties, our similarities to others, and our differences from others. In a tie-based dependency, an observer learns about one person by virtue of her social relationships with others—family, friends, or other associates. In a similarity-based dependency, inferences about our unrevealed attributes are drawn from our similarities to others for whom that attribute is known. And in difference-based dependencies, revelations about ourselves demonstrate how we are different from others—by showing, for example, how we “break the mold” of normal behavior or establishing how we rank relative to others with respect to some desirable attribute.

We elaborate how these dependencies operate, isolating the relevant mechanisms and providing concrete examples of each mechanism in practice, the values they implicate, and the legal and technical interventions that may be brought to bear on them. Our work adds to a growing chorus demonstrating that privacy is neither an individual choice nor an individual value—but it is the first to systematically demonstrate how different types of dependencies can raise very different normative concerns, implicate different areas of law, and create different challenges for regulation.

Solon Barocas is a Principal Researcher in the New York City lab of Microsoft Research and an Assistant Professor in the Department of Information Science at Cornell University. He is also a Faculty Associate at the Berkman Klein Center for Internet & Society at Harvard University. His research explores ethical and policy issues in artificial intelligence, particularly fairness in machine learning, methods for bringing accountability to automated decision-making, and the privacy implications of inference. He co-founded the annual workshop on Fairness, Accountability, and Transparency in Machine Learning (FAT/ML) and later established the ACM conference on Fairness, Accountability, and Transparency (FAT*). Previously, he was a Postdoctoral Researcher at Microsoft Research and the Center for Information Technology Policy at Princeton University. He completed his doctorate at New York University.
