The 2022 FAI PI Meeting
Introduction: This two-day virtual event convenes Fair-AI researchers whose work is supported by the NSF Program on Fairness in AI in Collaboration with Amazon (FAI), along with prominent voices in the responsible-AI space from government, industry, and civil society. Participants will discuss basic research, recent advances, and roadblocks to developing best practices and standards in Fair-AI research and practice.
Event dates and duration: July 11-12, 2022, from 11 AM to 5 PM ET.
Event venues: Zoom (talks, panels, and breakout discussions) and Gather (poster and networking sessions).
Keynotes:
NIST AI Risk Management Framework (Elham Tabassi, NIST):
AI systems sometimes do not operate as intended because they are making inferences from patterns observed in data rather than from a true understanding of what causes those patterns. Ensuring that these inferences are helpful and not harmful in particular use cases – especially when inferences are rapidly scaled and amplified – is fundamental to trustworthy AI. While answers to the question of what makes an AI technology trustworthy differ, there are certain key characteristics that support trustworthiness, including accuracy, explainability and interpretability, privacy, reliability, robustness, safety, security (resilience), and mitigation of harmful bias. There are also key guiding principles to take into account, such as accountability, fairness, and equity. Cultivating trust and communication about how to understand and manage the risks of AI systems will help create opportunities for innovation and realize the full potential of this technology.
This presentation gives an overview of NIST’s effort to develop a framework to better manage the risks that AI poses to individuals, organizations, and society. The NIST Artificial Intelligence Risk Management Framework (AI RMF or Framework) is intended for voluntary use and aims to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.
Unfairness and Bias in Multi-sided/Multistakeholder Online Platforms (Krishna Gummadi, MPI-SWS):
Algorithmic (data-driven) nudges are increasingly being used to help human decision makers on a variety of match-/market-making platforms, from gig-economy and e-commerce platforms to online conferencing and charity platforms. A defining feature of these platforms is that they are multi-sided, i.e., the algorithms mediate interactions between different types of stakeholders. Many existing notions of individual and group fairness in the algorithmic fairness literature have largely been inspired by predictive risk assessment contexts (e.g., credit or recidivism risk assessments) that are often single-sided. As such, these notions are insufficient to address the novel unfairness and bias considerations that arise when the mediating algorithm can trade off beneficial outcomes for one side against those of the other side(s). In this talk, I will discuss the challenges we faced in our attempts to define and operationalize unfairness and bias notions in the context of a large e-commerce platform.
Schedule: Each day runs six hours (11 AM – 5 PM ET), mixing content sessions with breaks. The schedule for each day is as follows:
Day 1—Theme: Tools, Best-practices, and Standards for AI in High-stakes Domains | |||||
Time (ET) | Activity | Duration | Details | Speakers | Venue |
11:00 AM – 11:20 AM | Opening Remarks | 15 min + 5 min buffer | Hoda Heidari (3 min): welcome, logistics, and outline of the event; Margaret Martonosi, NSF CISE Assistant Director (6 min): overview of the NSF/Amazon partnership; Prem Natarajan (6 min): overview of Amazon’s work on fairness | Hoda Heidari (CMU) Margaret Martonosi (NSF) Prem Natarajan (Amazon) | Zoom |
11:20 AM – 12:00 PM | Keynote speech | 30 min + 10 min Q&A | NIST AI Risk Management Framework | Elham Tabassi (NIST) | Zoom |
15-min Break | |||||
12:15 PM – 1:15 PM | Lightning talks + joint Q&A session Theme: Application-focused Fair-AI | 7 min | Foundations of Fair AI in Medicine: Ensuring the Fair Use of Patient Attributes | Flavio Calmon | Zoom |
7 min | Quantifying and Mitigating Disparities in Language Technologies | Graham Neubig | Zoom | ||
7 min | Measuring and Mitigating Biases in Generic Image Representation | Vicente Ordonez | Zoom | ||
7 min | Towards Holistic Bias Mitigation in Computer Vision Systems | Nuno Vasconcelos | Zoom | ||
7 min | Using Machine Learning to Address Structural Bias in Personnel Selection | Nan Zhang | Zoom | ||
7 min | Bias measurement and mitigation at industrial scale | Rahul Gupta (Amazon) | Zoom | ||
15-min Break | |||||
1:30 PM – 2:15 PM | Poster and networking session | 45 min | Poster presentations + networking | All participants | Gather |
1-hour Break | |||||
3:15 PM – 4:00 PM | Expert panel | 45 min | Panel discussion focused on roadblocks to developing, implementing and enforcing AI best practices, standards, and regulations | Maksim Karliuk (UNESCO) Joshua P. Meltzer (Brookings) Baobao Zhang (Syracuse University) | Zoom |
15-min Break | |||||
4:15 PM – 4:45 PM | Breakout discussions | 30 min | Discussions in smaller groups to develop a Fair-AI best-practices wishlist | All participants | Zoom |
4:45 PM – 5:00 PM | Breakout summary reports | 15 min | Groups to report back on their breakout discussions | Breakout group moderators | Zoom |
Day 2—Theme: Effective Engagement with Stakeholders and Impacted Communities toward Fairer AI | |||||
Time (ET) | Activity | Duration | Details | Speakers | Venue |
11:00 AM – 11:05 AM | Opening Remarks | 5 min | Hoda Heidari (2 min): outline of the day | Hoda Heidari (CMU) | Zoom |
11:05 AM – 11:45 AM | Keynote speech | 30 min + 10 min Q&A | Unfairness and Bias in Multi-sided / Multistakeholder Online Platforms | Krishna Gummadi (MPI-SWS) | Zoom |
15-min break/buffer | |||||
12:00 PM – 1:00 PM | Lightning talks + joint Q&A session Theme: Human-centered Fair-AI | 7 min | Understanding and Mitigating the Biases of Face Detectors | Prateek Singhal (Amazon) | Zoom |
7 min | Using AI to Increase Fairness by Improving Access to Justice | Kevin Ashley | Zoom | ||
7 min | Fair AI in Public Policy - Achieving Fair Societal Outcomes in ML Applications to Education, Criminal Justice, and Health & Human Services | Hoda Heidari | Zoom | ||
7 min | Organizing Crowd Audits to Detect Bias in Machine Learning | Jason Hong Motahhare Eslami | Zoom | ||
7 min | Fairness in Machine Learning with Human in the Loop | Yang Liu | Zoom | ||
7 min | End-to-End Fairness for Algorithm-in-the-Loop Decision-Making in the Public Sector | Daniel Neill | Zoom | ||
7 min | Towards Adaptive and Interactive Post hoc Explanations | Chenhao Tan | Zoom | ||
15-min break | |||||
1:15 PM – 2:00 PM | Poster and networking session | 45 min | Poster presentations + meet and greet | All participants | Gather |
1-hour break | |||||
3:00 PM – 3:45 PM | Expert panel | 45 min | Panel discussion focused on improving the policy and practical impact of Fair-AI | Maria De-Arteaga (UT Austin) Alex Engler (Brookings) Sorelle Friedler (OSTP) David Robinson (UC Berkeley) | Zoom |
15-min break/buffer | |||||
4:00 PM – 4:30 PM | Breakout discussions | 30 min | Discussions in smaller groups to develop a list of desiderata for effective engagement with stakeholders and affected communities | All participants | Zoom |
4:30 PM – 4:45 PM | Breakout summary reports | 15 min | Groups to report back on their breakout discussions | Breakout group moderators | Zoom |
4:45 PM – 5:00 PM | Concluding remarks and next steps | Up to 15 min | Summarizing the takeaways of the event and next steps to produce a white paper | Hoda Heidari | Zoom |
Contact: Please email Hoda Heidari with any questions or comments about this event.