Stacy Kish | Tuesday, January 16, 2024
In the 2010s, Americans invited new friends into their homes: Siri and Alexa. These personal intelligent agents use algorithms to adapt to users' preferences and exhibit humanlike characteristics to ease their integration into daily life. While Siri and Alexa stood at the leading edge of the artificial intelligence revolution, the technology's continued evolution raises big questions about how AI will actually benefit society.
Carnegie Mellon University faculty members Hoda Heidari and Alex John London teamed up to model what sound like simple questions: what does it take for an AI system to benefit a user, and what moral pitfalls arise when those conditions for benefit are not met? In the study, London and Heidari present a framework for building technology that is beneficent, or "doing good," by design, putting the human at the center and helping a person live a life that expresses their considered goals and values.
"This is a deep, humanistic paradox," said London, the K&L Gates Professor of Ethics and Computational Technologies in the Dietrich College of Humanities and Social Sciences' Department of Philosophy. "Everyone wants the good life but there is no single recipe for a good life, and people have broad disagreements and variation in how they spend their time."
According to London, without a good sense of what the "human good" actually is, developers often adopt a proxy to make AI systems work. But in some cases, a system can satisfy that proxy while providing only trivial benefits or even harming users. Determining who benefits from a new system complicates matters further. The concern today is that AI systems will benefit companies and generate profits but not help individual users or society at large.
"The fact that people have different conceptions of the 'good life' is precisely what our work draws attention to," said Heidari, the K&L Gates Career Development Professor in Ethics and Computational Technologies in the School of Computer Science. "We encourage developers and creators of AI to understand the values and life plans of their targeted users and those potentially impacted by their AI products — starting from the early stages of design of new AI systems."
In their work, London and Heidari shift the focus from making AI products that advance a company's goals to advancing the goals of the individual. Using this approach, they have formalized a network of ethical concepts and entitlements necessary for AI systems to confer meaningful benefit and avoid the pitfalls of deception, paternalism, coercion, exploitation and domination.
Their work draws attention to how principles such as fairness, accountability, transparency, safety and reliability, security, and privacy typically become the focus only after an AI system has been built. They contend that other principles, such as beneficence, must be front and center in the earlier stages of ideation and design.
Moving forward, London and Heidari will examine how the people who develop AI algorithms incorporate these ethical considerations into future designs. This new approach to AI could benefit individuals and larger groups, and it could connect justice and fairness to design. They plan to run workshops to disseminate the framework, apply it to ongoing AI projects, and measure how effectively the approach addresses ethical issues.
Their findings are published on arXiv, a preprint server for articles posted prior to peer review.
Read more on the Dietrich College News website.
Aaron Aupperlee | 412-268-9068 | aaupperlee@cmu.edu