Human-Computer Interaction Seminar
- Newell-Simon Hall
- ADRIAN GROPPER, M.D.
- Chief Technology Officer
- Patient Privacy Rights Foundation
The case for self-sovereign personal AI
Overview: Adrian has developed a technical architecture for managing personal privacy that keeps control with the user and away from tech platforms and corporations. It can work across healthcare, IoT, recommendation engines, social media, etc. He takes on the central question of Shoshana Zuboff's recent book on "surveillance capitalism" — "Who decides who decides?" — with regard to data usage. Designers must understand the AI component of the architecture in terms of how and why the AI communicates with the user, who then delegates permissions to the AI. It's an important example of AI as material and of the responsibilities of designers in relation to it.
Full Abstract: Whether it’s a smartphone that filters notifications or a brain implant that manages a neurological problem, connected personal technology tests the definition and limits of “self.” Our human identity is a combination of attributes managed by ourselves and attributes that relate to us but are managed by others, often without our knowledge or informed consent.
Today, there is no practical way to evaluate requests for the use of our personal data, nor a clear solution for designers who wish to incorporate privacy management at the core of their products and services. At a time when platforms such as Facebook are under scrutiny for how they use personal data, a self-sovereign agent can begin to rectify the vast asymmetry of power between the platform and the data subject.
A self-sovereign agent has a fiduciary relationship with the human subject that mirrors the relationship a physician has with a patient. Identity exposed by a self-sovereign agent is designed to minimize the leakage of personal information as individuals engage with other people and entities, both public and private. However, the cost of managing personal information "manually" for each of the dozens or hundreds of entities we encounter every week is unsustainable. Therefore, individuals need a way to automate the vast majority of decisions that are directly linked to their identity, inviting the integration of AI as a decision agent into the design of human-computer systems. Machine learning could balance social intent against the privacy of interactions that we typically reserve for our real-time, conscious selves in conversation.
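The delegation described above — a user-defined policy that lets an agent grant, deny, or escalate data-use requests — can be sketched in a few lines. This is a purely illustrative sketch, not Adrian's actual architecture; the request fields, purposes, and requester names are hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DataRequest:
    requester: str        # who is asking for the data (hypothetical identifier)
    purpose: str          # e.g. "treatment", "marketing", "research"
    attributes: frozenset # which personal attributes are requested

def decide(request: DataRequest, trusted_requesters: set) -> str:
    """A toy user policy: refuse marketing outright, auto-grant trusted
    clinicians access for treatment, and defer everything else to the
    user's real-time, conscious judgment."""
    if request.purpose == "marketing":
        return "deny"
    if request.purpose == "treatment" and request.requester in trusted_requesters:
        return "grant"
    return "ask-user"  # escalate: the agent cannot decide on its own

trusted = {"dr-lee-clinic"}
print(decide(DataRequest("dr-lee-clinic", "treatment", frozenset({"allergies"})), trusted))
print(decide(DataRequest("ad-network", "marketing", frozenset({"location"})), trusted))
```

The point of the sketch is the third branch: most requests fall outside the user's explicit rules, which is exactly where machine learning would take over from hand-written policy.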
It could be argued that privacy is now a limiting factor in computer science. This talk is about the interaction between self-sovereign identity and self-sovereign technology standards using examples from health records and “connected home” IoT devices.
Adrian Gropper, MD is CTO of the non-profit Patient Privacy Rights Foundation, where he brings training as an engineer from MIT and as a physician from Harvard Medical School, followed by a career as a medical device entrepreneur. He founded three regulated medical diagnostics businesses, including AMICAS, the first Web-based radiology image network and the first to provide imaging links in electronic health records. He participated in the founding of many healthcare interoperability initiatives, including Blue Button, Direct Project, and Health Relationship Trust (HEART), and he speaks frequently on privacy engineering in health care. His paper won a prize at ONC's 2016 Blockchain Health competition. His current project, Trustee by HIE of One (Health Information Exchange of One), uses public blockchains, standards, and free software to enable patient-controlled independent health records that can span a lifetime. This reference implementation informs emerging blockchain standards development for identity, credentials, and reputation with groups that include W3C, IEEE, Kantara, the OpenID Foundation, and others.
Faculty Host: Paul Pangaro