The analysis of crowdsourced data can be treated as a cognitive modeling problem, with the goal of accounting for how and why people produced the behaviors that were observed. We explore this cognitive approach in a series of examples, involving Thurstonian models of ranking, calibration models of probability estimation, and attention and similarity models of category learning. Many of the demonstrations use crowdsourced data from ranker.com. Some involve "wisdom of the crowd" predictions, while others aim to describe and explain the structure of people's opinions. Throughout the talk, we emphasize the tight interplay between theory and application, highlighting not just when existing cognitive theories and models can help address crowdsourcing problems, but also when real-world applications demand solutions to new basic research challenges in the cognitive sciences.

Michael Lee is a Professor of Cognitive Sciences at the University of California, Irvine. His research focuses on modeling cognitive processes, especially of decision making, and the Bayesian implementation, evaluation, and application of those models. He has published over 150 journal and conference papers, and is the co-author of the graduate textbook "Bayesian cognitive modeling: A practical course". He is a former President of the Society for Mathematical Psychology, a winner of the William K. Estes award of that society, and a winner of the best applied paper award from the Cognitive Science Society. Before moving to the U.S., he worked as a senior research scientist for the Australian Defence Science and Technology Organisation, and has consulted for the Australian and US DoD, as well as various universities and companies, including the crowdsourcing platform Ranker.

Computers are now ubiquitous. However, computers and digital content have remained largely separate from the physical world – users explicitly interact with
computers through small screens and input devices, and the “virtual world” of digital content has had very little overlap with the practical, physical world. My thesis work is concerned with helping computing escape the confines of screens and devices, to spill digital content out into the physical world around us. In this way, I aim to help bridge the gap between the information-rich digital world and the familiar environment of the physical world and allow users to interact with digital content as they would ordinary physical content.

I approach this problem from many angles: from the low-level work of providing high-fidelity touch interaction on everyday surfaces, easily transforming these surfaces into enormous touchscreens; to the high-level questions surrounding interaction design across physical and virtual realms. To achieve this end, building on my prior work, I propose two physical embodiments of this new mixed-reality design: a lightbulb-sized infobulb capable of projecting an interaction zone onto everyday environments, and an augmented-reality head-mounted display modified to support touch interaction on arbitrary surfaces.

Thesis Committee:
Chris Harrison (Co-Chair)
Scott E. Hudson (Co-Chair)
Jodi Forlizzi
Hrvoje Benko (Microsoft Research)

Copy of Proposal Document

This talk focuses on how valorized forms of work become models of citizenship. Today, the halls of TED and Davos reverberate with optimism that hacking, brainstorming, and crowdsourcing can transform citizenship, development, and education alike. I will examine these claims ethnographically and historically, with an eye towards the kinds of social orders these practices rely on and produce. I focus on a hackathon, one emblematic site of social practice where techniques and work processes from information technology production become ways of remaking culture and mediating progress.

Lilly Irani is an Assistant Professor of Communication & Science Studies at the University of California, San Diego. Her work examines and intervenes in the cultural politics of high-tech work. She is a co-founder and maintainer of the digital labor activism tool Turkopticon. She is currently writing a book on the cultural politics of innovation and development in transnational India. She has published her work in New Media & Society, South Atlantic Quarterly, and Science, Technology & Human Values, as well as at SIGCHI and CSCW. Her work has also been covered in The Nation, The Huffington Post, and NPR. Previously, she spent four years as a User Experience Designer at Google. She has a B.S. and an M.S. in Computer Science, both from Stanford University, and a PhD in Informatics from UC Irvine.

Faculty Host: John Zimmerman

Mobile and ubiquitous computing research has led to new techniques for cheaply, accurately, and continuously collecting data on human behavior that include detailed measurements of physical activities, social interactions and conversations, sleep quality and duration, and more. Continuous and unobtrusive sensing of behaviors has tremendous potential to support the lifelong management of mental health by: (1) acting as an early warning system to detect changes in mental well-being, (2) delivering context-aware, personalized micro-interventions to patients when and where they need them, and (3) significantly accelerating patients' understanding of their illness. In this presentation, I will give an overview of our work on turning sensor-enabled mobile devices into well-being monitors and instruments for administering real-time/real-place interventions.

Tanzeem Choudhury is an associate professor in Computing and Information Sciences at Cornell University and a co-founder of HealthRhythms. At Cornell, she directs the People-Aware Computing group, which works on inventing the future of technology-assisted well-being. Tanzeem received her PhD from the Media Laboratory at MIT. She has been awarded the MIT Technology Review TR35 award, an NSF CAREER award, and a TED Fellowship. Follow the group's work on Twitter @pac_cornell.



The “P” in AMPLab stands for “People”, and an important research thrust in the lab was on integrating human processing into analytics pipelines. Starting with the CrowdDB project on human-powered query answering and continuing into the more recent SampleClean and AMPCrowd/Clamshell projects, we have been investigating ways to maximize the benefit that can be obtained through involving people in data collection, data cleaning, and query answering.  In this talk I will present an overview of these projects and discuss some future directions for hybrid cloud/crowd data-intensive applications and systems.

Michael J. Franklin is the Liew Family Chair of Computer Science and Sr. Advisor to the Provost for Computation and Data at the University of Chicago where his research focuses on database systems, data analytics, data management and distributed computing systems.  Franklin previously was the Thomas M. Siebel Professor and chair of the Computer Science Division of the EECS Department at the University of California, Berkeley.   He co-founded and directed Berkeley’s Algorithms, Machines and People Laboratory (AMPLab), which created industry-changing open source Big Data software such as Apache Spark and BDAS, the Berkeley Data Analytics Stack.   At Berkeley he also served as an executive committee member for the Berkeley Institute for Data Science.  He currently serves as a Board Member of the Computing Research Association and on the NSF CISE Advisory Committee. 

Franklin is an ACM Fellow and a two-time recipient of the ACM SIGMOD “Test of Time” award. His other honors include the Outstanding Advisor award from Berkeley’s Computer Science Graduate Student Association, and the “Best Gong Show Talk” personally awarded by Andy Pavlo at this year’s CIDR conference.

Visual Metaphors are a communication tool used to draw users' attention in print media, ads, public service announcements and art. They involve blending two symbols together visually to convey a new meaning. This is a creative problem with many solutions, but some solutions have more impact and meaning to readers than others.

I will introduce the problem of visual metaphors, and describe our early stages in crowdsourcing this problem. I will discuss how we had to adapt the design process to apply to microtasks, and the lessons we have learned so far about designing media that speaks directly to readers' low-level perceptual processing.

Lydia Chilton is an assistant professor in the Computer Science Department of Columbia University in the City of New York. Actually, she won't technically start that position until July. She is currently a post-doc working with Maneesh Agrawala at Stanford University at the intersection of graphics, HCI and crowdsourcing. She has been doing crowdsourcing for ten years and is excited to see how the original goals of crowdsourcing are being realized by a large community of talented researchers.

How might we architect interactive systems that have better models of the tasks we're trying to perform, learn over time, help refine ambiguous user intents, and scale to large or repetitive workloads? In this talk I will present Predictive Interaction, a framework for interactive systems that shifts some of the burden of specification from users to algorithms, while preserving human guidance and expressive power. The central idea is to imbue software with domain-specific models of user tasks, which in turn power predictive methods to suggest a variety of possible actions. I will illustrate these concepts with examples drawn from widely-deployed systems for data transformation and visualization (with reported order-of-magnitude productivity gains) and discuss related design considerations and future research directions.

Jeffrey Heer is an Associate Professor of Computer Science & Engineering at the University of Washington, where he directs the Interactive Data Lab and conducts research on data visualization, human-computer interaction and social computing. The visualization tools developed by his lab (D3.js, Vega, Protovis, Prefuse) are used by researchers, companies and thousands of data enthusiasts around the world. His group's research papers have received awards at the premier venues in HCI (ACM CHI, UIST, CSCW) and Information Visualization (IEEE InfoVis, VAST, EuroVis). Other awards include MIT Technology Review's TR35 (2009), a Sloan Foundation Research Fellowship (2012), and a Moore Foundation Data-Driven Discovery Investigator award (2014).

Jeff holds BS, MS and PhD degrees in Computer Science from UC Berkeley, whom he then betrayed to teach at Stanford from 2009 to 2013. Jeff is also co-founder and chief experience officer of Trifacta, a provider of interactive tools for scalable data transformation.

Faculty Host: Niki Kittur


Creating robust intelligent systems that can operate in real-world settings at super-human performance levels requires a combination of human and machine contributions. Crowdsourcing has allowed us to scale the ubiquity of these human computation systems, but the challenges in mixing human and machine effort remain a limiting factor of these systems. My lab’s work on modeling crowds as collective agents  has helped alleviate some of these challenges at a system level, but how we can create cohesive ecosystems of crowd-powered tools that together solve more complex and diverse needs remains an open question. In this talk, I will discuss some initial and ongoing work that aims to create complex crowdsourcing systems for applications that cannot be solved using only a single tool.

Walter S. Lasecki is an Assistant Professor of Computer Science and Engineering at the University of Michigan, Ann Arbor, where he directs the Crowds+Machines (CROMA) Lab. He and his students create interactive intelligent systems that are robust enough to be used in real-world settings by combining both human and machine intelligence to exceed the capabilities of either. These systems let people be more productive, and improve access to the world for people with disabilities. Dr. Lasecki received his Ph.D. and M.S. from the University of Rochester in 2015 and a B.S. in Computer Science and Mathematics from Virginia Tech in 2010. He has previously held visiting research positions at CMU, Stanford, Microsoft Research, and Google[x].

Much of HCI research involves asking people questions, through interviews, surveys, design sessions, evaluation studies, voting, polling, and so on. We choose our methods depending on what we want to find out. However, there is also increasing evidence showing how the use of different media and form factors can affect how much people are willing to share, what they say, and how honest they are. For example, studies have shown that people reveal more about their habits when filling in an online form compared with a paper-based one; students have been found to rate their instructors less favorably when online; and people may divulge more when interacting with bots or robots compared with talking to people.

In my talk, I will describe a program of research we have been conducting over the last few years, in which we have been investigating how physicality and embodied interaction can be used to good effect, widening participation, encouraging reflection, and helping scientists make sense of data. At the same time, we have been using breaching experiments, design fiction, and artistic probes to elicit responses and reactions to more edgy and elusive topics. In so doing, we have been subjected to considerable criticism, with critics arguing that we are irresponsible and have gone too far in our methods when using technology. While it is easy to play it safe and hide behind IRB walls, I will argue that it is imperative that we take more risks in our research if we want to answer the difficult questions about how technology design affects people's lives.

Professor Yvonne Rogers is the director of the Interaction Centre at UCL (UCLIC), and a deputy head of the Computer Science department at UCL. She is the Principal Investigator for the Intel-funded Urban IoT collaborative research Institute at UCL. Her research interests lie at the intersection of physical computing, interaction design, and human-computer interaction. Much of her work is situated in the wild, concerned with informing, building, and evaluating novel user experiences through creating and assembling a diversity of technologies (e.g., tangibles, internet of things) that augment everyday learning, community engagement, and collaborative work activities. She has been instrumental in promulgating new theories (e.g., external cognition), alternative methodologies (e.g., in-the-wild studies), and far-reaching research agendas (e.g., the "Being Human: HCI in 2020" manifesto), and has pioneered an approach to innovation and ubiquitous learning. She is a co-author of the definitive textbook on Interaction Design and HCI, now in its 4th edition, which has sold over 150,000 copies worldwide and has been translated into 6 languages. She is a fellow of the BCS and the ACM CHI Academy.


