HCII

How might we architect interactive systems that have better models of the tasks we're trying to perform, learn over time, help refine ambiguous user intents, and scale to large or repetitive workloads? In this talk I will present Predictive Interaction, a framework for interactive systems that shifts some of the burden of specification from users to algorithms, while preserving human guidance and expressive power. The central idea is to imbue software with domain-specific models of user tasks, which in turn power predictive methods to suggest a variety of possible actions. I will illustrate these concepts with examples drawn from widely-deployed systems for data transformation and visualization (with reported order-of-magnitude productivity gains) and discuss related design considerations and future research directions.
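
To make the framing concrete, here is a minimal, purely illustrative sketch (in Python, and not the actual Trifacta/Wrangler implementation) of a predictive-interaction loop: candidate actions are enumerated from a small domain-specific vocabulary of transforms, scored against the user's example interaction, and surfaced as ranked suggestions that the user can accept or refine. The transform names and the scoring rule are assumptions invented for this example.

    # Illustrative sketch of predictive interaction: the user gives a partial,
    # ambiguous specification (here, highlighting a substring) and the system
    # proposes ranked candidate transforms for the user to accept or refine.
    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Transform:
        name: str                      # human-readable description
        apply: Callable[[str], str]    # how the transform rewrites a value

    def suggest(value: str, selection: str, candidates: List[Transform], k: int = 3):
        """Rank candidate transforms by how well they explain the user's selection."""
        def score(t: Transform) -> float:
            out = t.apply(value)
            # Hypothetical scoring: prefer transforms whose output reproduces
            # the substring the user highlighted.
            return 1.0 if out == selection else (0.5 if selection in out else 0.0)
        return sorted(candidates, key=score, reverse=True)[:k]

    # Example: the user highlights the year inside a date string.
    candidates = [
        Transform("split on '-' and keep the first token", lambda v: v.split("-")[0]),
        Transform("keep the last two characters", lambda v: v[-2:]),
        Transform("uppercase the value", lambda v: v.upper()),
    ]
    for t in suggest("2016-03-14", "2016", candidates):
        print(t.name)

In a deployed system the candidate language is far richer and the ranking model is learned from usage data; the point here is only the division of labor, with the algorithm proposing and the human guiding.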

Jeffrey Heer is an Associate Professor of Computer Science & Engineering at the University of Washington, where he directs the Interactive Data Lab and conducts research on data visualization, human-computer interaction and social computing. The visualization tools developed by his lab (D3.js, Vega, Protovis, Prefuse) are used by researchers, companies and thousands of data enthusiasts around the world. His group's research papers have received awards at the premier venues in HCI (ACM CHI, UIST, CSCW) and Information Visualization (IEEE InfoVis, VAST, EuroVis). Other awards include MIT Technology Review's TR35 (2009), a Sloan Foundation Research Fellowship (2012), and a Moore Foundation Data-Driven Discovery Investigator award (2014).

Jeff holds BS, MS and PhD degrees in Computer Science from UC Berkeley, and was an assistant professor at Stanford University from 2009 to 2013 before moving to the University of Washington. Jeff is also co-founder and chief experience officer of Trifacta, a provider of interactive tools for scalable data transformation.

Faculty Host: Niki Kittur


Creating robust intelligent systems that can operate in real-world settings at super-human performance levels requires a combination of human and machine contributions. Crowdsourcing has allowed us to scale these human computation systems, but the challenges of mixing human and machine effort remain a limiting factor. My lab's work on modeling crowds as collective agents has helped alleviate some of these challenges at the system level, but how to create cohesive ecosystems of crowd-powered tools that together address more complex and diverse needs remains an open question. In this talk, I will discuss initial and ongoing work that aims to create complex crowdsourcing systems for applications that cannot be solved using only a single tool.
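
As an illustrative aside, one recurring building block in crowd-agent systems of this kind is an input mediator that fuses many workers' simultaneous inputs into a single stream of actions for the collective agent. The sketch below (Python; the majority-vote strategy and names are my own simplification, not the lab's actual mediation algorithms) shows the simplest possible mediator.

    # Toy "input mediator" for a crowd agent: several workers propose an action
    # for the same step, and the collective agent acts only when enough of them
    # agree on the same proposal.
    from collections import Counter
    from typing import List, Optional

    def mediate(proposals: List[str], quorum: float = 0.5) -> Optional[str]:
        """Return the majority action if it clears the quorum, else defer."""
        if not proposals:
            return None
        action, votes = Counter(proposals).most_common(1)[0]
        return action if votes / len(proposals) >= quorum else None

    # Example: five workers suggest the next step for a shared task.
    print(mediate(["scroll_down", "scroll_down", "click_link",
                   "scroll_down", "click_link"]))  # -> scroll_down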

Walter S. Lasecki is an Assistant Professor of Computer Science and Engineering at the University of Michigan, Ann Arbor, where he directs the Crowds+Machines (CROMA) Lab. He and his students create interactive intelligent systems that are robust enough to be used in real-world settings by combining human and machine intelligence to exceed the capabilities of either. These systems let people be more productive and improve access to the world for people with disabilities. Dr. Lasecki received his Ph.D. and M.S. from the University of Rochester in 2015 and a B.S. in Computer Science and Mathematics from Virginia Tech in 2010. He has previously held visiting research positions at CMU, Stanford, Microsoft Research, and Google[x].

Much of HCI research involves asking people questions, whether through interviews, surveys, design sessions, evaluation studies, voting, or polling. We choose our methods depending on what we want to find out. However, there is also increasing evidence that the media and form factors we use can affect how much people are willing to share, what they say, and how honest they are. For example, studies have shown that people reveal more about their habits when filling in an online form than a paper-based one; students have been found to rate their instructors less favorably online; and people may divulge more when interacting with bots or robots than when talking to people.

In my talk, I will describe a program of research we have been conducting over the last few years, investigating how physicality and embodied interaction can be used to good effect: widening participation, encouraging reflection, and helping scientists make sense of data. At the same time, we have been using breaching experiments, design fiction, and artistic probes to elicit responses and reactions to edgier and more elusive topics. In doing so, we have been subjected to considerable criticism, with critics arguing that we are irresponsible and have gone too far in our methods when using technology. While it is easy to play safe and hide behind IRB walls, I will argue that it is imperative that we take more risks in our research if we want to answer the difficult questions about how technology design affects people's lives.

Professor Yvonne Rogers is the director of the Interaction Centre at UCL (UCLIC) and a deputy head of the Computer Science department at UCL. She is the Principal Investigator for the Intel-funded Urban IoT collaborative research Institute at UCL. Her research interests lie at the intersection of physical computing, interaction design, and human-computer interaction. Much of her work is situated in the wild, concerned with informing, building, and evaluating novel user experiences by creating and assembling a diversity of technologies (e.g., tangibles, the internet of things) that augment everyday life, learning, community engagement, and collaborative work activities. She has been instrumental in promulgating new theories (e.g., external cognition), alternative methodologies (e.g., in-the-wild studies), and far-reaching research agendas (e.g., the "Being Human: HCI in 2020" manifesto), and has pioneered an approach to innovation and ubiquitous learning. She is a co-author of the definitive textbook on interaction design and HCI, now in its 4th edition, which has sold over 150,000 copies worldwide and has been translated into six languages. She is a fellow of the BCS and the ACM CHI Academy.


As digital interaction spreads to an increasing number of devices, direct physical manipulation has become the dominant metaphor in HCI. The promise made by this approach is that digital content will look, feel, and respond like content from the real world. Current commercial systems fail to keep that promise, leaving a broad gulf between what users are led to expect and what they see and feel. In this talk, Daniel will discuss two areas where his lab has been making strides to address this gap. First, in the area of passive haptics, he will describe technologies intended to enable users to feel virtual content, without having to wear gloves or hold “poking” devices. Second, in the area of systems performance, he will describe his team’s work in achieving nearly zero latency responses to touch and stylus input.

Daniel Wigdor is an associate professor of computer science and co-director of the Dynamic Graphics Project at the University of Toronto. His research is in the area of human-computer interaction, with major areas of focus in the architecture of highly performant UIs, development methods for ubiquitous computing, and post-WIMP interaction methods. Before joining the faculty at U of T in 2011, Daniel was a researcher at Microsoft Research, the user experience architect of the Microsoft Surface Table, and a company-wide expert in user interfaces for new technologies. Simultaneously, he served as an affiliate assistant professor in both the Department of Computer Science & Engineering and the Information School at the University of Washington. Prior to 2008, he was a fellow at the Initiative in Innovative Computing at Harvard University, and conducted research as part of the DiamondSpace project at Mitsubishi Electric Research Labs. He is co-founder of Iota Wireless, a startup dedicated to the commercialization of his research in mobile-phone gestural interaction, and of Tactual Labs, a startup dedicated to the commercialization of his research in high-performance, low-latency user input.

For his research, he has been awarded an Ontario Early Researcher Award (2014) and the Alfred P. Sloan Foundation’s Research Fellowship (2015), as well as best paper awards or honorable mentions at CHI 2016, CHI 2015, CHI 2014, Graphics Interface 2013, CHI 2011, and UIST 2004. Three of his projects were selected as the People’s Choice Best Talks at CHI 2014 and CHI 2015.

Faculty Host: Chris Harrison

Daniel is the co-author of Brave NUI World: Designing Natural User Interfaces for Touch and Gesture, the first practical book for the design of touch and gesture interfaces. He has also published dozens of other works as invited book chapters and papers in leading international publications and conferences, and is an author of over three dozen patents and pending patent applications. Daniel is sought after as an expert witness, and has testified before courts in the United Kingdom and the United States.

There is a proliferation of websites and mobile apps for helping people learn new concepts (e.g. online courses) and change health habits and behavior (e.g. websites for reducing depression, apps for quitting smoking). How can we use data from real-world users to rapidly enhance and personalize these technologies? I show how we can build self-improving systems by reimagining randomized A/B experimentation as an engine for collaboration, dynamic enhancement, and personalization. I present a novel system that enhanced learning from math problems by crowdsourcing explanations and automatically experimenting to discover which were best. My second application boosted responses to an email campaign by experimentally discovering how to personalize motivational messages to a user's activity level. These self-improving systems use experiments as a bridge between designers, social-behavioral scientists, and researchers in statistical machine learning.
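
One concrete way such a self-improving system can "automatically experiment" is with a multi-armed bandit such as Thompson sampling, which shows better-performing explanations to more learners while still exploring the alternatives. The sketch below is illustrative only: the arm names and helpfulness probabilities are invented, and the deployed system's details differ.

    # Thompson sampling over candidate explanations: each explanation keeps a
    # Beta posterior over "probability this explanation helps", and the system
    # shows the explanation whose sampled value is highest.
    import random

    class ExplanationArm:
        def __init__(self, name: str):
            self.name = name
            self.successes = 1  # Beta(1, 1) prior
            self.failures = 1

        def sample(self) -> float:
            return random.betavariate(self.successes, self.failures)

        def update(self, helped: bool) -> None:
            if helped:
                self.successes += 1
            else:
                self.failures += 1

    def choose(arms):
        """Pick the explanation whose sampled helpfulness is highest."""
        return max(arms, key=lambda a: a.sample())

    # Simulated learners; explanation B is (hypothetically) the most helpful.
    true_help = {"A": 0.3, "B": 0.6, "C": 0.4}
    arms = [ExplanationArm(n) for n in true_help]
    for _ in range(1000):
        arm = choose(arms)
        arm.update(random.random() < true_help[arm.name])
    best = max(arms, key=lambda a: a.successes / (a.successes + a.failures))
    print(best.name)  # usually "B"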

Joseph Jay Williams is a Research Fellow at Harvard's Office of the Vice Provost for Advances in Learning, and a member of the Intelligent Interactive Systems Group in Computer Science. He completed a postdoc at Stanford University in the Graduate School of Education in Summer 2014, working with the Office of the Vice Provost for Online Learning and the Open Learning Initiative. He received his PhD from UC Berkeley in Computational Cognitive Science, where he applied Bayesian statistics and machine learning to model how people learn and reason. He received his B.Sc. from the University of Toronto in Cognitive Science, Artificial Intelligence and Mathematics, and is originally from Trinidad and Tobago.


Anand Kulkarni is co-founder and Chief Scientist of LeadGenius, a startup backed by Y Combinator, Sierra Ventures, and Lumia Capital that uses human computation and deep learning to automate account-based marketing (ABM). LeadGenius has raised over $20M in venture funding and developed best-in-class marketing automation technology used by Fortune 500 customers such as Google, eBay, and Box. In conjunction with nonprofits such as the World Bank, LeadGenius generates fairly paid digital employment for over 500 individuals in 40 countries.

Anand has been named one of Forbes Magazine's "30 Under 30" top entrepreneurs. He has published over a dozen papers in ACM, AAAI, and IEEE magazines, journals, and conferences, and previously held a National Science Foundation graduate research fellowship in mathematics. He holds degrees in Industrial Engineering and Operations Research, Mathematics, and Physics from UC Berkeley.

Crowdsourcing envisions computational systems that enable complex collective achievements. However, today's crowdsourcing techniques are limited to goals so simple and modular that their path can be entirely pre-defined. In this talk, I describe crowdsourcing techniques that enable far more complex and open-ended goals, including product design, software engineering, and research. These techniques fluidly assemble participants into flash organizations and continuously adapt their efforts, convene on-demand teams that maximize member familiarity under unpredictable availability and strict time constraints, and coordinate thousands of volunteers around the world in pursuing and publishing top-tier research. This work argues for a shift away from a reductive worldview of crowdsourcing as microtasking or one-off competitions, and toward computational systems that proactively aid large groups in working together nimbly, reactively, and effectively toward complex goals.
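
To make the team-convening objective concrete, the toy sketch below formalizes it as: from the workers who happen to be available right now, greedily assemble a team of size k that maximizes total pairwise familiarity (e.g., hours previously worked together). The data, names, and greedy heuristic are assumptions for illustration, not the actual system's algorithm.

    # Greedy team convening: repeatedly add the available worker who is most
    # familiar with the team assembled so far.
    from typing import Dict, FrozenSet, List, Set

    def convene(available: List[str],
                familiarity: Dict[FrozenSet[str], float],
                k: int) -> Set[str]:
        team: Set[str] = set()
        while len(team) < min(k, len(available)):
            best = max((w for w in available if w not in team),
                       key=lambda w: sum(familiarity.get(frozenset((w, m)), 0.0)
                                         for m in team))
            team.add(best)
        return team

    # Example: pairwise hours previously worked together among available workers.
    hours = {frozenset(pair): h for pair, h in [
        (("ana", "bo"), 12), (("ana", "cy"), 3), (("bo", "cy"), 8), (("bo", "dee"), 1),
    ]}
    print(convene(["ana", "bo", "cy", "dee"], hours, k=3))  # -> {'ana', 'bo', 'cy'}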

Michael Bernstein is an Assistant Professor of Computer Science at Stanford University, where he is a member of the Human-Computer Interaction group. His research focuses on the design of crowdsourcing and social computing systems. This work has received six Best Paper awards and twelve honorable mentions at premier venues in human-computer interaction. Michael has been recognized as a Robert N. Noyce Family Faculty Scholar, and has received an NSF CAREER award, Alfred P. Sloan Fellowship, and Outstanding Academic Title citation from the American Library Association. He holds a bachelor's degree in Symbolic Systems from Stanford University, and a master's and Ph.D. in Computer Science from MIT.

Faculty Host: Niki Kittur

The goal of the learning sciences is not only to understand the phenomena of learning, but also to impact educational practices and enable more effective learning. To meet these goals, learning scientists use iterative, design-based methods as they develop curriculum approaches, learning technologies, and technology-rich learning environments. Until recently, however, the learning sciences community has not treated the design of artifacts for supporting learning as a formal practice, discipline, and field of research. Nowhere is this oversight more evident than with regard to engaging stakeholders actively in the design process. Participatory Design (PD) is a field of research and design that examines how stakeholders can participate with designers in the development of tools, artifacts, and activities that are important to the user group.

In this talk, DiSalvo will explore how participatory design can inform the development of learning technology. She will address how participatory design can help create value-driven learning experiences: through formative participatory design work with marginalized groups, through probes that build a better understanding of how to leverage everyday technology practices for learning, and through meta-design that scaffolds students in incorporating their values into the learning experience.

Dr. Betsy DiSalvo is an Assistant Professor in the School of Interactive Computing at the Georgia Institute of Technology. At Georgia Tech she leads the Culture and Technology (CAT) Lab, which studies cultural values and how they impact technology use, learning, and production. Currently, the CAT Lab is exploring parents' use of technology for informal learning. In its first stages, this research is developing an understanding of how and why parents do or do not choose to use computers, mobile devices, and other technology for learning.

DiSalvo is also the PI for an NSF-funded project exploring how maker-oriented learning approaches may increase transfer and reflection in undergraduate computer science courses, and for related projects that tie art and technology together to increase learning across disciplines. DiSalvo's work has included the development of the Glitch Game Tester Program and projects for the Carnegie Science Museum, the Children's Museum of Atlanta, the Children's Museum of Pittsburgh, Eyedrum Art Center, and the Walker Art Center. DiSalvo received a Ph.D. in Human Centered Computing from Georgia Tech in 2012. Before coming to Georgia Tech she was a research scientist at the University of Pittsburgh Learning Research and Development Center.


Collective creativity systems often include modification and recombination mechanisms, forms of remixing. In addition, metasystems can accelerate the exploration of design space. For example, customizers create design families by defining parameter ranges. These customizers can in turn be modified. The effect of this reuse for customization is analyzed in the 3D printing community Thingiverse. The study focuses on the design artifacts and their inheritance history, as well as their shape and semantic distance from each other. Some ways of using coordination and computation to catalyze design space exploration are discussed, with examples from several online communities.
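
As a purely illustrative example of what a "semantic distance" between designs could look like, the sketch below uses Jaccard distance over tag sets; the study's actual shape and semantic measures may differ, and the tags here are invented.

    # Jaccard distance between two designs' tag sets: 0 means identical tags,
    # 1 means no tags in common.
    def jaccard_distance(tags_a: set, tags_b: set) -> float:
        if not tags_a and not tags_b:
            return 0.0
        return 1.0 - len(tags_a & tags_b) / len(tags_a | tags_b)

    # Example: a customized phone case vs. the parent design it was remixed from.
    parent = {"phone", "case", "iphone", "customizer"}
    child = {"phone", "case", "galaxy", "customizer", "textured"}
    print(round(jaccard_distance(parent, child), 2))  # -> 0.5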

Jeffrey Nickerson is a professor in the School of Business at Stevens Institute of Technology. His current research focuses on collective creativity: through observation and experiment, he seeks to understand how systems composed of humans and machines can explore design space. He has a Ph.D. from New York University in Computer Science, as well as an M.F.A. in Graphic Design from Rhode Island School of Design. He is now working on two NSF-funded projects related to online community-based design and problem solving.

