HCII

The applications we create are framed by the tools we use to create them. On one hand, tools codify effective practice and empower design. On the other, that same codification eventually constrains design. My research examines new approaches to interactive systems in light of this tradeoff, often with an emphasis on unlocking existing codifications to enable new designs. This talk will focus on three examples:

  • I will first present our work on unlocking data with interactive machine learning. Dominant models of interaction fail to support expressiveness and control in many emerging forms of everyday data. Exploring such domains as web image search and gesture recognition, our work shows how interactive machine learning can support people in extending the underlying language of an interaction.
  • I will then present our work on using pixel-based reverse engineering to unlock existing graphical interfaces, allowing runtime modification of those interfaces without their source. Pixel-based methods allow prototyping new possibilities atop the existing ecosystem of applications and tools, accelerating innovation and informing the next-generation ecosystem.
  • Finally, I will consider how these challenges combine in the emergence of self-tracking and personal informatics. Data is no longer a distant concept, but an everyday barrier to interaction, self-knowledge, and personal empowerment. The tools we create to support these applications will define the future of everyday interaction with personal data.

Given these examples, I argue research must consider not only specific applications, but also the assumptions codified by underlying tools and how those tools frame our understanding of what application designs are even possible.

James Fogarty is an Associate Professor of Computer Science & Engineering at the University of Washington. His broad research interests are in engineering interactive systems, often with a focus on the role of tools in developing, deploying, and evaluating new approaches to the human obstacles surrounding everyday adoption of ubiquitous computing and intelligent interaction. He is also Director of the DUB Group, the University of Washington's cross-campus initiative advancing research and education in Human-Computer Interaction and Design.

Faculty Host: Scott Hudson

Though one of the main benefits of educational technologies is the opportunity for personalization, when it comes to culture, most media is 'one size fits all.' There is much debate about whether and how to integrate students' native cultural behaviors into the classroom, particularly regarding the use of non-Standard English dialects that are heavily stigmatized within contemporary society. Over the past five years, we have examined the impact of using students' native dialects of English within the design of a social educational technology called a Virtual Peer (VP). We have found that African American students who speak a dialect of English called African American Vernacular English (AAVE) perform better in science after speaking with a VP that uses both AAVE and Standard English, rather than one that uses Standard English exclusively. Furthermore, those students who worked with a bidialectal VP were less likely to demonstrate agent abuse, a factor that was negatively correlated with students' science performance.

In the proposed research, we will augment our current understanding of this phenomenon by collecting data on students' perceived social relationships with the VP. We hypothesize that AAVE-speaking students who work with a bidialectal VP will report higher rapport with the agent. We additionally hypothesize that this social relationship score will be a significant predictor of students' science performance. The primary contribution of this work is (1) to demonstrate the impact of one particular culturally-based design choice, dialect, within an educational technology on students' resulting science performance and social behavior, and (2) to explore the role of social relationship as a mediating factor between agent dialect and student performance. This work will provide evidence toward a long-standing debate within the field of education about the role of dialect in students' learning. We believe that this work has implications not just for the future design of educational technologies, but also for the design of non-technological learning materials more broadly.

Thesis Committee:
Justine Cassell (Chair, LTI)
Amy Ogan
Marti Louw
Sandra Calvert (Georgetown University)

There are estimated to be more than a million Deaf and severely hard-of-hearing individuals living in the United States. For many of these individuals, American Sign Language (ASL) is their primary means of communication. However, for most day-to-day interactions, native ASL users must either get by with a mixture of gestures and written communication in a non-native language or seek the assistance of an interpreter. Whereas advances towards automated translation between many other languages have benefitted greatly from decades of research into speech recognition and Statistical Machine Translation, ASL's lack of aural and written components has limited exploration into automated translation of ASL.

Previous research efforts into sign language detection have met with limited success, primarily due to inaccurate handshape tracking. Without this vital component, research into ASL detection has been limited to isolated components of ASL or to restricted vocabulary sets that reduce the need for accurate handtracking. However, improvements in 3D cameras and advances in handtracking techniques provide reason to believe some of these technical sensing limitations may no longer exist. By combining state-of-the-art handtracking techniques with ASL language modeling, there is an unexplored opportunity to develop a system capable of fully capturing ASL.

In this work, I propose to develop the first ASL translation system capable of detecting all five necessary parameters of ASL (Handshape, Hand Location, Palm Orientation, Movement, and Non-Manual Features). This work will build on existing handtracking techniques and explore the features that are best capable of discriminating the 40 distinct handshapes used in ASL. An ASL language model will be incorporated into the detection algorithm to improve sign detection. Finally, the system will output a form of transcribed ASL that will allow for the separation of sign detection and ASL-to-English language translation.
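
The proposal does not specify a decoding algorithm, but the role of a language model in sign detection can be illustrated with a standard Viterbi decode, in which per-segment scores from a handshape classifier are combined with bigram sign-transition probabilities. The sketch below is purely illustrative: the signs, classifier scores, and bigram probabilities are invented placeholders, not outputs of the proposed system.

```python
# Illustrative sketch only: how a sign language model can rescore the output
# of a handshape classifier. All numbers here are invented placeholders.
import math

SIGNS = ["IX-1p", "NAME", "J-O-H-N"]

# Hypothetical per-segment classifier output: P(sign | visual features).
segment_scores = [
    {"IX-1p": 0.7, "NAME": 0.2, "J-O-H-N": 0.1},
    {"IX-1p": 0.3, "NAME": 0.4, "J-O-H-N": 0.3},
    {"IX-1p": 0.1, "NAME": 0.2, "J-O-H-N": 0.7},
]

# Hypothetical bigram sign model: P(current sign | previous sign).
bigram = {
    ("IX-1p", "IX-1p"): 0.1, ("IX-1p", "NAME"): 0.6, ("IX-1p", "J-O-H-N"): 0.3,
    ("NAME", "IX-1p"): 0.2, ("NAME", "NAME"): 0.1, ("NAME", "J-O-H-N"): 0.7,
    ("J-O-H-N", "IX-1p"): 0.4, ("J-O-H-N", "NAME"): 0.3, ("J-O-H-N", "J-O-H-N"): 0.3,
}

def viterbi(scores, lm, signs):
    """Return the sign sequence maximizing classifier score times LM score."""
    # best[t][s]: log-probability of the best path ending in sign s at time t.
    best = [{s: math.log(scores[0][s]) for s in signs}]
    back = []  # back[t][s]: best predecessor of sign s at time t + 1
    for t in range(1, len(scores)):
        best.append({})
        back.append({})
        for s in signs:
            prev, logp = max(
                ((p, best[t - 1][p] + math.log(lm[(p, s)])) for p in signs),
                key=lambda x: x[1])
            best[t][s] = logp + math.log(scores[t][s])
            back[t - 1][s] = prev
    # Trace back from the best final sign.
    seq = [max(best[-1], key=best[-1].get)]
    for bp in reversed(back):
        seq.append(bp[seq[-1]])
    return list(reversed(seq))

print(viterbi(segment_scores, bigram, SIGNS))  # ['IX-1p', 'NAME', 'J-O-H-N']
```

Here the language model overrides locally ambiguous classifier scores (the middle segment is nearly a tie) because the transition IX-1p → NAME → fingerspelled name is far more probable than the alternatives.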

Thesis Committee:
Dan Siewiorek (Chair)
Anind Dey
Carolyn Rose
Roberta Klatzky (Psych/HCII)
Asim Smailagic (ECE)

Copy of Proposal Document

Text entry is an important form of input regularly performed by computer users. However, there are many situations in which users might not be able to enter text using a physical QWERTY keyboard. One aspect of my research over the past five years has focused specifically on how to enable users to input text in alternate ways. My research in this space has resulted in a variety of novel text input methods, such as gesture-based approaches to support eyes-free text entry and dwell-free eye-typing. In this talk, however, I will discuss findings from formative studies of users' special requirements and capabilities when they are unable to use a physical keyboard; these findings suggest that users need additional support for communicating with others. I will discuss the important challenges that must be addressed in the development of novel sensor-based communication tools, as well as future directions that should be considered.

Khai Truong is an Associate Professor in the Department of Computer Science at the University of Toronto. Khai received a Ph.D. degree in Computer Science and a Bachelor's degree in Computer Engineering, with highest honors, from the Georgia Institute of Technology. He has been an active ubicomp researcher for nearly 20 years. His research interests lie at the intersection of human-computer interaction (HCI) and ubiquitous computing; he investigates tools and methods to support the development of novel ubiquitous computing systems, as well as techniques and models to facilitate user interactions with off-the-desktop computing devices and services. His current work also includes the design and evaluation of assistive technologies and context-sensing applications.

Faculty Host: Jason Hong

View the livestream

Intelligent Tutoring Systems are effective for improving students' learning outcomes. However, constructing tutoring systems that are pedagogically effective has been widely recognized as a challenging problem. What is needed is a tool that leverages prior learning science theory to support tutor design, building, and testing. The proposed thesis explores how computational models of apprentice learning, or computer models that learn interactively from worked examples and correctness feedback, can be used to support these tutor development phases.

In my prior work, I created the Apprentice Learner Architecture, which leverages a computational theory of apprentice learning to instantiate alternative models that align with the theory. I have used this architecture to search for two kinds of models: 1) models that fit human behavior and 2) efficient models. Instructional designers can use human-like models as learner "crash dummies" to simulate students interacting with the tutors. I have used one of these models to correctly predict which of two fraction tutor designs will yield better student performance. In other work, I have used efficient models to make tutor authoring easier for non-programmers. Like humans, apprentice learner models can be taught by domain experts through worked examples and feedback, as sketched below. I showed that the time needed to author an Algebra tutor by interactively training an apprentice learner model is less than half the time needed to author a tutor using another state-of-the-art authoring-by-demonstration approach.
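
The abstract does not give the architecture's actual interface, but the authoring interaction it describes can be sketched minimally as follows. The class, method names, and rote-memorizing "learner" below are hypothetical stand-ins for the architecture's real induction mechanisms; only the train-by-demonstration-and-feedback pattern is taken from the abstract.

```python
# Hypothetical sketch of the authoring loop described above; this is NOT the
# Apprentice Learner Architecture's real API. A rote memorizer stands in for
# the architecture's actual learning mechanisms, which generalize across
# problems rather than memorizing individual ones.
class SimulatedLearner:
    def __init__(self):
        self.skills = {}  # problem -> believed-correct response

    def attempt(self, problem):
        """Try to solve a problem; None means 'I don't know yet'."""
        return self.skills.get(problem)

    def demonstrate(self, problem, solution):
        """Worked example: adopt the demonstrated solution."""
        self.skills[problem] = solution

    def feedback(self, problem, response, correct):
        """Correctness feedback: keep what works, unlearn what doesn't."""
        if correct:
            self.skills[problem] = response
        elif self.skills.get(problem) == response:
            del self.skills[problem]

# The author tutors the model exactly as they would tutor a student:
learner = SimulatedLearner()
if learner.attempt("2 + 3x = 8") is None:
    learner.demonstrate("2 + 3x = 8", "x = 2")          # worked example
print(learner.attempt("2 + 3x = 8"))                     # -> "x = 2"
learner.feedback("2 + 3x = 8", "x = 2", correct=True)    # confirmation
```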

In my proposed work, I plan to develop new apprentice learner models that better fit the human tutor data than my initial models, and I aim to show the variety of ways that simulated data from these models can be used as a substitute for actual classroom data. Next, I plan to demonstrate the generality of these models by simulating student behavior in seven tutoring systems that teach multiple kinds of knowledge across multiple domains. I will use each tutor to test different aspects of my models and the computational theory underlying them. Finally, I plan to showcase the authoring capabilities of apprentice learner models by using them to author tutoring systems for two complex domains, experimental design and Python programming. Ultimately, the goal of this work is to develop a Model Human Learner—similar to Card, Moran, and Newell's (1986) Model Human Processor—that encapsulates psychological and learning science findings in a format that researchers and instructional designers can use to create effective tutoring systems.

Thesis Committee:
Ken R. Koedinger (Chair)
Vincent Aleven
John R. Anderson (Psychology/HCII)
Pat Langley, External (University of Auckland)

Copy of Proposal Document

Inspired by cybernetics and artificial intelligence researchers who modeled intelligence in hardware and software, architects in the 1960s and 70s applied computational practices to interfaces, rooms, buildings, and cities. In so doing, they began to build feedback, cognition and intelligence into their work at the level of their design processes and their interactions with the user. Some modeled cybernetics and artificial intelligence in their architecture and design projects; others engaged directly and shared funding with cyberneticists and AI researchers. The process worked the other way as well: as technologists and system designers sought to address complex problems in the real world, they turned to architecture and architectural metaphors. This collaborative, hybrid space between architecture and computation gave rise to a new, hybrid interactivity that didn’t belong to one field or set of practices alone. It poured the foundations for interaction design and HCI conventions—and has ramifications for contemporary questions on the impact of machine learning and other AI practices.

Dr. Molly Wright Steenson is an associate professor in the CMU School of Design. She is the author of the forthcoming book Architecting Interactivity (MIT Press, 2017). She also leads the Doctor of Design (DDes) program and has a courtesy appointment with the School of Architecture. Prior to CMU, she was an assistant professor of journalism at the University of Wisconsin-Madison, an adjunct faculty member at Art Center College of Design in Pasadena, and a resident associate professor at the Interaction Design Institute Ivrea in Ivrea, Italy in the early 2000s. She has worked professionally on the web for Fortune 500 companies and innovative startups since 1995. Steenson holds a PhD in architecture from Princeton University and a Master's in Environmental Design from the Yale School of Architecture.

Faculty Host: Jodi Forlizzi

At ANSYS, we create simulation software that is a key component of the product development process, helping to validate the effectiveness of designs before they are built. Simulation techniques impact all types of products, from automobiles to circuits, pipes, and airplanes. ANSYS is working toward a vision in which simulation software doesn't rest in the hands of a few expert users, but rather is democratized so that all engineers are empowered to incorporate it into their decision making on product design.

In order to achieve this vision, simulation software must change to be flexible to the needs of both the highly trained engineer who has mastered the software and the engineer who wants a simple, streamlined way to run simulations and retrieve information that will help them make a decision. Understanding these engineers' different workflows, goals, and needs, and successfully incorporating them into the next generation of simulation software, will require the user-centered design process. As at many companies that create enterprise software, user experience techniques can be foreign, and there is often confusion, resistance, and even hostility toward changing the way the software is created. We have taken a thoughtful approach to introducing aspects of user-centered design to improve and supplement the way ANSYS creates simulation software, and to help ensure our vision is achieved.

Imran Riaz is a passionate and experienced UX leader. He currently leads the UX group at ANSYS, a multi-billion-dollar software company based right here in Pittsburgh. In this role he manages a diverse, multi-disciplinary team spanning the Design Research, Product Design, and Experience Strategy practices. He has been working in the software industry for over 20 years, with an emphasis on great user experience as well as on promoting the UX discipline itself. With that purpose in mind, Imran founded the Midwest UX conference in 2010 to promote UX leadership and local UX in the Midwest. He also served as the North America Director for UXPA, and was the founding member and President of the UXPA Columbus chapter.

In his spare time, he focuses on volunteering for the promotion of educational and other worthy causes. He is a UX leader with a local presence, but with global reach. He is a public speaker on UX and leadership, and is active at local and national levels.

Faculty Host: Brad Myers

View the livestream.

ZHEN BAI
Fostering Curiosity Through Peer Support in Collaborative Science Learning

Curiosity is a key motivational factor in learning. It is, however, often neglected in many classrooms, especially larger and inner-city classrooms, which have instead become very test-oriented. This project focuses on designing learning technologies that foster and maintain curiosity, exploration, and self-efficacy in scientific inquiry. In particular, we are interested in understanding the social factors among peers that evoke students' desire for new knowledge and encourage knowledge seeking through hands-on, collaborative activities. In this talk, I will present the theoretical foundation of curiosity, followed by our ongoing work on human-human behavior analysis in small-group science learning, and discuss how this theoretical and empirical work leads toward the development of a computational model of curiosity that enables an embodied conversational virtual peer to sense and scaffold curiosity for elementary and middle school students.

SONIYA GADGIL-SHARMA
Insights from Personalized Learning Product Efficacy Pilots in K-12 Education

With recent advances in technology, adoption of personalized learning practices has greatly increased in K-12 education across the United States. Personalized learning shows great promise in leveling the playing field for underserved students by offering instruction tailored to their specific competencies (Pane et al., 2015). Yet, schools face significant hurdles in choosing educational technology products well aligned with their curricular goals and objectives from the multitude of options available (Morrison et al., 2014). Existing evidence about educational technology product effectiveness is scarce and school district leaders struggle to access, validate, and apply findings to their unique settings. In this short seminar, I will present some key findings from product efficacy pilot studies of educational technology products conducted at two school districts in the Pittsburgh area. We used a mixed methods approach to evaluate product efficacy on four key dimensions — student learning, student engagement, teacher support, and teacher satisfaction. I will highlight the contrasts between the two pilots, and describe key factors that led to a successful pilot.

HERNISA KACORRI
Supporting Orientation and Object Recognition for Blind People

In the field of assistive technology, large-scale user studies are hindered by the fact that potential participants are geographically sparse and longitudinal studies are often time consuming. In our work, we rely on remote usage data to perform large-scale and long-duration behavior analysis on users of iMove, a mobile app that supports the orientation of blind people. Our analysis provides insights into iMove's user base and can inform decisions for tailoring the app to diverse user groups, developing future improvements of the software, or guiding the design process of similar assistive tools. Another important factor for independent living of blind people is object identification. Blind people often need to identify objects around them, from packages of food to items of clothing. We explore personal object recognizers, where blind people train a mobile application with a few snapshots of objects of interest and provide custom labels. We adopt transfer learning with a deep learning system for user-defined multi-label k-instance classification. Experiments with blind participants demonstrate the feasibility of our approach, which reaches accuracies over 90% for some participants.
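
The abstract names the technique (transfer learning atop a deep network) but not its implementation. The sketch below illustrates the general idea only: a pretrained torchvision backbone serves as a frozen feature extractor, and a simple nearest-centroid classifier is trained from a few user-labeled snapshots. The backbone choice, file names, and single-label classification are assumptions for illustration; the actual system performs user-defined multi-label k-instance classification.

```python
# Illustrative sketch of a "personal object recognizer" via transfer learning.
# NOT the authors' implementation: the backbone, preprocessing, and
# nearest-centroid classifier are stand-ins for their multi-label k-instance
# deep learning system.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Pretrained backbone with its classification head removed -> 1280-d embeddings.
backbone = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
backbone.classifier = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(image_path):
    """Map one snapshot to an embedding from the frozen backbone."""
    img = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    return backbone(img).squeeze(0)

def train_recognizer(snapshots):
    """snapshots: {custom label: [a few image paths]} -> one centroid per label."""
    return {label: torch.stack([embed(p) for p in paths]).mean(dim=0)
            for label, paths in snapshots.items()}

def recognize(image_path, centroids):
    """Return the user-defined label whose centroid is nearest the query image."""
    query = embed(image_path)
    return min(centroids, key=lambda lbl: torch.dist(query, centroids[lbl]))

# Hypothetical usage: a blind user labels a few snapshots of personal objects.
# centroids = train_recognizer({"my mug": ["mug1.jpg", "mug2.jpg"],
#                               "meds bottle": ["meds1.jpg", "meds2.jpg"]})
# print(recognize("unknown.jpg", centroids))
```

Because the backbone stays frozen, "training" is just a few forward passes per label, which is what makes on-device personalization from a handful of snapshots plausible.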

Speaker Bios

Zhen Bai is a post-doctoral fellow at the ArticuLab. She leads the Sensing Curiosity in Play and Responding (SCIPR) project, which focuses on exploring the design space of playful learning environments that foster curiosity, exploration, and self-efficacy for science education. Zhen is passionate about designing innovative interfaces that augment our cognitive, emotional, and social experiences in a playful and accessible way. Her research interests include augmented reality, tangible interfaces, design for children, developmental psychology, education, and computer-supported collaborative work. She received her Ph.D. in Computer Science from the Graphics & Interaction Group at the University of Cambridge in 2015. Her Ph.D. research focused on designing augmented and tangible interfaces that support symbolic play for young children with and without autism spectrum condition.

Soniya Gadgil earned her doctorate in cognitive psychology, with a focus on learning and higher-order cognition, from the University of Pittsburgh in 2014. Her research focuses on applying cognitive science principles to the design, development, implementation, and assessment of educational technology products. In her current role as a post-doctoral researcher at the Learning Media Design Center, Soniya supports three school districts in the Pittsburgh area in conducting product efficacy pilots of educational technologies and implementing rapid-cycle feedback loops with the developers of those technologies.

Hernisa Kacorri is a Postdoctoral Fellow at the Human-Computer Interaction Institute at Carnegie Mellon University. As a member of the Cognitive Assistance Lab she works with Chieko Asakawa, Kris Kitani, and Jeffrey Bigham to help people with visual impairment understand the surrounding world. She recently received her Ph.D. in Computer Science from the Graduate Center CUNY, as a member of the Linguistic and Assistive Technologies Lab at CUNY and RIT, advised by Matt Huenerfauth. Her dissertation focused on developing mathematical models of human facial expressions for synthesizing animations of American Sign Language that are linguistically accurate and easy to understand. As part of the emerging field of human-data interaction, her work lies at the intersection of accessibility, computational linguistics, and applied machine learning. Her research was supported by NSF, CUNY Science Fellowship, and Mina Rees Dissertation Fellowship in the Sciences. During her Ph.D. Hernisa also visited, as a research intern, the Accessibility Research Group at IBM Research – Tokyo (2013) and the Data Science and Technology Group at Lawrence Berkeley National Lab (2015).

Seminar Video

This talk will examine the gaming industry and how humans, computers, and systems interact, drawing on the speaker's background in games, theatre, and entertainment technology.

Brooke White is the Senior Director of UX Research at Yahoo, covering all consumer products as well as advertising platforms and services. Her previous position was Senior Manager of Games User Research at Disney Interactive. In fact, Brooke started and led user research practices at three different companies: Yahoo, Disney, and Volition/THQ. Brooke has decades of experience spanning research, marketing, and production in desktop, console, and mobile games.

Faculty Hosts: Justine Cassell, Brad Myers

Reception to Follow

Over the past five years, my group—and probably many of you—have experienced a dramatically increased ability to do Design at Large: creating research that is widely used by real people and learning a ton from the experience. One shift that happens when we move from designing artifacts in the lab to designing experiences at large is that, inevitably, what we end up studying are complex sociotechnical systems. A lot of the behavior is emergent, and sometimes completely unexpected. The successes in this new world are tremendously exciting, but like all creative endeavors, there are lots of failures. One contributing factor is that designers often receive guidance that's based on faith rather than insight. We may be able to do better by building up a body of knowledge through design at large. In this talk, I'll try to distill some insights into this shift. I'll draw on examples from research from my group and others, as well as my students' and colleagues' experiences with startups.

Speaker Bio
Scott Klemmer is an Associate Professor of Cognitive Science and Computer Science & Engineering at UC San Diego, where he is a co-founder and co-director of the Design Lab. He previously served as Associate Professor of Computer Science at Stanford, where he co-directed the HCI Group, held the Bredt Faculty Scholar chair, and was a founding participant in the d.school. Scott has a dual BA in Art-Semiotics and Computer Science from Brown (with Graphic Design work at RISD), and a PhD in CS from Berkeley. His former graduate students are leading professors (at Berkeley, CMU, UCSD, & UIUC), researchers (Google & Adobe), founders (including Instagram & Pulse), social entrepreneurs, and engineers. He helped introduce peer assessment to online education, and created the first such online course. More than 200,000 have signed up for his interaction design class & specialization.

He has been awarded the Katayanagi Emerging Leadership Prize, Sloan Fellowship, NSF CAREER award, and Microsoft Research New Faculty Fellowship. Nine of his papers were awarded best paper or honorable mention at top HCI venues. He is on the editorial board of HCI and TOCHI; was program co-chair for UIST, the CHI systems area, and HCIC; and serves on the Learning at Scale steering committee. He advises university design programs globally. Organizations worldwide use his group’s open-source design tools and curricula.

View the livestream
