HCII

The vision of a smart environment, where invisible technologies seamlessly support people’s daily activities, is closer to becoming reality. After decades of research, smart things have become commonly available and are being adopted into people’s homes.

Despite the commercial optimism, several long-standing challenges remain unresolved. One such challenge is the “intelligibility” problem (Bellotti, 2001): how people can understand smart systems, and vice versa. Time and time again, studies of people living in smart environments have revealed mutual misunderstanding: smart things fail to decipher the intents behind sensed user behaviours, and people fail to understand the reasoning behind actions taken by smart things. Without making the machine intelligence comprehensible, systems of smart things can never become truly useful, usable, and enjoyable.

In this talk, I will share our thoughts on the “aesthetic of intelligence”, a phrase coined by Stephan Wensveen during a discussion: how to design the user experience of systems of smart things so that people perceive the machine intelligence behind such systems as aesthetically pleasing, rather than as autonomously making “ugly” decisions on their behalf!

Lin-Lin Chen is professor in the department of industrial and commercial design at National Taiwan University of Science and Technology (Taiwan Tech) and in the faculty of industrial design at Eindhoven University of Technology in the Netherlands. She received her B.S. degree from National Cheng Kung University in Taiwan and her Ph.D. from the University of Michigan at Ann Arbor in the United States. She was dean of the college of design at Taiwan Tech from 2004 to 2010, president of the Chinese Institute of Design from 2007 to 2008, and convener for the arts (and design) area committee of Taiwan’s National Science Council (now Ministry of Science and Technology) from 2009 to 2011. She is currently editor-in-chief of the International Journal of Design (SCI, SSCI, AHCI), vice president of the International Association of Societies of Design Research (IASDR), and a fellow of the Design Research Society. Her research focuses on designing the user experience of smart things, product aesthetics, interdisciplinary collaboration, and design innovation strategy.

Faculty Host: John Zimmerman

The history of computing is rich with examples of how computers, among their many purposes, serve as tools that enhance our ability to learn. As these computing technologies advance, so too do the ways in which we learn. Today, we are moving faster than ever towards Weiser’s seminal vision of technology woven into the fabric of our everyday lives. Not only have we adopted mobile and, more recently, wearable technologies that we depend on almost every hour of our waking lives; there is also an internet and, more recently, an Internet of Things, connecting us to each other and to our surrounding environments. This unique combination of instrumentation and connectivity offers an opportunity to fundamentally change the way in which we learn and share knowledge with one another.

In this talk, I will outline my research in what I define as next-generation learning experiences, which leverage instrumented and connected environments to aid in human learning and performance. I first demonstrate how instrumented and connected environments can be used to improve the way in which we learn to use complex software applications. I then discuss how wearable and IoT technologies can be similarly leveraged to aid in the learning, performance, and coordination of real-world physical tasks.

Tovi Grossman is a Distinguished Research Scientist at Autodesk Research, located in downtown Toronto. Dr. Grossman’s research is in HCI, focusing on input and interaction with new technologies. In particular, he has been exploring how emerging technologies, such as wearables, the Internet of Things, and gamification can be leveraged to enhance learning and knowledge sharing for both software applications and real-world physical tasks. This work has led to a number of technologies now in Autodesk products used by millions of users, such as Autodesk Screencast and Autodesk ToolClip™ videos. Dr. Grossman received a Ph.D. in Human-Computer Interaction from the Department of Computer Science at the University of Toronto. He has over 80 peer-reviewed journal and conference publications. Fourteen of these publications have received best paper awards and nominations at the ACM UIST and CHI conferences. He has also served as the Technical Program Co-Chair for the ACM CHI 2014 Conference, and the Program Co-Chair for the ACM UIST 2015 Conference.

Please join us for the inaugural Human-Computer Interaction Institute Demonstration Day.

HCII Demo Day is an afternoon event, taking place from 2:00 to 6:00 PM. Attendees can interact with research project demos in all of our lab spaces across the Carnegie Mellon University campus, as well as enjoy numerous opportunities to meet our faculty and students in our Ph.D., master's, and bachelor's programs. The evening will culminate with a student poster session and reception.

There is no registration fee, but we ask that you register for HCII Demo Day.

The applications we create are framed by the tools we use to create them. On one hand, tools codify effective practice and empower design. On the other, that same codification eventually constrains design. My research examines new approaches to interactive systems in light of this tradeoff, often with an emphasis on unlocking existing codifications to enable new designs. This talk will focus on three examples:

  • I will first present our work on unlocking data with interactive machine learning. Dominant models of interaction fail to support expressiveness and control in many emerging forms of everyday data. Exploring such domains as web image search and gesture recognition, our work shows how interactive machine learning can support people in extending the underlying language of an interaction (see the sketch after this list).
  • I will then present our work on using pixel-based reverse engineering to unlock existing graphical interfaces, allowing runtime modification of those interfaces without their source. Pixel-based methods allow prototyping new possibilities atop the existing ecosystem of applications and tools, accelerating innovation and informing the next-generation ecosystem.
  • Finally, I will consider how these challenges combine in the emergence of self-tracking and personal informatics. Data is no longer a distant concept, but an everyday barrier to interaction, self-knowledge, and personal empowerment. The tools we create to support these applications will define the future of everyday interaction with personal data.
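
The labeling-and-retraining loop from the first bullet can be made concrete with a short sketch. This is a hedged illustration only: the classifier choice, the single speed feature, and the labels below are invented placeholders, not the systems described in the talk.

```python
# A minimal, invented sketch of the interactive machine learning loop:
# a person labels one example at a time, the model retrains after each
# label, and the resulting model can be inspected between labels.
from sklearn.neighbors import KNeighborsClassifier

def interactive_loop(get_label, candidates, rounds=3):
    """Alternate between human labeling and model retraining."""
    X, y = [], []
    model = KNeighborsClassifier(n_neighbors=1)
    for _ in range(rounds):
        example = candidates.pop(0)   # next item shown to the person
        X.append(example)
        y.append(get_label(example))  # label supplied by the person
        model.fit(X, y)               # retrain on all labels so far
    return model

# Example: teaching a toy gesture recognizer from one speed feature.
labels = iter(["tap", "swipe", "swipe"])
model = interactive_loop(lambda ex: next(labels),
                         candidates=[[0.1], [0.9], [0.7]])
print(model.predict([[0.8]]))  # ['swipe']
```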

Given these examples, I argue research must consider not only specific applications, but also the assumptions codified by underlying tools and how those tools frame our understanding of what application designs are even possible.

James Fogarty is an Associate Professor of Computer Science & Engineering at the University of Washington. His broad research interests are in engineering interactive systems, often with a focus on the role of tools in developing, deploying, and evaluating new approaches to the human obstacles surrounding everyday adoption of ubiquitous computing and intelligent interaction. He is also Director of the DUB Group, the University of Washington's cross-campus initiative advancing research and education in Human-Computer Interaction and Design.

Faculty Host: Scott Hudson

Though one of the main benefits of educational technologies is the opportunity for personalization, when it comes to culture, most media is ‘one size fits all.’ There is much debate about whether and how to integrate students' native cultural behaviors into the classroom, particularly regarding the use of non-Standard English dialects that are heavily stigmatized within contemporary society. Over the past five years, we have examined the impact of using students’ native dialects of English within the design of a social educational technology called a Virtual Peer (VP). We have found that African American students who speak a dialect of English called African American Vernacular English (AAVE) perform better in science after speaking with a VP that uses both AAVE and Standard English, rather than one that uses Standard English exclusively. Furthermore, those students who worked with a bidialectal VP were less likely to demonstrate agent abuse, a factor that was negatively correlated with students' science performance.

In the proposed research, we will augment our current understanding of this phenomenon by collecting data on students' perceived social relationships with the VP. We hypothesize that AAVE-speaking students who work with a bidialectal VP will report higher rapport with the agent. We additionally hypothesize that this social relationship score will be a significant predictor of students' science performance. The primary contribution of this work is (1) to demonstrate the impact of one particular culturally-based design choice, dialect, within an educational technology on students' resulting science performance and social behavior, and (2) to explore the role of social relationship as a mediating factor between agent dialect and student performance. This work will provide evidence toward a long-standing debate within the field of education about the role of dialect in students' learning. We believe that this work has implications not just for the future design of educational technologies, but also for the design of even non-technological learning materials more broadly.

Thesis Committee:
Justine Cassell (Chair, LTI)
Amy Ogan
Marti Louw
Sandra Calvert (Georgetown University)

An estimated one million or more Deaf and severely hard-of-hearing individuals live in the United States. For many of these individuals, American Sign Language (ASL) is their primary means of communication. However, for most day-to-day interactions, native ASL users must either get by with a mixture of gestures and written communication in a non-native language or seek the assistance of an interpreter. Whereas advances towards automated translation between many other languages have benefited greatly from decades of research into speech recognition and Statistical Machine Translation, ASL’s lack of aural and written components has limited exploration into automated translation of ASL.

Previous research efforts into sign language detection have met with limited success, primarily due to inaccurate handshape tracking. Without this vital component, research into ASL detection has been limited to isolated components of ASL or to restricted vocabulary sets that reduce the need for accurate handtracking. However, improvements in 3D cameras and advances in handtracking techniques give reason to believe that some of these technical sensing limitations may no longer exist. By combining state-of-the-art handtracking techniques with ASL language modeling, there is an unexplored opportunity to develop a system capable of fully capturing ASL.

In this work, I propose to develop the first ASL translation system capable of detecting all five necessary parameters of ASL (Handshape, Hand Location, Palm Orientation, Movement, and Non-Manual Features). This work will build on existing handtracking techniques and explore the features that are best capable of discriminating the 40 distinct handshapes used in ASL. An ASL language model will be incorporated into the detection algorithm to improve sign detection. Finally, the system will output a form of transcribed ASL that will allow for the separation of sign detection and ASL-to-English language translation.
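
To make the role of the language model concrete, below is a minimal sketch of one way per-segment classifier scores could be combined with a bigram model over signs during detection. The signs, probabilities, and the Viterbi decoder shown are illustrative assumptions, not details of the proposed system.

```python
import numpy as np

# Invented example: three signs, classifier posteriors for two video
# segments (e.g., derived from handshape and location features), and a
# bigram language model giving P(next sign | previous sign).
SIGNS = ["HELLO", "YOU", "YES"]
posteriors = np.array([[0.5, 0.3, 0.2],    # segment 1
                       [0.3, 0.3, 0.4]])   # segment 2: alone, favors YES
bigram = np.array([[0.1, 0.7, 0.2],        # after HELLO
                   [0.3, 0.1, 0.6],        # after YOU
                   [0.4, 0.4, 0.2]])       # after YES

def decode(posteriors, bigram):
    """Viterbi decoding: combine classifier and language-model scores."""
    n_seg, n_signs = posteriors.shape
    log_p, log_lm = np.log(posteriors), np.log(bigram)
    best = log_p[0].copy()                      # first-segment scores
    back = np.zeros((n_seg, n_signs), dtype=int)
    for t in range(1, n_seg):
        scores = best[:, None] + log_lm + log_p[t]  # prev sign x next sign
        back[t] = scores.argmax(axis=0)
        best = scores.max(axis=0)
    path = [int(best.argmax())]                 # trace back the best path
    for t in range(n_seg - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return [SIGNS[i] for i in reversed(path)]

# The language model overrides the weak classifier evidence in segment 2:
print(decode(posteriors, bigram))  # ['HELLO', 'YOU'], not ['HELLO', 'YES']
```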

Thesis Committee:
Dan Siewiorek (Chair)
Anind Dey
Carolyn Rose
Roberta Klatzky (Psych/HCII)
Asim Smailagic (ECE)

Text entry is an important form of input regularly performed by computer users. However, there are many situations in which users cannot enter text using a physical QWERTY keyboard. One aspect of my research over the past five years has focused specifically on how to enable users to input text in alternate ways, and has resulted in a variety of novel text input methods, such as gesture-based approaches that support eyes-free text entry and dwell-free eye-typing. In this talk, however, I will discuss findings from formative studies of the special requirements and capabilities of users who are unable to use a physical keyboard; these findings suggest that such users need additional support for communicating with others. I will also discuss the important challenges that must be addressed in the development of novel sensor-based communication tools, and future directions that should be considered.

Khai Truong is an Associate Professor in the Department of Computer Science at the University of Toronto. Khai received a Ph.D. degree in Computer Science and a Bachelor's degree in Computer Engineering with highest honors from the Georgia Institute of Technology. He has been an active ubicomp researcher for nearly 20 years. His research lies at the intersection of human-computer interaction (HCI) and ubiquitous computing; he investigates tools and methods to support the development of novel ubiquitous computing systems, as well as techniques and models to facilitate user interactions with off-the-desktop computing devices and services. His current work also includes the design and evaluation of assistive technologies and context-sensing applications.

Faculty Host: Jason Hong

Intelligent Tutoring Systems are effective for improving students' learning outcomes. However, constructing tutoring systems that are pedagogically effective has been widely recognized as a challenging problem. What is needed is a tool that leverages prior learning science theory to support tutor design, building, and testing. The proposed thesis explores how computational models of apprentice learning, or computer models that learn interactively from worked examples and correctness feedback, can be used to support these tutor development phases.

In my prior work, I created the Apprentice Learner Architecture, which leverages a computational theory of apprentice learning to instantiate alternative models that align with the theory. I have used this architecture to search for two kinds of models: 1) models that fit human behavior and 2) efficient models. Instructional designers can use human-like models as learner "crash dummies" to simulate students interacting with the tutors. I have used one of these models to correctly predict which of two fractions tutor designs will yield better student performance. In other work, I have used efficient models to make tutor authoring easier for non-programmers. Like humans, apprentice learner models can be taught by domain experts through worked examples and feedback. I showed that the time needed to author an Algebra tutor by interactively training an apprentice learner model is less than half the time needed to author a tutor using another state-of-the-art authoring-by-demonstration approach.
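
As a rough illustration of this teach-by-demonstration-and-feedback interaction, the toy sketch below memorizes demonstrated steps and discards rules that receive negative feedback. It is a deliberately simplified stand-in with exact-match rules only, not the actual Apprentice Learner Architecture, and all problem names shown are invented.

```python
# Toy stand-in for an apprentice learner: it is taught through worked
# examples (demonstrations) and correctness feedback, like the simulated
# students described above. Real models generalize far beyond the
# exact-match rules used here.
class ToyApprentice:
    def __init__(self):
        self.rules = {}  # maps a state (as frozen features) to an action

    def demonstrate(self, state, action):
        """Worked example: record which action applies in this state."""
        self.rules[frozenset(state.items())] = action

    def attempt(self, state):
        """Propose an action for a state, if a learned rule matches."""
        return self.rules.get(frozenset(state.items()))

    def feedback(self, state, correct):
        """Correctness feedback: drop rules that produced wrong actions."""
        if not correct:
            self.rules.pop(frozenset(state.items()), None)

# Authoring a fraction-addition tutor step by teaching the model:
model = ToyApprentice()
step = {"problem": "1/2 + 1/4", "field": "converted-fraction"}
model.demonstrate(step, "2/4")
print(model.attempt(step))            # '2/4'
model.feedback(step, correct=False)   # author marks the attempt wrong
print(model.attempt(step))            # None: the rule was discarded
```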

In my proposed work, I plan to develop new apprentice learner models that better fit the human tutor data than my initial models, and I aim to show the variety of ways that simulated data from these models can be used as a substitute for actual classroom data. Next, I plan to demonstrate the generality of these models by simulating student behavior in seven tutoring systems that teach multiple kinds of knowledge across multiple domains. I will use each tutor to test different aspects of my models and the computational theory underlying them. Finally, I plan to showcase the authoring capabilities of apprentice learner models by using them to author tutoring systems for two complex domains, experimental design and Python programming. Ultimately, the goal of this work is to develop a Model Human Learner—similar to Card, Moran, and Newell's (1986) Model Human Processor—that encapsulates psychological and learning science findings in a format that researchers and instructional designers can use to create effective tutoring systems.

Thesis Committee:
Ken R. Koedinger (Chair)
Vincent Aleven
John R. Anderson (Psychology/HCII)
Pat Langley, External (University of Auckland)

Inspired by cybernetics and artificial intelligence researchers who modeled intelligence in hardware and software, architects in the 1960s and 70s applied computational practices to interfaces, rooms, buildings, and cities. In so doing, they began to build feedback, cognition and intelligence into their work at the level of their design processes and their interactions with the user. Some modeled cybernetics and artificial intelligence in their architecture and design projects; others engaged directly and shared funding with cyberneticists and AI researchers. The process worked the other way as well: as technologists and system designers sought to address complex problems in the real world, they turned to architecture and architectural metaphors. This collaborative, hybrid space between architecture and computation gave rise to a new, hybrid interactivity that didn’t belong to one field or set of practices alone. It poured the foundations for interaction design and HCI conventions—and has ramifications for contemporary questions on the impact of machine learning and other AI practices.

Dr. Molly Wright Steenson is an associate professor in the CMU School of Design. She is the author of the forthcoming book Architecting Interactivity (MIT Press, 2017). She also leads the Doctor of Design (DDes) program and has a courtesy appointment with the School of Architecture. Prior to CMU, she was an assistant professor of journalism at the University of Wisconsin-Madison, an adjunct faculty member at Art Center College of Design in Pasadena, and a resident associate professor at the Interaction Design Institute Ivrea in Ivrea, Italy, in the early 2000s. She has worked professionally with the web for Fortune 500 companies and innovative startups since 1995. Steenson holds a PhD in architecture from Princeton University and a Master's in Environmental Design from the Yale School of Architecture.

Faculty Host: Jodi Forlizzi

At ANSYS, we create simulation software that is a key component of the product development process, helping to validate the effectiveness of designs before they are built. Simulation techniques impact all types of products, from automobiles to circuits, pipes, and airplanes. ANSYS is working toward a vision in which simulation software does not rest in the hands of a few expert users but is democratized, so that all engineers are empowered to incorporate simulation into their decision making on product design.

To achieve this vision, simulation software must be flexible enough to serve both the highly trained engineer who has mastered the software and the engineer who wants a simple, streamlined way to run simulations and retrieve the information needed to make a decision. Understanding these engineers' different workflows, goals, and needs, and successfully incorporating them into the next generation of simulation software, will depend on the user-centered design process. As at many companies that create enterprise software, user experience techniques can be foreign, and there is often confusion, resistance, and even hostility toward changing the way the software is created. We have taken a thoughtful approach to introducing aspects of the user-centered design process to improve and supplement the way ANSYS creates simulation software, and to help ensure our vision is achieved.

Imran Riaz is a passionate and experienced UX leader. He currently leads the UX group at ANSYS, a multi-billion-dollar software company based right here in Pittsburgh, where he manages a diverse, multi-disciplinary team across Design Research, Product Design, and Experience Strategy practices. He has worked in the software industry for over 20 years, with an emphasis on great user experience as well as on promoting the UX discipline itself. With that purpose in mind, Imran founded the Midwest UX conference in 2010 to promote UX leadership and the local UX community in the Midwest. He also served as the North America Director for UXPA, and was a founding member and President of the UXPA Columbus chapter.

In his spare time, he volunteers in support of educational and other worthy causes. He is a UX leader with a local presence but global reach; he speaks publicly on UX and leadership and is active at both the local and national levels.

Faculty Host: Brad Myers
