HCII

Embodied empathic agents are characters that, by their actions and behaviours, are able to show empathy (or not) for other characters; and/or characters that, by their appearance, situation, and behaviour, are able to trigger empathic reactions in the user.

In this talk we discuss the theory behind Embodied Empathic Agents, whether graphical or robotic: what we can do now, and what open research questions remain. What are the key theoretical and technological advances already made, and which are still needed? Are Empathic Agents effective in tutoring applications, or are there problematic issues? How do we know?

We use the recent EMOTE project, which involved building an empathic robot tutor, as a case study in exploring these issues.

Professor Ruth Aylett researches affective agent architectures, intelligent graphical characters and social robots among other topics. She has led and taken part in a succession of mostly European-funded projects in these areas, with a number of educational applications, such as the FearNot! anti-bullying system, the Traveller system for teaching inter-cultural sensitivity, and the EMOTE empathic robot tutor. She has implemented a number of cognitively-inspired architectures and explores middle-out approaches in which goal-directed and sensor-directed components can work together to combine direction with responsiveness in intelligent agents. She has around 250 peer-reviewed publications in these areas.

Social capital is a construct describing the resources one can draw from social network connections. This talk will describe social capital as a concept, and the ways it has been described, operationalized, and designed in HCI research. Using Paul Resnick’s classic “Bowling Together” article as a frame, we will look at how HCI researchers have described the affordances of computing systems that might promote new forms of social capital, and then dive deeply into recent research by Lampe and colleagues on the use of Facebook for building social capital through active engagement.

Cliff Lampe is an Associate Professor in the School of Information at the University of Michigan. His research examines the positive outcomes of interaction in online communities, ranging from development of interpersonal relationships, to nonprofit collective action, to new forms of civic engagement. His work on Facebook and social capital has been heavily cited in a range of disciplines. Dr. Lampe serves as the Vice President of Publications for SIGCHI, the Technical Program Chair for CHI2017, and the Steering Committee Chair for the CSCW community.

Faculty Host: Robert Kraut

I. Michael Eagle
Predicting Individual Differences for Learner Modeling in Intelligent Tutors from Previous Learner Activities

This study examines how accurately individual student differences in learning can be predicted from prior student learning activities. Bayesian Knowledge Tracing (BKT) predicts learner performance well and has often been employed to implement cognitive mastery. Standard BKT individualizes parameter estimates for knowledge components, but not for learners. Studies have shown that individualizing parameters for learners improves the quality of BKT fits and can lead to very different (and potentially better) practice recommendations. These studies typically derive best-fitting individualized learner parameters from learner performance in existing data logs, making the methods difficult to deploy in actual tutor use. In this work, we examine how well BKT parameters in a tutor lesson can be individualized based on learners’ prior performance in reading instructional text, taking a pretest, and completing an earlier tutor lesson. We find that best-fitting individual difference estimates do not directly transfer well from one tutor lesson to another, but that predictive models incorporating variables extracted from prior reading, pretest, and tutor activities perform well when compared to a standard BKT model and a model with best-fitting individualized parameter estimates.
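
For readers unfamiliar with BKT, the sketch below shows the standard per-step update together with one way per-learner individualization could enter the model. The parameter values and the pretest-based adjustment of the initial-knowledge parameter are illustrative assumptions, not the model evaluated in this work.

```python
# A minimal BKT sketch with a hook for per-learner individualization.
# Parameter values and the pretest-based adjustment are illustrative
# assumptions, not the model evaluated in this study.

def bkt_update(p_know, correct, p_slip, p_guess, p_learn):
    """One BKT step: Bayesian posterior on knowing the skill, then learning transition."""
    if correct:
        posterior = p_know * (1 - p_slip) / (
            p_know * (1 - p_slip) + (1 - p_know) * p_guess)
    else:
        posterior = p_know * p_slip / (
            p_know * p_slip + (1 - p_know) * (1 - p_guess))
    return posterior + (1 - posterior) * p_learn

def predict_correct(p_know, p_slip, p_guess):
    """Probability that the learner's next response is correct."""
    return p_know * (1 - p_slip) + (1 - p_know) * p_guess

# Standard BKT: one parameter set per knowledge component, shared by all learners.
P_SLIP, P_GUESS, P_LEARN = 0.1, 0.25, 0.2

def individualized_p_init(pretest_score, base=0.3, weight=0.4):
    """Hypothetical individualization: shift initial knowledge by pretest score in [0, 1]."""
    return min(0.95, max(0.05, base + weight * (pretest_score - 0.5)))

p_know = individualized_p_init(pretest_score=0.8)
for correct in [True, False, True, True]:
    print(f"P(correct next) = {predict_correct(p_know, P_SLIP, P_GUESS):.3f}")
    p_know = bkt_update(p_know, correct, P_SLIP, P_GUESS, P_LEARN)
```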

II. Swarup Kumar Sahoo
Managing Privacy of Mobile Applications

Mobile applications today are highly privacy-invasive. Apps request a lot of private and sensitive data without properly informing users about how it will be used. As part of a privacy-enhanced Android, we are building tools and techniques to give users full control over their private data. One main focus of our work is making the purposes of private data use an essential part of apps, and using those purposes to detect and prevent potential privacy issues. Our current approach has apps explicitly declare why sensitive data is being used, and then uses static/dynamic program analysis and machine learning techniques to check and enforce those declared purposes. We are also building new kinds of user interfaces that leverage purposes, and using crowdsourcing techniques to help users make informed decisions when configuring their privacy preferences for various apps.
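
As a toy illustration of the purpose-declaration idea, the sketch below enforces declared purposes at call time. The purpose names, policy store, and decorator are hypothetical; the actual work targets Android apps and relies on static/dynamic analysis, not a Python runtime check.

```python
# Toy illustration of purpose-declared access to sensitive data. All names
# here are hypothetical placeholders, not the project's Android implementation.

USER_POLICY = {
    "location": {"navigation"},   # user permits location use only for navigation
    "contacts": set(),            # user denies all uses of contacts
}

class PurposeViolation(Exception):
    pass

def requires(data_type, purpose):
    """Decorator: code must declare why it accesses a type of sensitive data."""
    def wrap(fn):
        def inner(*args, **kwargs):
            allowed = USER_POLICY.get(data_type, set())
            if purpose not in allowed:
                raise PurposeViolation(
                    f"{fn.__name__}: '{data_type}' not permitted for '{purpose}'")
            return fn(*args, **kwargs)
        return inner
    return wrap

@requires("location", purpose="navigation")
def route_to(destination):
    return f"routing to {destination}"

@requires("location", purpose="advertising")
def targeted_ad():
    return "ad based on location"

print(route_to("airport"))   # allowed by the user's policy
try:
    targeted_ad()            # blocked: location not permitted for advertising
except PurposeViolation as e:
    print("blocked:", e)
```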

III. Ran Liu
Bridging the Gap Between Educational Data Mining and Improved Classroom Instruction

The increasing use of educational technologies in classrooms is producing vast amounts of process data that capture rich information about learning as it unfolds. The field of Educational Data Mining (EDM) has made great progress in using the information present in log data to build models that improve instruction and advance the science of learning. However, there have been some limitations. The data used to produce such models has frequently been limited to the actions that educational technologies themselves can log. A major challenge in incorporating more contextually rich data streams into models of learning is collecting and integrating data from different sources and at different grain sizes. In my first talk, I will present methodological advances we have made in automating the integration of log data with additional multi-modal (e.g., audio, screen video, webcam video) data streams. I will also show a case study of how including the multi-modal streams in data analysis can improve the predictive fit of student models and yield important pedagogical implications. More broadly, this work represents an advance in integrating rich qualitative details of students’ learning contexts into the quantitative approaches of EDM research.

Another limitation of EDM research thus far is that findings remain largely theoretical with respect to their impact on learning outcomes and efficiency. The most important, rigorous, and firmly grounded evaluation of a data-driven discovery is whether it leads to modifications to education that produce better student learning. Such an evaluation has been referred to as "closing the loop" (e.g., Koedinger et al., 2013), as it completes the cycle of system design, deployment, data analysis, and discovery leading back to design. In my second talk, I will present new results that “close the loop” (via a classroom-implemented randomized controlled trial) on a data-driven, machine-automated method of improving knowledge component models.
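
Much of the integration challenge reduces to aligning streams logged at different grain sizes on a common timeline. The pandas sketch below illustrates one common approach, attaching the most recent video-coded segment to each tutor log event; the column names and codes are invented for illustration.

```python
# Sketch of aligning fine-grained tutor log events with a coarser-grained
# stream (e.g., hand-coded segments from screen/webcam video) by timestamp.
# Column names and values are invented for illustration.
import pandas as pd

log = pd.DataFrame({
    "time_s": [3.2, 7.9, 15.4, 21.0],
    "event":  ["attempt", "hint_request", "attempt", "attempt"],
    "correct": [0, None, 0, 1],
})
video_codes = pd.DataFrame({
    "time_s": [0.0, 10.0, 20.0],
    "off_task": [False, True, False],   # coded from the video stream
})

# merge_asof attaches to each log event the most recent video-coded segment,
# handling the grain-size mismatch between the two streams.
merged = pd.merge_asof(log.sort_values("time_s"),
                       video_codes.sort_values("time_s"),
                       on="time_s", direction="backward")
print(merged)
```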

SANGWON BAE
Using Passively Collected Sedentary Behavior to Predict Hospital Readmission

Hospital readmissions are a major problem facing health care systems today, costing Medicare alone US$26 billion each year. Being readmitted is associated with significantly shorter survival, and is often preventable. Predictors of readmission are still not well understood, particularly those under the patient’s control: behavioral risk factors. Our work evaluates the ability of behavioral risk factors, specifically Fitbit-assessed behavior, to predict readmission for 25 postsurgical cancer inpatients. Our results show that the sum of steps, maximum sedentary bouts, and the frequency of breaks in sedentary time during waking hours are strong predictors of readmission. We built two models for predicting readmission: a Steps-only model and a Behavioral model that adds information about sedentary behaviors. The Behavioral model (88.3%) outperforms the Steps-only model (67.1%), illustrating the value of passively collected information about sedentary behaviors. Indeed, passive monitoring of behavioral data (e.g., mobility) after major surgery creates an opportunity for early risk assessment and timely interventions.
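
The sketch below mirrors the two-model comparison on synthetic data: a Steps-only model versus a Behavioral model with added sedentary features. The features follow the abstract, but the data, classifier choice, and pipeline are assumptions rather than the paper's actual method.

```python
# Synthetic sketch of the Steps-only vs. Behavioral model comparison.
# Data, classifier, and pipeline are assumptions, not the paper's method.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 200  # synthetic patients (the actual study had 25 inpatients)
steps = rng.normal(4000, 1500, n)       # daily sum of steps
max_sed_bout = rng.normal(90, 30, n)    # longest sedentary bout (minutes)
sed_breaks = rng.normal(20, 6, n)       # breaks in sedentary time per day
# Synthetic outcome loosely tied to low activity and long sedentary bouts.
risk = -0.001 * steps + 0.02 * max_sed_bout - 0.05 * sed_breaks
readmitted = (risk + rng.normal(0, 1, n) > np.median(risk)).astype(int)

models = {
    "Steps-only": np.c_[steps],
    "Behavioral": np.c_[steps, max_sed_bout, sed_breaks],
}
for name, X in models.items():
    clf = make_pipeline(StandardScaler(), LogisticRegression())
    acc = cross_val_score(clf, X, readmitted, cv=5).mean()
    print(f"{name}: {acc:.2f} cross-validated accuracy")
```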

Dr. Sangwon Bae is a postdoctoral researcher in the Human-Computer Interaction Institute at Carnegie Mellon University. She earned her Ph.D. in Cognitive Science and Engineering from Yonsei University and worked at SK and Samsung before returning to academia. Her research has focused on using smartphones and wearable trackers to understand human perception and to develop models of behavior based on mobile systems. The goal of her work is to examine the feasibility and acceptability of collecting continuously sensed contextual information and active patient-reported symptom reports, and to use these types of information to develop algorithms that accurately predict behavior for use in health monitoring and treatment delivery. In this talk, she will present a paper recently accepted for publication, “Using Passively Collected Sedentary Behavior to Predict Hospital Readmission” (UbiComp 2016), which shows for the first time that a machine-learning model using only passively sensed behavioral data collected from an off-the-shelf wearable fitness tracker accurately predicts 30-day hospital readmissions for postsurgical cancer patients.

JOEL CHAN
Accelerating innovation with computational analogy: Challenges and new solutions

Ideas from research papers in a different domain can trigger creative breakthroughs. But most papers outside of one’s domain are not useful: the ones that trigger breakthroughs are analogically related to the target domain (e.g., share problems/solutions). To help people find useful papers outside of their domain, we need to build computational systems that can reason by analogy. In this talk, I will argue that the central challenges are twofold: 1) analogical reasoning requires structured representations, and 2) automatically transforming the unstructured text of papers into analogy-ready structured representations is hard. I will then describe our ongoing efforts to create a system that extracts structured representations from scientific papers, leveraging the complementary strengths of machine learning and crowdsourcing.
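
A toy example can make the role of structure concrete: if each paper is represented by separate "purpose" and "mechanism" vectors, good analogies are papers with similar purposes but dissimilar mechanisms, a distinction flat surface similarity cannot make. The papers and vectors below are invented, and a real system would derive such representations via machine learning and crowdsourcing.

```python
# Toy illustration of why analogy needs structured representations: analogies
# are documents with similar purposes but dissimilar mechanisms. The papers
# and vectors here are invented placeholders.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

papers = {
    #                purpose vector       mechanism vector
    "suction cup":  ([0.9, 0.1, 0.0],    [0.8, 0.1, 0.1]),
    "gecko tape":   ([0.85, 0.15, 0.0],  [0.1, 0.9, 0.0]),  # same goal, new means
    "vacuum pump":  ([0.1, 0.2, 0.7],    [0.7, 0.2, 0.1]),  # similar surface, off-goal
}

query_purpose, query_mech = papers["suction cup"]
for name, (purpose, mech) in papers.items():
    if name == "suction cup":
        continue
    score = cosine(query_purpose, purpose) - cosine(query_mech, mech)
    print(f"{name}: analogy score {score:.2f}")  # higher = better analogy
```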

Joel Chan is a Postdoctoral Research Fellow in the Human-Computer Interaction Institute at Carnegie Mellon University. He received his PhD in Cognitive Psychology from the University of Pittsburgh in 2014. Joel's research integrates cognitive science and human-computer interaction to understand and improve technological support for creative and collective intelligence. His work has been recognized with a Best Paper Award at the ASME Design Theory and Methodology conference and a Best Paper of the Year award from the Design Studies journal, and has been supported by an NSF Doctoral Dissertation Improvement Grant.

IRENE-ANGELICA CHOUNTA
Linking Dialogue with Student Modeling to Create an Enhanced Micro-adaptive Tutoring System

The learning process and its outcomes depend greatly on the social interaction between teachers and students and, in particular, on the proficient and focused use of language through written text or discussions. Our overarching goal in this project is to better understand how to make automated tutorial dialogues effective and adaptive to student characteristics, such as prior knowledge. The specific goal of our current project is to develop an adaptive, natural-language tutoring system, driven by a student model, which can effectively carry out reflective conversations with students after they solve physics problems. Towards this end, we continue our work in identifying linguistic features of tutoring that predict learning gains, and extend it by characterizing the “level of support” to provide to students based on their current level of understanding of particular physics concepts and principles, as dynamically captured by the student model.

In this talk, I will describe the features of dialogic discourse underlying “level of support” that we have identified through the analysis of human-to-human tutorial dialogues, as well as the construction and application of a coding scheme for the characterization of the “level of support”. I will present initial teacher feedback on dialogues that apply these features to coach students at different levels. I will also discuss how this line of research affects the authoring of tutorial dialogues used by an intelligent tutoring system for students who exhibit different levels of understanding.
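
As a concrete (and hypothetical) illustration of how a student model might drive the choice of support level, the sketch below maps mastery estimates to dialogue strategies. The thresholds, level names, and prompts are invented placeholders, not the project's coding scheme.

```python
# Toy policy for choosing a dialogue "level of support" from a student
# model's mastery estimates. Thresholds, level names, and prompts are
# invented placeholders, not the project's actual coding scheme.

def level_of_support(p_mastery):
    if p_mastery > 0.8:
        return "low"      # e.g., open prompt: "Why does the ball slow down?"
    elif p_mastery > 0.5:
        return "medium"   # e.g., hint at the relevant principle
    else:
        return "high"     # e.g., step-by-step guided reasoning

student_model = {"newtons_second_law": 0.35, "friction": 0.72, "free_fall": 0.9}
for concept, p in student_model.items():
    print(f"{concept}: p_mastery={p:.2f} -> {level_of_support(p)} support")
```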

Since April 2016 Irene-Angelica Chounta has held a post-doctoral researcher position in the Human-Computer Interaction Institute, Carnegie Mellon University. She works with Patricia Albacete, Pamela Jordan, Sandra Katz (Learning Research and Development Center, University of Pittsburgh) and Bruce McLaren (HCII, CMU) on a joint project between CMU and the University of Pittsburgh that aims to develop a student model to support a physics tutorial dialogue system.


The emergence of low-cost fabrication technology (most notably 3D printing) has brought us a dawn of making, promising to empower everyday users with the ability to fabricate physical objects of their own design. However, the technology itself is innately oblivious of the physical world—things are, in most cases, assumed to be printed from scratch in isolation from the real world objects they will be attached to and function with.

To bridge this ‘gulf of fabrication’, my thesis research focuses on developing fabrication techniques with tool integration to enable users to expressively create designs that can be attached to and function with existing real-world objects. Specifically, my work explores techniques that leverage the 3D printing process to create attachments directly over, onto, and around existing objects; a design tool further enables people to specify and generate adaptations that can be attached to and mechanically transform existing objects in user-customized ways; a mixed-initiative approach allows people to create functionally valid designs that address real-world relationships with other objects; finally, by situating the fabrication environment in the real world, a suite of virtual tools would allow users to design, make, assemble, install, and test physical objects in situ, directly within the context of their usage.

Overall, my thesis aims to make fabrication real: innovation in design tools harnesses fabrication technology, enabling things to be made by real people, to address real usage, and to function with real objects in the world.

Thesis Committee:
Scott Hudson (Co-Chair)
Stelian Coros (Co-Chair)
Jodi Forlizzi
Tovi Grossman (Autodesk Research)

Copy of Proposal Document

Tangible interfaces and mixed-reality environments have the potential to bring together the advantages of physical and virtual environments to improve children’s learning and enjoyment. However, there are too few controlled experiments that investigate whether interacting with physical objects in the real world, accompanied by interactive feedback, actually improves student learning compared to flat-screen interaction. Furthermore, we do not have a sufficient empirical basis for understanding how a mixed-reality environment should be designed to maximize learning and enjoyment for children.

I created EarthShake, a mixed-reality game bridging physical and virtual worlds via a Kinect depth camera and a specialized computer vision algorithm, to help children learn physics. I have conducted three controlled experiments with EarthShake that have identified features that are more and less important to student learning and enjoyment. The first experiment examined the effect of observing physical phenomena and of collaboration (pairs versus solo), while the second experiment replicated the effect of observing physical phenomena while also testing whether adding simple physical control, such as shaking a tablet, improves learning and enjoyment. The experiments revealed that observing physical phenomena in the context of a mixed-reality game leads to significantly more learning (five times more) and enjoyment compared to equivalent screen-only versions, while adding simple physical control or changing group size (solo or pairs) has no significant effect. Furthermore, gesture analysis provides insight as to why experiencing physical phenomena may enhance learning.

My thesis work further investigates which features of a mixed-reality system yield better learning and enjoyment, especially given the limited experimental results from other mixed-reality learning research. Most mixed-reality environments, including tangible interfaces, currently emphasize open-ended exploration and problem solving, and are claimed to be most effective when used in a discovery-learning mode with minimal guidance. I investigated how critical interactive guidance and feedback are to learning and enjoyment in the context of EarthShake. In a third experiment, I compared the learning and enjoyment outcomes of children interacting with a version of EarthShake that supports guided discovery, a version that supports exploration in discovery-learning mode, and a version that combines both. The results reveal that the Guided-discovery and Combined conditions, in which children engage in guided-discovery activities with a predict-observe-explain cycle and interactive feedback, yield better explanation and reasoning. Thus, guided discovery in a mixed-reality environment helps children formulate explanatory theories. However, the results also suggest that children are better able to activate explanatory theory in action when guided-discovery activities are combined with exploratory activities in the mixed-reality system. Adding exploration to guided-discovery activities fosters not only better learning of the balance/physics principles, but also better application of those principles in a hands-on, constructive problem-solving task.

My dissertation contributes to the literature on the effects of physical observation and mixed-reality interaction on students’ science learning outcomes. Specifically, I have shown that a mixed-reality system (i.e., one combining physical and virtual environments) can lead to better learning and enjoyment outcomes than screen-only alternatives, based on several measures. My work also contributes to the literature on exploration and guided-discovery learning by demonstrating that guided-discovery activities in a mixed-reality setting can improve children’s learning of fundamental principles by helping them formulate explanations, and that combining an engineering approach with scientific thinking practice (by combining exploration and guided-discovery activities) can lead to better engineering outcomes, such as transfer to constructive hands-on activities in the real world. Lastly, my work contributes from a design perspective: a new mixed-reality educational system, developed through an iterative design methodology, that bridges physical and virtual environments to improve children’s learning and enjoyment in a collaborative way, fostering productive dialogue and scientific curiosity in museum and school settings.

Thesis Committee:
Kenneth Koedinger (Co-Chair, HCII/Psych)
Scott Hudson (Co-Chair, HCII)
Jessica Hammer (HCII/ETC)
Kevin Crowley (LRDC, University of Pittsburgh)

Copy of Thesis Document

People generate vast quantities of digital information as a product of their interactions with digital systems and with other people. As this information grows in scale and becomes increasingly distributed through different accounts, identities, and services, researchers have studied how best to develop tools to help people manage and derive meaning from it. Looking forward, these issues acquire new complexity when considered in the context of the information that is generated across one’s life or across generations. The long-term lens of a multigenerational timeframe elicits new questions about how people can engage with these heterogeneous collections of information and how future generations will manage and make sense of the information left behind by their ancestors.

My prior work has examined how people perceive the role that systems will play in the long-term availability, management, and interpretation of digital information. This work demonstrates that while people certainly ascribe meaning to aspects of their digital information and believe that there is value held in their largely uncurated digital materials, it is not clear how or if that digital information will be transmitted, interpreted, or maintained by future generations.

Building on that earlier work, my dissertation work investigates how we can develop systems that foster engagement with lifetimes or generations of digital information in ways that are sensitive to how people define and communicate their identity and how they reflect on their life and experiences. In addition, this work highlights the ways in which people engage with memories, artifacts, and experiences of people who have passed away and considers how digital systems and information can support those practices. In so doing, this work contributes a better understanding of how digital systems, and the digital information people create over the course of their lives, intersect with the processes of death, dying, and remembrance.

Thesis Committee:
Jodi Forlizzi (Co-Chair)
Aisling Kelliher (Co-Chair, Virginia Tech)
Laura Dabbish (HCII/Heinz)
Dan Cosley (Cornell University)

What do wearable computers and digital fabrication have in common? They are both readily available but remain difficult to use. In this talk, I will discuss the work in the Future Everyday Technology Research Lab (FETLab) around increasing the speed of interaction with these devices: for mobile devices, from seconds to sub-second speeds, and with fabrication devices from hours to minutes.

Dr. Daniel Ashbrook is an Assistant Professor in the Golisano College of Computing and Information Sciences at the Rochester Institute of Technology. He earned his B.S., M.S., and Ph.D. in Computer Science from the Georgia Institute of Technology and worked at Nokia Research and Samsung before returning to academia. He founded and directs the Future Everyday Technology Research Lab (FETLab). His research focuses on new interaction techniques, devices, and applications, most recently for helping non-experts more easily understand and use digital fabrication technology. He also conducts research into non-obtrusive interaction techniques for wearable and mobile computing devices.

Faculty Host: Jen Mankoff

Contact Marian if you plan to attend.

Context-aware computing utilizes information about users and/or their environments in order to provide relevant information and services. To date, however, most context-aware applications only take advantage of context that is produced either on the device they are running on or on external devices that are known beforehand. While there are many application domains where sharing context is useful and/or necessary, creating these applications is currently difficult because there is no easy way for devices to share information without 1) explicitly being directed to do so, or 2) some form of advance user coordination (e.g., sharing credentials and/or IP addresses, installing and running the same software). This makes these techniques useful when the need to share context is known a priori, but impractical for the one-time, opportunistic encounters that make up the majority of users’ lives.

To address this problem, this thesis presents the Group Context Framework (GCF), a software framework that allows devices to form groups and share context with minimal prior coordination. GCF lets devices openly discover and request context from each other. The framework then lets devices intelligently and autonomously form opportunistic groups and work together without requiring either the application developer or the user to know of these devices beforehand. GCF supports use cases where devices only need to share information once or spontaneously. Additionally, the framework provides standardized mechanisms for applications to collect, store, and share context. This lets devices form groups and work together even when they are performing logically separate tasks (i.e., running different applications).
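
To make the discover/request pattern concrete, here is a minimal in-process sketch. The class and method names are invented for illustration; the real framework additionally handles networking, autonomous group formation, and cross-application coordination.

```python
# Minimal in-process sketch of the discover/request pattern that a framework
# like GCF standardizes. All names here are invented for illustration.

class Device:
    def __init__(self, name, providers):
        self.name = name
        self.providers = providers   # context type -> callable producing a value

    def advertise(self):
        return set(self.providers)

    def request(self, context_type):
        return self.providers[context_type]()

class Registry:
    """Stand-in for open discovery: which devices can provide a context type?"""
    def __init__(self, devices):
        self.devices = devices

    def form_group(self, context_type):
        return [d for d in self.devices if context_type in d.advertise()]

phone = Device("phone", {"location": lambda: (40.44, -79.94)})
watch = Device("watch", {"heart_rate": lambda: 72,
                         "location": lambda: (40.44, -79.94)})
thermostat = Device("thermostat", {"temperature": lambda: 21.5})

registry = Registry([phone, watch, thermostat])
group = registry.form_group("location")   # opportunistic group, no prior setup
print([d.name for d in group], "->", [d.request("location") for d in group])
```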

Through the development of GCF, this thesis identifies the conceptual and software abstractions needed to support opportunistic groups in context-aware applications. As part of our design process, we looked at current context-sharing applications, systems, and frameworks, and developed a conceptual model that identifies the most common conditions that cause users/devices to form a group. We then created a framework that supports grouping across this entire model. Through the creation of four prototype systems, we show how the ability to form opportunistic groups of devices can increase users’ and devices’ access to timely information and services. Finally, we had 20 developers evaluate GCF, and verified that the framework supports a wide range of existing and novel use cases. Collectively, this thesis demonstrates the utility of opportunistic groups in context-aware computing, and highlights the critical challenges that need to be addressed to make opportunistic context sharing practical in real-world settings.

Thesis Committee:
Anind Dey (Chair)
Jen Mankoff
Steven Dow
Saul Greenberg

Copy of Thesis Document

Educational games have become an established paradigm of instructional practice; however, there is still much to be learned about how to design games so that they can be the most beneficial to learners. An important consideration when designing an educational game is whether there is good alignment between its content goals and the instructional behaviors it uses to reinforce those goals. What is needed is a better way to define and evaluate this alignment in order to guide the educational game design process. This thesis explores ways to operationalize this concept of alignment and demonstrates an analysis technique that helps educational game designers measure the alignment of both current educational game designs and prototypes of future iterations.

In my work thus far, I have explored the use of replay analysis, which analyzes player experience in terms of in-game replay files rather than traditional analytics data, as a means of capturing gameplay experience for the evaluation of alignment between an educational game’s feedback and its stated goals. The majority of this work has been performed in the context of RumbleBlocks, an educational game that teaches basic structural stability and balance concepts to young children. This work has highlighted that RumbleBlocks likely possesses a misalignment in how it teaches students the concept of designing for a low center of mass. It has also led to suggestions for design iterations in future implementations of the game. This work has shown that replay analysis can be used to evaluate the alignment of an educational game, and it suggests future directions.

In the proposed work, I plan to demonstrate an extension of replay analysis that I call Projective Replay Analysis, which replays recorded student data through new versions of a game in order to evaluate whether alignment has improved. To do this, I plan to implement two forms of projective replay: Literal replay, which replays past player actions through a new game version exactly as they were originally recorded; and Flexible replay, which uses prior player actions as training data for AI player models that then play through a new game version as if they were players. Finally, to assess the validity of this method of game evaluation, I will perform a close-the-loop study with a new population of human play-testers to validate whether the conclusions reached through virtual methods correspond to those reached in a normal playtesting situation.
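
The sketch below illustrates literal projective replay in miniature: recorded player actions are re-run through a new version of the game's scoring rules, and the resulting feedback is compared across versions. The action format and rules are invented placeholders, not RumbleBlocks' actual implementation.

```python
# Sketch of literal projective replay: recorded player actions are re-run
# through a new version of the game's rules and the feedback is recomputed.
# The action format and scoring rules are invented placeholders.

recorded_session = [
    {"action": "place_block", "height": 1, "center_of_mass": 0.2},
    {"action": "place_block", "height": 2, "center_of_mass": 0.6},
    {"action": "quake"},
]

def replay(session, scoring_rules):
    """Re-run recorded actions under a (possibly new) version of the rules."""
    feedback = []
    for step in session:
        if step["action"] == "place_block":
            feedback.append(scoring_rules(step))
    return feedback

# Old version: rewards only low towers. New version: also rewards a low
# center of mass, the concept the earlier studies found to be misaligned.
old_rules = lambda s: s["height"] <= 1
new_rules = lambda s: s["height"] <= 1 or s["center_of_mass"] < 0.4

print("old version feedback:", replay(recorded_session, old_rules))
print("new version feedback:", replay(recorded_session, new_rules))
```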

This work will make contributions to the fields of human-computer interaction, by exploring the benefits and limitations of different replay paradigms for the evaluation of interactive systems; learning sciences, by establishing a novel operationalization of alignment for instructional moves; and educational game design, by providing a model for using Projective Replay Analysis to guide the iterative development of an educational game.

Thesis Committee:
Vincent Aleven (Chair)
Jodi Forlizzi
Jessica Hammer (HCII/ETC)
Sharon Carver (Psychology/PIER)
Jesse Schell (ETC/Schell Games)

Copy of Proposal Document
