Fostering Curiosity Through Peer Support in Collaborative Science Learning

Curiosity is a key motivational factor in learning. It is nevertheless often neglected, especially in large and inner-city classrooms that have become heavily test-oriented. This project focuses on designing learning technologies that foster and maintain curiosity, exploration, and self-efficacy in scientific inquiry. In particular, we are interested in understanding the social factors among peers that evoke students’ desire for new knowledge and encourage knowledge seeking through hands-on, collaborative activities. In this talk, I will present the theoretical foundation of curiosity, followed by our ongoing work on human-human behavior analysis in small-group science learning, and discuss how this theoretical and empirical work leads toward a computational model of curiosity that enables an embodied conversational virtual peer to sense and scaffold curiosity for elementary and middle school students.

Insights from Personalized Learning Product Efficacy Pilots in K-12 Education

With recent advances in technology, adoption of personalized learning practices has greatly increased in K-12 education across the United States. Personalized learning shows great promise in leveling the playing field for underserved students by offering instruction tailored to their specific competencies (Pane et al., 2015). Yet schools face significant hurdles in choosing, from the multitude of options available, educational technology products well aligned with their curricular goals and objectives (Morrison et al., 2014). Existing evidence about educational technology product effectiveness is scarce, and school district leaders struggle to access, validate, and apply findings to their unique settings. In this short seminar, I will present some key findings from product efficacy pilot studies of educational technology products conducted at two school districts in the Pittsburgh area. We used a mixed-methods approach to evaluate product efficacy on four key dimensions — student learning, student engagement, teacher support, and teacher satisfaction. I will highlight the contrasts between the two pilots and describe key factors that led to a successful pilot.

Supporting Orientation and Object Recognition for Blind People

In the field of assistive technology, large-scale user studies are hindered by the fact that potential participants are geographically sparse and longitudinal studies are often time consuming. In our work, we rely on remote usage data to perform large-scale, long-duration behavior analysis on users of iMove, a mobile app that supports the orientation of blind people. Our analysis provides insights into iMove’s user base and can inform decisions for tailoring the app to diverse user groups, developing future improvements of the software, or guiding the design process of similar assistive tools. Another important factor for independent living of blind people is object identification. Blind people often need to identify objects around them, from packages of food to items of clothing. We explore personal object recognizers, where blind people train a mobile application with a few snapshots of objects of interest and provide custom labels. We adopt transfer learning with a deep learning system for user-defined multi-label k-instance classification. Experiments with blind participants demonstrate the feasibility of our approach, which reaches accuracies over 90% for some participants.
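The personal recognizer described above can be thought of as k-instance classification over embeddings produced by a pretrained deep network. As a rough illustrative sketch (not the authors' implementation; the labels and vectors here are stand-ins), a nearest-centroid classifier over such embeddings might look like this:

```python
# Illustrative sketch only: in the actual system a deep network supplies
# the feature embeddings via transfer learning; here a simple
# nearest-centroid classifier over precomputed embedding vectors stands in
# for training a personal recognizer from a few labeled snapshots.
from math import dist  # Euclidean distance (Python 3.8+)

def train(examples):
    """examples: {label: [embedding, ...]} with a few (k) instances per
    user-defined label. Returns one mean embedding (centroid) per label."""
    centroids = {}
    for label, vecs in examples.items():
        n, d = len(vecs), len(vecs[0])
        centroids[label] = [sum(v[i] for v in vecs) / n for i in range(d)]
    return centroids

def predict(centroids, embedding):
    """Assign the user-defined label whose centroid is closest."""
    return min(centroids, key=lambda label: dist(centroids[label], embedding))
```

In a real deployment the embeddings would come from a pretrained network's feature layer, which is what makes a handful of user snapshots per object sufficient.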

Speaker Bios

Zhen Bai is a post-doctoral fellow at the ArticuLab. She leads the Sensing Curiosity in Play and Responding (SCIPR) project, which explores the design space of playful learning environments that foster curiosity, exploration, and self-efficacy in science education. Zhen is passionate about designing innovative interfaces that augment our cognitive, emotional, and social experiences in playful and accessible ways. Her research interests include augmented reality, tangible interfaces, design for children, developmental psychology, education, and computer-supported collaborative work. She received her Ph.D. in Computer Science from the Graphics & Interaction Group at the University of Cambridge in 2015. Her Ph.D. research focused on designing augmented and tangible interfaces that support symbolic play for young children with and without autism spectrum condition.

Soniya Gadgil earned her doctorate in cognitive psychology, with a focus on learning and higher-order cognition, from the University of Pittsburgh in 2014. Her research focuses on applying cognitive science principles to the design, development, implementation, and assessment of educational technology products. In her current role as a post-doctoral researcher at the Learning Media Design Center, Soniya supports three school districts in the Pittsburgh area in conducting product efficacy pilots of educational technologies and implementing rapid-cycle feedback loops with the developers of those technologies.

Hernisa Kacorri is a Postdoctoral Fellow at the Human-Computer Interaction Institute at Carnegie Mellon University. As a member of the Cognitive Assistance Lab she works with Chieko Asakawa, Kris Kitani, and Jeffrey Bigham to help people with visual impairment understand the surrounding world. She recently received her Ph.D. in Computer Science from the Graduate Center CUNY, as a member of the Linguistic and Assistive Technologies Lab at CUNY and RIT, advised by Matt Huenerfauth. Her dissertation focused on developing mathematical models of human facial expressions for synthesizing animations of American Sign Language that are linguistically accurate and easy to understand. As part of the emerging field of human-data interaction, her work lies at the intersection of accessibility, computational linguistics, and applied machine learning. Her research has been supported by the NSF, a CUNY Science Fellowship, and a Mina Rees Dissertation Fellowship in the Sciences. During her Ph.D., Hernisa was also a research intern with the Accessibility Research Group at IBM Research – Tokyo (2013) and the Data Science and Technology Group at Lawrence Berkeley National Lab (2015).

Seminar Video

Drawing from her background in games, theatre, and entertainment technology, Brooke White will examine the gaming industry and how humans, computers, and systems interact.

Brooke White is the Senior Director of UX Research at Yahoo, covering all consumer products as well as advertising platforms and services. Her previous position was Senior Manager of Games User Research at Disney Interactive. Brooke started and led user research practices at three different companies: Yahoo, Disney, and Volition/THQ. She has decades of experience spanning research, marketing, and production in desktop, console, and mobile games.

Faculty Hosts: Justine Cassell, Brad Myers

Reception to Follow

Over the past five years, my group—and probably many of you—have experienced a dramatically increased ability to do Design at Large: creating research that is widely used by real people and learning a ton from the experience. One shift that happens when we move from designing artifacts in the lab to designing experiences at large is that inevitably, what we end up studying are complex sociotechnical systems. A lot of the behavior is emergent, and sometimes completely unexpected. The successes in this new world are tremendously exciting, but like all creative endeavors, there are lots of failures. One contributing factor is that designers often receive guidance that’s based on faith rather than insight. We may be able to do better by building up a body of knowledge through design at large. In this talk, I’ll try to distill some insights into this shift. I’ll draw on examples from research by my group and others, as well as my students’ and colleagues’ experiences with startups.


Speaker Bio
Scott Klemmer is an Associate Professor of Cognitive Science and Computer Science & Engineering at UC San Diego, where he is a co-founder and co-director of the Design Lab. He previously served as Associate Professor of Computer Science at Stanford, where he co-directed the HCI Group, held the Bredt Faculty Scholar chair, and was a founding participant in the d.school. Scott has a dual BA in Art-Semiotics and Computer Science from Brown (with Graphic Design work at RISD), and a PhD in CS from Berkeley. His former graduate students are leading professors (at Berkeley, CMU, UCSD, & UIUC), researchers (Google & Adobe), founders (including Instagram & Pulse), social entrepreneurs, and engineers. He helped introduce peer assessment to online education, and created the first such online course. More than 200,000 have signed up for his interaction design class & specialization.

He has been awarded the Katayanagi Emerging Leadership Prize, Sloan Fellowship, NSF CAREER award, and Microsoft Research New Faculty Fellowship. Nine of his papers were awarded best paper or honorable mention at top HCI venues. He is on the editorial board of HCI and TOCHI; was program co-chair for UIST, the CHI systems area, and HCIC; and serves on the Learning at Scale steering committee. He advises university design programs globally. Organizations worldwide use his group’s open-source design tools and curricula.

View the LiveStream

Multiple social planes (i.e., individual, collaborative, whole-class) are often used within the same learning activity in the classroom. However, the presence of multiple social planes does not automatically guarantee a more productive learning experience for students. It is important to adapt to student needs in a way that aligns students’ social interactions with the goals of the learning phase. In my work, I focus on when it is productive for students to work individually or collaboratively and how this is best supported in the classroom. Although some current computer-supported collaborative learning systems combine collaborative and individual learning, most are non-adaptive and require advance specification of collaborative and individual phases for the entire class. To orchestrate the adaptive use of collaborative and individual learning within the classroom, both student goals and teacher goals need to be supported. Current orchestration systems support multiple social planes at the class level and do not support teachers and students in an adaptive classroom where students may encounter collaborative and individual phases of a learning activity at different times.

In my prior work, I studied the complementary strengths of collaborative and individual learning, a question that remains open. To investigate it, I ran multiple studies both in and out of the classroom with elementary school students working on a fractions intelligent tutoring system (ITS) that was extended to support collaborative problem solving. My prior work showed that elementary school students can be effectively supported through collaborative ITSs and that a combination of collaborative and individual learning may be more beneficial to students than either one alone. These results suggest that there is benefit to adapting collaborative and individual learning to student characteristics.

To encourage classroom adoption of a system that adapts collaborative and individual learning to student characteristics, support is needed for teachers in addition to students. My proposed work focuses on providing teacher support for real-time classroom management, aligned with teacher goals, through an orchestration system that allows collaborative and individual learning to be adapted to student needs. Within an orchestration system, there is a delicate balance between system automation (which may ignore teacher goals) and teacher autonomy (which may impose too much cognitive load). Specifically, I will conduct a requirements analysis with teachers to understand the support they need from an orchestration system to meet their goals while minimizing cognitive load. Additionally, I will develop and test a prototype of an orchestration system that supports adaptive use of collaborative and individual learning. This work will contribute to the existing literature by advancing our knowledge of how collaborative and individual learning can be adapted to support student learning, and by supporting teachers with an orchestration system that enables such adaptive use.

Thesis Proposal Committee:
Vincent Aleven (Co-Chair)
Nikol Rummel (Co-Chair, Psychology/HCII)
John Zimmerman
Pierre Dillenbourg (EPFL)

Copy of Proposal Document

Embodied empathic agents are characters that, by their actions and behaviours, are able to show empathy (or not) for other characters; and/or characters that, by their appearance, situation, and behaviour, are able to trigger empathic reactions in the user.

In this talk we discuss the theory behind Embodied Empathic Agents, whether graphical or robotic: what we can do now, and what open research questions remain. What are the key theoretical and technological advances already made, and which are still needed? Are Empathic Agents good in tutorial applications, or are there problematic issues? How do we know?

We use the recent EMOTE project, which involved building an empathic robot tutor, as a case study in exploring these issues.

Professor Ruth Aylett researches affective agent architectures, intelligent graphical characters, and social robots, among other topics. She has led and taken part in a succession of mostly European-funded projects in these areas, with a number of educational applications, such as the FearNot! anti-bullying system, the Traveller system for teaching inter-cultural sensitivity, and the EMOTE empathic robot tutor. She has implemented a number of cognitively inspired architectures and explores middle-out approaches in which goal-directed and sensor-directed components work together to combine direction with responsiveness in intelligent agents. She has around 250 peer-reviewed publications in these areas.

Social capital is a construct describing the resources one can draw from social network connections. This talk will describe social capital as a concept, and the ways it has been described, operationalized, and designed in HCI research. Using Paul Resnick’s classic “Bowling Together” article as a frame, we will look at how HCI researchers have described the affordances of computing systems that might promote new forms of social capital, and then dive deeply into recent research by Lampe and colleagues on the use of Facebook for building social capital through active engagement.

Cliff Lampe is an Associate Professor in the School of Information at the University of Michigan. His research examines the positive outcomes of interaction in online communities, ranging from development of interpersonal relationships, to nonprofit collective action, to new forms of civic engagement. His work on Facebook and social capital has been heavily cited in a range of disciplines. Dr. Lampe serves as the Vice President of Publications for SIGCHI, the Technical Program Chair for CHI2017, and the Steering Committee Chair for the CSCW community.

Faculty Host: Robert Kraut

I. Michael Eagle
Predicting Individual Differences for Learner Modeling in Intelligent Tutors from Previous Learner Activities

This study examines how accurately individual student differences in learning can be predicted from prior student learning activities. Bayesian Knowledge Tracing (BKT) predicts learner performance well and has often been employed to implement cognitive mastery. Standard BKT individualizes parameter estimates for knowledge components, but not for learners. Studies have shown that individualizing parameters for learners improves the quality of BKT fits and can lead to very different (and potentially better) practice recommendations. These studies typically derive best-fitting individualized learner parameters from learner performance in existing data logs, making the methods difficult to deploy in actual tutor use. In this work, we examine how well BKT parameters in a tutor lesson can be individualized based on learners’ prior performance in reading instructional text, taking a pretest, and completing an earlier tutor lesson. We find that best-fitting individual difference estimates do not directly transfer well from one tutor lesson to another, but that predictive models incorporating variables extracted from prior reading, pretest and tutor activities perform well, when compared to a standard BKT model and a model with best-fitting individualized parameter estimates.
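For context, standard BKT estimates a probability that the learner knows each knowledge component and updates it after every observed response using four parameters (initial knowledge, learn, guess, slip); individualizing the model amounts to fitting some of these parameters per learner rather than only per knowledge component. A minimal generic sketch of the per-step update (not the study's code):

```python
def bkt_update(p_know, correct, guess, slip, learn):
    """One standard BKT step: Bayesian posterior on 'learner knows the
    skill' given the observed response, then the learning transition.
    Generic sketch of the textbook update, not the study's implementation."""
    if correct:
        # Correct responses come from knowing (and not slipping) or guessing.
        posterior = (p_know * (1 - slip)) / (
            p_know * (1 - slip) + (1 - p_know) * guess)
    else:
        # Incorrect responses come from slipping or not knowing (and not guessing).
        posterior = (p_know * slip) / (
            p_know * slip + (1 - p_know) * (1 - guess))
    # Probability `learn` of acquiring the skill before the next opportunity.
    return posterior + (1 - posterior) * learn
```

Cognitive mastery is then typically implemented by practicing a knowledge component until the tracked probability crosses a threshold (commonly 0.95).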

II. Swarup Kumar Sahoo
Managing Privacy of Mobile Applications

Mobile applications today are highly privacy-invasive. Apps request a lot of private and sensitive data without properly informing users about how it will be used. We are building various tools and techniques, as part of a privacy-enhanced Android, to give users full control over their private data. One main focus of our work is making the purposes of private data use an essential part of apps and using them to detect and prevent potential privacy issues. Our current approach is to have apps explicitly declare the purpose for which sensitive data is being used, and then use static/dynamic program analysis and machine learning techniques to check and enforce those purposes. We are also building new kinds of user interfaces that leverage purposes, and using crowdsourcing techniques to help users make informed decisions about configuring their privacy preferences for various apps.

III. Ran Liu
Bridging the Gap Between Educational Data Mining and Improved Classroom Instruction

The increasing use of educational technologies in classrooms is producing vast amounts of process data that capture rich information about learning as it unfolds. The field of Educational Data Mining (EDM) has made great progress in using the information present in log data to build models that improve instruction and advance the science of learning. However, there have been some limitations. The data used to produce such models have frequently been limited to the actions that educational technologies themselves can log. A major challenge in incorporating more contextually rich data streams into models of learning is collecting and integrating data from different sources and at different grain sizes. In my first talk, I will present methodological advances we have made in automating the integration of log data with additional multi-modal (e.g., audio, screen video, webcam video) data streams. I will also show a case study of how including the multi-modal streams in data analysis can improve the predictive fit of student models and yield important pedagogical implications. More broadly, this work represents an advance in integrating rich qualitative details of students’ learning contexts into the quantitative approaches of EDM research. Another limitation of EDM research thus far is that findings remain largely theoretical with respect to their impact on learning outcomes and efficiency. The most important, rigorous, and firmly grounded evaluation of a data-driven discovery is whether it leads to modifications to education that produce better student learning. Such an evaluation has been referred to as "closing the loop" (e.g., Koedinger et al., 2013), as it completes the cycle of system design, deployment, data analysis, and discovery leading back to design. In my second talk, I will present new results that “close the loop” (via a classroom-implemented randomized controlled trial) on a data-driven, machine-automated method of improving knowledge component models.

Using Passively Collected Sedentary Behavior to Predict Hospital Readmission

Hospital readmissions are a major problem facing health care systems today, costing Medicare alone US$26 billion each year. Being readmitted is associated with significantly shorter survival and is often preventable. Predictors of readmission are still not well understood, particularly those under the patient’s control: behavioral risk factors. Our work evaluates the ability of behavioral risk factors, specifically Fitbit-assessed behavior, to predict readmission for 25 postsurgical cancer inpatients. Our results show that the sum of steps, maximum sedentary bout length, and the frequency of breaks in sedentary time during waking hours are strong predictors of readmission. We built two models for predicting readmissions: a Steps-only model, and a Behavioral model that adds information about sedentary behaviors. The Behavioral model (88.3%) outperforms the Steps-only model (67.1%), illustrating the value of passively collected information about sedentary behaviors. Indeed, passive monitoring of behavior data, i.e., mobility, after major surgery creates an opportunity for early risk assessment and timely interventions.
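As a rough illustration of the kind of predictors involved (a hypothetical sketch, not the study's exact feature definitions), such sedentary-behavior features can be derived from a minute-level step series recorded by a fitness tracker:

```python
def sedentary_features(steps_per_minute, threshold=0):
    """Hypothetical sketch: derive readmission-model features from a
    minute-level step series: total steps, longest sedentary bout
    (consecutive minutes at or below `threshold` steps), and breaks in
    sedentary time (sedentary-to-active transitions)."""
    total, longest, run, breaks = 0, 0, 0, 0
    prev_sedentary = False
    for steps in steps_per_minute:
        total += steps
        sedentary = steps <= threshold
        if sedentary:
            run += 1
            longest = max(longest, run)
        else:
            if prev_sedentary:
                breaks += 1  # a break in sedentary time
            run = 0
        prev_sedentary = sedentary
    return {"total_steps": total,
            "max_sedentary_bout": longest,
            "sedentary_breaks": breaks}
```

Features like these, computed over a patient's waking hours, would then feed a standard classifier to produce a readmission risk estimate.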

Dr. Sangwon Bae is a postdoctoral researcher in the Human–Computer Interaction Institute at Carnegie Mellon University. She earned her Ph.D. in Cognitive Science and Engineering from Yonsei University and worked at SK and Samsung before returning to academia. Her research has focused on using smartphones and wearable trackers to understand human perception and to develop models of behavior based on mobile systems. The goal of her work is to examine the feasibility and acceptability of collecting continuously sensed contextual information and active patient-reported symptom reports, and to use these types of information to develop algorithms that accurately predict behavior for use in health monitoring and treatment delivery. In this talk, she will present a recently accepted paper, “Using Passively Collected Sedentary Behavior to Predict Hospital Readmission” (UbiComp 2016), which presents for the first time a machine-learning model that uses only passively sensed behavioral data from an off-the-shelf wearable fitness tracker to accurately predict 30-day hospital readmissions for postsurgical cancer patients.

Accelerating innovation with computational analogy: Challenges and new solutions

Ideas from research papers in a different domain can trigger creative breakthroughs. But most papers outside of one’s domain are not useful: the ones that trigger breakthroughs are analogically related to the target domain (e.g., share problems/solutions). To help people find useful papers outside of their domain, we need to build computational systems that can reason by analogy. In this talk, I will argue that the central challenges are twofold: 1) analogical reasoning requires structured representations, and 2) automatically transforming the unstructured text of papers into analogy-ready structured representations is hard. I will then describe our ongoing efforts to create a system that extracts structured representations from scientific papers, leveraging the complementary strengths of machine learning and crowdsourcing.

Joel Chan is a Postdoctoral Research Fellow in the Human-Computer Interaction Institute at Carnegie Mellon University. He received his PhD in Cognitive Psychology from the University of Pittsburgh in 2014. Joel's research integrates cognitive science and human-computer interaction to understand and improve technological support for creative and collective intelligence. His work has been recognized with a Best Paper Award at the ASME Design Theory and Methodology conference and a Best Paper of the Year award from the Design Studies journal, and has been supported by an NSF Doctoral Dissertation Improvement Grant.

Linking Dialogue with Student Modeling to Create an Enhanced Micro-adaptive Tutoring System

The learning process and its outcomes depend greatly on the social interaction between teachers and students and, in particular, on the proficient and focused use of language in written text or discussions. Our overarching goal in this project is to better understand how to make automated tutorial dialogues effective and adaptive to student characteristics, such as prior knowledge. The specific goal of our current project is to develop an adaptive, natural-language tutoring system, driven by a student model, which can effectively carry out reflective conversations with students after they solve physics problems. Toward this end, we continue our work in identifying linguistic features of tutoring that predict learning gains, and extend it by characterizing the “level of support” to provide to students based on their current level of understanding of particular physics concepts and principles, as dynamically captured by the student model.

In this talk, I will describe the features of dialogic discourse underlying “level of support” that we have identified through the analysis of human-to-human tutorial dialogues, as well as the construction and application of a coding scheme for the characterization of the “level of support”. I will present initial teacher feedback on dialogues that apply these features to coach students at different levels. I will also discuss how this line of research affects the authoring of tutorial dialogues used by an intelligent tutoring system for students who exhibit different levels of understanding.

Since April 2016 Irene-Angelica Chounta has held a post-doctoral researcher position in the Human-Computer Interaction Institute, Carnegie Mellon University. She works with Patricia Albacete, Pamela Jordan, Sandra Katz (Learning Research and Development Center, University of Pittsburgh) and Bruce McLaren (HCII, CMU) on a joint project between CMU and the University of Pittsburgh that aims to develop a student model to support a physics tutorial dialogue system.


The emergence of low-cost fabrication technology (most notably 3D printing) has brought us the dawn of making, promising to empower everyday users with the ability to fabricate physical objects of their own design. However, the technology itself is innately oblivious to the physical world—things are, in most cases, assumed to be printed from scratch, in isolation from the real-world objects they will be attached to and function with.

To bridge this ‘gulf of fabrication’, my thesis research focuses on developing fabrication techniques, with tool integration, that enable users to expressively create designs that can be attached to and function with existing real-world objects. Specifically, my work explores techniques that leverage the 3D printing process to create attachments directly over, onto, and around existing objects; a design tool further enables people to specify and generate adaptations that can be attached to and mechanically transform existing objects in user-customized ways; a mixed-initiative approach allows people to create functionally valid designs that address real-world relationships with other objects; finally, by situating the fabrication environment in the real world, a suite of virtual tools would allow users to design, make, assemble, install, and test physical objects in situ, directly within the context of their usage.

Overall, my thesis aims to make fabrication real—innovation in design tools that harnesses fabrication technology, enabling things to be made by real people, to address real usage, and to function with real objects in the world.

Thesis Committee:
Scott Hudson (Co-Chair)
Stelian Coros (Co-Chair)
Jodi Forlizzi
Tovi Grossman (Autodesk Research)

Copy of Proposal Document

Tangible interfaces and mixed-reality environments have potential to bring together the advantages of physical and virtual environments to improve children’s learning and enjoyment. However, there are too few controlled experiments that investigate whether interacting with physical objects in the real world accompanied by interactive feedback may actually improve student learning compared to flat-screen interaction. Furthermore, we do not have a sufficient empirical basis for understanding how a mixed-reality environment should be designed to maximize learning and enjoyment for children.

I created EarthShake, a mixed-reality game that bridges physical and virtual worlds via a Kinect depth camera and a specialized computer vision algorithm to help children learn physics. I have conducted three controlled experiments with EarthShake that identify features that are more and less important to student learning and enjoyment. The first experiment examined the effects of observing physical phenomena and of collaboration (pairs versus solo), while the second replicated the effect of observing physical phenomena while also testing whether adding simple physical control, such as shaking a tablet, improves learning and enjoyment. The experiments revealed that observing physical phenomena in the context of a mixed-reality game leads to significantly more learning (5 times more) and enjoyment compared to equivalent screen-only versions, while adding simple physical control or changing group size (solo versus pairs) does not have significant effects. Furthermore, gesture analysis provides insight into why experiencing physical phenomena may enhance learning.

My thesis work further investigates which features of a mixed-reality system yield better learning and enjoyment, especially given the limited experimental results from other mixed-reality learning research. Most mixed-reality environments, including tangible interfaces, currently emphasize open-ended exploration and problem solving, and are claimed to be most effective when used in a discovery-learning mode with minimal guidance. I investigated how critical interactive guidance and feedback are to learning and enjoyment in the context of EarthShake. In a third experiment, I compared the learning and enjoyment outcomes of children interacting with a version of EarthShake that supports guided discovery, a version that supports exploration in discovery-learning mode, and a version that combines both. The results reveal that the Guided-discovery and Combined conditions, in which children engage in guided-discovery activities with a predict-observe-explain cycle and interactive feedback, yield better explanation and reasoning. Thus, guided discovery in a mixed-reality environment helps children formulate explanatory theories. However, the results also suggest that children are better able to activate explanatory theory in action when the guided-discovery activities are combined with exploratory activities in the mixed-reality system. Adding exploration to guided-discovery activities not only fosters better learning of the balance/physics principles, but also better application of those principles in a hands-on, constructive problem-solving task.

My dissertation contributes to the literature on the effects of physical observation and mixed-reality interaction on students’ science learning outcomes. Specifically, I have shown that a mixed-reality system (i.e., one combining physical and virtual environments) can lead to superior learning and enjoyment outcomes compared to screen-only alternatives, based on different measures. My work also contributes to the literature on exploration and guided-discovery learning by demonstrating that guided-discovery activities in a mixed-reality setting can improve children’s learning of fundamental principles by helping them formulate explanations. It also shows that combining an engineering approach with scientific thinking practice (by combining exploration and guided-discovery activities) can lead to better engineering outcomes, such as transfer to constructive hands-on activities in the real world. Lastly, from a design perspective, my work contributes a new mixed-reality educational system that bridges physical and virtual environments to improve children’s learning and enjoyment collaboratively, fostering productive dialogue and scientific curiosity in museum and school settings, developed through an iterative design methodology to ensure effective learning and enjoyment outcomes in these settings.

Thesis Committee:
Kenneth Koedinger (Co-Chair, HCII/Psych)
Scott Hudson (Co-Chair, HCII)
Jessica Hammer (HCII/ETC)
Kevin Crowley (LRDC, University of Pittsburgh)

Copy of Thesis Document

