Tangible interfaces and mixed-reality environments have the potential to bring together the advantages of physical and virtual environments to improve children’s learning and enjoyment. However, there are few controlled experiments investigating whether interacting with physical objects in the real world, accompanied by interactive feedback, actually improves student learning compared to flat-screen interaction. Furthermore, we do not have a sufficient empirical basis for understanding how a mixed-reality environment should be designed to maximize learning and enjoyment for children.
I created EarthShake, a mixed-reality game that bridges physical and virtual worlds via a Kinect depth camera and a specialized computer vision algorithm to help children learn physics. I have conducted three controlled experiments with EarthShake that have identified features that are more and less important to student learning and enjoyment. The first experiment examined the effects of observing physical phenomena and of collaboration (pairs versus solo), while the second replicated the effect of observing physical phenomena while also testing whether adding simple physical control, such as shaking a tablet, improves learning and enjoyment. The experiments revealed that observing physical phenomena in the context of a mixed-reality game leads to significantly more learning (5 times more) and enjoyment compared to equivalent screen-only versions, while adding simple physical control or changing group size (solo versus pairs) has no significant effect. Furthermore, gesture analysis provides insight into why experiencing physical phenomena may enhance learning.
My thesis work further investigates which features of a mixed-reality system yield better learning and enjoyment, especially given the limited experimental results from other mixed-reality learning research. Most mixed-reality environments, including tangible interfaces, currently emphasize open-ended exploration and problem solving, and are claimed to be most effective when used in a discovery-learning mode with minimal guidance. I investigated how critical interactive guidance and feedback are to learning and enjoyment in the context of EarthShake. In a third experiment, I compared the learning and enjoyment outcomes of children interacting with a version of EarthShake that supports guided discovery, a version that supports exploration in discovery-learning mode, and a version that combines guided discovery and exploration. The results reveal that the Guided-discovery and Combined conditions, in which children engage in guided-discovery activities with a predict-observe-explain cycle and interactive feedback, yield better explanation and reasoning. Thus, guided discovery in a mixed-reality environment helps children formulate explanatory theories. However, the results also suggest that children are better able to activate explanatory theory in action when the guided-discovery activities are combined with exploratory activities in the mixed-reality system. Adding exploration to guided-discovery activities not only fosters better learning of the balance/physics principles, but also better application of those principles in a hands-on, constructive problem-solving task.
My dissertation contributes to the literature on the effects of physical observation and mixed-reality interaction on students’ science learning outcomes. Specifically, I have shown that a mixed-reality system (i.e., one combining physical and virtual environments) can lead to better learning and enjoyment outcomes than screen-only alternatives, across different measures. My work also contributes to the literature on exploration and guided-discovery learning by demonstrating that guided-discovery activities in a mixed-reality setting can improve children’s learning of fundamental principles by helping them formulate explanations. It also shows that combining an engineering approach with scientific thinking practice (by combining exploration and guided-discovery activities) can lead to better engineering outcomes, such as transfer to constructive hands-on activities in the real world. Lastly, my work contributes from a design perspective: through an iterative design methodology, I created a new mixed-reality educational system that bridges physical and virtual environments to improve children’s learning and enjoyment in a collaborative way, fostering productive dialogue and scientific curiosity in museum and school settings.
Kenneth Koedinger (Co-Chair, HCII/Psych)
Scott Hudson (Co-Chair, HCII)
Jessica Hammer (HCII/ETC)
Kevin Crowley (LRDC, University of Pittsburgh)
People generate vast quantities of digital information as a product of their interactions with digital systems and with other people. As this information grows in scale and becomes increasingly distributed through different accounts, identities, and services, researchers have studied how best to develop tools to help people manage and derive meaning from it. Looking forward, these issues acquire new complexity when considered in the context of the information that is generated across one’s life or across generations. The long-term lens of a multigenerational timeframe elicits new questions about how people can engage with these heterogeneous collections of information and how future generations will manage and make sense of the information left behind by their ancestors.
My prior work has examined how people perceive the role that systems will play in the long-term availability, management, and interpretation of digital information. This work demonstrates that while people certainly ascribe meaning to aspects of their digital information and believe that there is value held in their largely uncurated digital materials, it is not clear how or if that digital information will be transmitted, interpreted, or maintained by future generations.
Building on that earlier work, my dissertation work investigates how we can develop systems that foster engagement with lifetimes or generations of digital information in ways that are sensitive to how people define and communicate their identity and how they reflect on their life and experiences. In addition, this work highlights the ways in which people engage with memories, artifacts, and experiences of people who have passed away and considers how digital systems and information can support those practices. In so doing, this work contributes a better understanding of how digital systems, and the digital information people create over the course of their lives, intersect with the processes of death, dying, and remembrance.
Jodi Forlizzi (Co-Chair)
Aisling Kelliher (Co-Chair, Virginia Tech)
Laura Dabbish (HCII/Heinz)
Dan Cosley (Cornell University)
What do wearable computers and digital fabrication have in common? They are both readily available but remain difficult to use. In this talk, I will discuss the work in the Future Everyday Technology Research Lab (FETLab) around increasing the speed of interaction with these devices: for mobile devices, from seconds to sub-second speeds, and with fabrication devices from hours to minutes.
Dr. Daniel Ashbrook is an Assistant Professor in the Golisano College of Computing and Information Sciences at the Rochester Institute of Technology. He earned his B.S., M.S., and Ph.D. in Computer Science from the Georgia Institute of Technology and worked at Nokia Research and Samsung before returning to academia. He founded and directs the Future Everyday Technology Research Lab (FETLab). His research focuses on new interaction techniques, devices, and applications, most recently for helping non-experts more easily understand and use digital fabrication technology. He also conducts research into non-obtrusive interaction techniques for wearable and mobile computing devices.
Faculty Host: Jen Mankoff
Contact Marian if you plan to attend.
Context-aware computing utilizes information about users and/or their environments in order to provide relevant information and services. To date, however, most context-aware applications only take advantage of contexts that can either be produced on the device they are running on, or on external devices that are known beforehand. While there are many application domains where sharing context is useful and/or necessary, creating these applications is currently difficult because there is no easy way for devices to share information without 1) being explicitly directed to do so, or 2) some form of advance user coordination (e.g., sharing credentials and/or IP addresses, installing and running the same software). This makes these techniques useful when the need to share context is known a priori, but impractical for the one-time, opportunistic encounters which make up the majority of users’ lives.
To address this problem, this thesis presents the Group Context Framework (GCF), a software framework that allows devices to form groups and share context with minimal prior coordination. GCF lets devices openly discover and request context from each other. The framework then lets devices intelligently and autonomously form opportunistic groups and work together without requiring either the application developer or the user to know of these devices beforehand. GCF supports use cases where devices only need to share information once or spontaneously. Additionally, the framework provides standardized mechanisms for applications to collect, store, and share context. This lets devices form groups and work together, even when they are performing logically separate tasks (i.e., running different applications).
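The core idea of opportunistic grouping can be illustrated with a minimal sketch: devices advertise the context types they can provide, and a requester forms a group from whichever nearby devices can satisfy its needs, with no prior coordination. All class and method names below are invented for illustration and are not GCF’s actual API.

```python
# Illustrative sketch of opportunistic context sharing.
# All names here are hypothetical, NOT the actual GCF API.

class Device:
    def __init__(self, name, provides):
        self.name = name
        self.provides = set(provides)  # context types this device can share

    def advertise(self):
        # In a real system this would be broadcast over the network.
        return {"device": self.name, "context_types": self.provides}

def form_group(requested_types, nearby_devices):
    """Form an opportunistic group: select nearby devices that can
    supply at least one requested context type, without shared
    credentials or any advance coordination."""
    group = []
    for dev in nearby_devices:
        ad = dev.advertise()
        if ad["context_types"] & set(requested_types):
            group.append(dev)
    return group

# Devices encountered by chance, each running its own application.
phone = Device("phone", ["location", "activity"])
watch = Device("watch", ["heart_rate"])
thermostat = Device("thermostat", ["temperature"])

group = form_group(["location", "temperature"], [phone, watch, thermostat])
print([d.name for d in group])  # ['phone', 'thermostat']
```

The sketch captures only the discovery-and-grouping step; GCF itself additionally standardizes how the collected context is stored and delivered to applications.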
Through the development of GCF, this thesis identifies the conceptual and software abstractions needed to support opportunistic groups in context-aware applications. As part of our design process, we looked at current context sharing applications, systems, and frameworks, and developed a conceptual model that identifies the most common conditions that cause users/devices to form a group. We then created a framework that supports grouping across this entire model. Through the creation of four prototype systems, we show how the ability to form opportunistic groups of devices can increase users’ and devices’ access to timely information and services. Finally, we had 20 developers evaluate GCF, and verified that the framework supports a wide range of existing and novel use cases. Collectively, this thesis demonstrates the utility of opportunistic groups in context-aware computing, and highlights the critical challenges that need to be addressed to make opportunistic context sharing practical in real-world settings.
Anind Dey (Chair)
Educational games have become an established paradigm of instructional practice; however, there is still much to be learned about how to design games so that they can be the most beneficial to learners. An important consideration when designing an educational game is whether there is good alignment between its content goals and the instructional moves it makes to reinforce those goals. What is needed is a better way to define and evaluate this alignment in order to guide the educational game design process. This thesis explores ways to operationalize this concept of alignment and demonstrates an analysis technique that helps educational game designers measure the alignment of both current educational game designs as well as prototypes of future iterations.
In my work thus far, I have explored the use of replay analysis, which analyzes player experience in terms of in-game replay files rather than traditional analytics data, as a means of capturing gameplay experience for the evaluation of alignment between an educational game’s feedback and its stated goals. The majority of this work has been performed in the context of RumbleBlocks, an educational game that teaches basic structural stability and balance concepts to young children. This work has highlighted that RumbleBlocks likely possesses a misalignment in how it teaches the concept of designing for a low center of mass. It has also led to suggestions of design iterations for future implementations of the game. This work has shown that replay analysis can be used to evaluate the alignment of an educational game and suggests future directions.
In the proposed work, I plan to demonstrate an extension of replay analysis that I call Projective Replay Analysis, which uses recorded student replay data in new versions of the game in order to evaluate whether alignment has improved. To do this, I plan to implement two forms of projective replay: Literal replay, which replays past player actions through a new game version exactly as they were originally recorded; and Flexible replay, which uses prior player actions as training data for AI player models, which then play through a new game version as if they were players. Finally, to assess the validity of this method of game evaluation, I will perform a close-the-loop study with a new population of human play testers to validate whether the conclusions reached through virtual methods correspond to those reached in a normal playtesting situation.
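The literal form of projective replay can be sketched as feeding an unchanged action log through two versions of a game simulation and comparing the outcomes. The toy game and all names below are hypothetical stand-ins for illustration, not RumbleBlocks code.

```python
# Illustrative sketch of "literal" projective replay: recorded player
# actions are applied unchanged to a new game version, and the new
# outcomes are compared against the old. All names are hypothetical.

def literal_replay(recorded_actions, game):
    """Apply each recorded action to a game simulation and collect
    the sequence of resulting states."""
    states = []
    for action in recorded_actions:
        game.apply(action)
        states.append(game.state())
    return states

class TowerGame:
    """Toy stand-in for a physics game: stacking blocks raises the
    tower, which topples once it exceeds a height limit."""
    def __init__(self, height_limit):
        self.height = 0
        self.height_limit = height_limit
        self.toppled = False

    def apply(self, action):
        if action == "stack":
            self.height += 1
            if self.height > self.height_limit:
                self.toppled = True

    def state(self):
        return {"height": self.height, "toppled": self.toppled}

# Replaying the same log through two versions shows how a design
# change (here, a stricter height limit) alters player outcomes.
log = ["stack", "stack", "stack"]
old = literal_replay(log, TowerGame(height_limit=3))
new = literal_replay(log, TowerGame(height_limit=2))
print(old[-1]["toppled"], new[-1]["toppled"])  # False True
```

Flexible replay would replace the fixed `log` with actions generated by a player model trained on prior logs, so that the simulated players can react to the redesigned game rather than blindly repeating old inputs.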
This work will make contributions to the fields of human-computer interaction, by exploring the benefits and limitations of different replay paradigms for the evaluation of interactive systems; learning sciences, by establishing a novel operationalization of alignment for instructional moves; and educational game design, by providing a model for using Projective Replay Analysis to guide the iterative development of an educational game.
Vincent Aleven (Chair)
Jessica Hammer (HCII/ETC)
Sharon Carver (Psychology/PIER)
Jesse Schell (ETC/Schell Games)
Please join us for the final presentations of the BHCI undergraduate capstone projects, including:
1:00 PM — Collaboration U.
— Modules of an OLI course to teach collaboration skills
1:30 PM — Threat
— A dashboard to help security analysts cope with a deluge of intelligence reports
2:00 PM — Earthlapse
— A museum exhibit to show the effects of global warming through satellite imagery
2:30 PM — Artbytes
— A mobile app to build virtual reality art exhibits
3:00 PM — Virtual Agents
— Ways for a virtual agent to collaborate with kids over 3D objects
3:30 PM — Steelers
— An application to teach and test players learning a dynamic football playbook
How do people living in the midst of war use social media, and what can we learn from them to design the next generation of news technologies? In this presentation, I start by narrating how residents of cities afflicted by the Mexican Drug War use social media to circumvent censorship imposed by powerful drug cartels. I show how people have created effective alert networks to generate real-time reports of violent events, and how some individuals have emerged as a new type of “war correspondent.” I end by presenting a number of civic tech systems we have developed inspired by this research.
Andrés Monroy-Hernández is a researcher at Microsoft Research, and an affiliate professor at the University of Washington. His work focuses on the design and study of social computing systems for large-scale collaboration. His research has received best paper awards at CHI, CSCW, ICWSM, and HCOMP, been recognized at Ars Electronica, and been featured in The New York Times, The Guardian, NPR, and Wired. Andrés was named one of the TR35 Innovators by the MIT Technology Review (Spanish), and one of CNET's influential Latinos in Tech. He holds a Ph.D. from the MIT Media Lab, where he led the creation of the Scratch Online Community website.
In his current research project, “Primordial,” Mickey McManus and his team are exploring the impact on design when three inevitable technology trends converge. Often called the “Internet of Things,” pervasive computing is a game-changer that's on a collision course with two complementary trends—digital manufacturing and machine learning.
In 2012, Mickey co-authored one of the essential field guides to the era of pervasive computing in his book, Trillions. He believes that these three trends, taken together, give us the ability to shift to an entirely new set of design and business paradigms for the first time in our history. The way we design for things, when they begin to wake up, is uncharted territory. If we don’t take into account our connected future and continue to design for disconnected things, we will design our way into irrelevance. The challenge we as designers face is how we surf these trends, what we do about them, and how the act of designing "things" will change. Those of us who figure it out sooner rather than later will have an unfair advantage, while others will be reactionary and surprised at each turn of the screw.
Mickey and his team hope some of us can not only survive the riptide, but also harness its power for good. Please join Mickey in a discussion about Primordial, ecological design and the nature of things.
Mickey McManus is a research fellow at Autodesk in the Office of the CTO, and Principal & Chairman of the board at MAYA Design, a design consultancy and innovation lab. He's a pioneer in the fields of pervasive computing, collaborative innovation, human-centered design and education. Mickey holds nine patents in the area of connected products, vehicles and services, and spearheaded the launch of MAYA's Pervasive Computing practice to help companies kick-start innovation around business challenges in a vastly connected world - where computing devices outnumber people.
In 2012, he coauthored the book Trillions: Thriving in the Emerging Information Ecology (Wiley) — a field guide to the future, when computing will be freely accessible in the ambient environment. Trillions was awarded the Axiom Gold Award in 2013 for best business book about technology and the 2013 Carnegie Science Award in the Science Communicator category.
Mickey speaks frequently about pervasive computing, design, and business innovation. He has lectured at Carnegie Mellon University, Illinois Institute of Technology, LUMA Institute, MIT, Princeton, University of Illinois, UC Berkeley, and UCLA. His work has been published in Bloomberg Businessweek, Fortune, Fast Company, The Wall Street Journal, and The Harvard Business Review.
Faculty Host: Jim Morris
Modern tourists visiting new cities are not content to simply stay in a hotel downtown and see famous sights. They want to get out into the neighborhoods of the city they are visiting and understand more of the city’s culture and everyday life. However, current guides remain focused on statistics and points of interest, so tourists are unable to understand and find neighborhoods they would enjoy.
I propose to build neighborhood guides based on social media posts to help people understand neighborhoods. These guides will have two parts: first, they will allow comparison between neighborhoods in a new city and neighborhoods they know; second, they will add context so travelers can understand why the neighborhoods are similar. These will enable people to understand how different neighborhoods feel, and contribute to our understanding of the city as a whole. Their effectiveness will be evaluated through quantitative studies of the comparisons and qualitative studies of the site as a whole.
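The comparison step could, for example, represent each neighborhood by the language of its social media posts and measure similarity between neighborhoods a traveler knows and those in a new city. The sketch below is one plausible approach under that assumption; the data, neighborhood names, and method are hypothetical, not the thesis’s actual pipeline.

```python
# Hypothetical sketch: compare neighborhoods by the words used in
# geotagged social media posts, via cosine similarity of term counts.
from collections import Counter
import math

def term_vector(posts):
    """Bag-of-words term counts over a neighborhood's posts."""
    words = [w.lower() for post in posts for w in post.split()]
    return Counter(words)

def cosine_similarity(a, b):
    shared = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in shared)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Toy post samples for three invented neighborhoods.
shadyside = term_vector(["coffee shop brunch", "boutique shopping coffee"])
lakeview = term_vector(["brunch coffee patio", "coffee bar brunch"])
downtown = term_vector(["office towers commute", "conference hotel commute"])

# A brunch-and-coffee neighborhood should look more like another
# brunch-and-coffee neighborhood than like a business district.
print(cosine_similarity(shadyside, lakeview)
      > cosine_similarity(shadyside, downtown))  # True
```

The shared terms themselves (here, “coffee” and “brunch”) could then serve as the context the guides surface to explain *why* two neighborhoods are similar.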
This thesis will provide three research contributions. First, it will provide evidence that social media can help us understand cities better than simple demographics. Second, it will show how well social media reflects neighborhoods, and which aspects are best represented. Finally, it will contribute to our knowledge of tourist information search through the development of a five-dimensional model.
Jason Hong (Chair)
Judd Antin (AirBnB)
Crowdsourcing is increasingly important to software development today, through the asking and answering of questions on StackOverflow, competitions organized for design and development in communities such as TopCoder, and through freelancing enabled through online labor markets. In this talk, I’ll first explore how crowdsourcing is bringing software developers together in new ways to reshape how developers work, play, and learn.
One opportunity such models offer is parallelism, as decomposition into pieces enables work to be distributed to the crowd and completed more quickly. One might ask, just how far can complex, knowledge intensive work be decomposed? Could a crowd of developers build software entirely through self-contained ten-minute contributions? I will report on some of the work we have done to address this question. A core property of software work is its interdependence, bringing new challenges for scaling microtask crowdsourcing to domains where more explicit coordination between workers is required. Our work also raises fundamental questions about the nature of knowledge and context in software development, offering a new lens for investigating modularity in software development teams.
Thomas LaToza is an Assistant Professor of Computer Science in the Volgenau School of Engineering at George Mason University. He works at the intersection of software engineering and human computer interaction, investigating how humans interact with code and designing new ways to build software. He has served on various program committees and is on the Review Board of the Empirical Software Engineering Journal. He currently serves as guest editor of the IEEE Software Special Issue on Crowdsourcing for Software Engineering, serves as co-chair of the Seventh Workshop on the Evaluation and Usability of Programming Languages and Tools, and serves as co-chair of the Third International Workshop on Crowdsourcing in Software Engineering.
His work is funded in part through a $1.4M grant from the National Science Foundation on Crowd Programming. He holds B.S. degrees in psychology and computer science from the University of Illinois at Urbana-Champaign and a Ph.D. in software engineering from Carnegie Mellon University.