
Carolyn Penstein Rosé


US Citizen
Language Technologies Institute/Human-Computer Interaction Institute
Newell Simon Hall 4531
Carnegie Mellon University
Pittsburgh, PA 15213

E-mail: cprose@cs.cmu.edu
Phone: (412) 268-7130
Fax: (412) 268-6298
Projects: http://www.cs.cmu.edu/~cprose/Projects.html
Publications: http://www.cs.cmu.edu/~cprose/pubweb/Publications.html
Teaching: http://www.cs.cmu.edu/~cprose/Teaching.html
Full CV: http://www.cs.cmu.edu/~cprose/Rose2007Vita.doc

Last Updated: April 12, 2007

Education

Ph.D., Language and Information Technologies, Carnegie Mellon University, December 1997. Thesis advisor: Lori S. Levin
M.S., Computational Linguistics, Carnegie Mellon University, May 1994.
B.S., Information and Computer Science (Magna Cum Laude), University of California at Irvine, June 1992.

Position

[2003-present] Research Computer Scientist, Language Technologies Institute and Human-Computer Interaction Institute, School of Computer Science, Carnegie Mellon University
[1997-2003] Research Associate, Learning Research and Development Center, University of Pittsburgh. Project coordinator in the Natural Language Tutoring Group.
[1994-1997] Teaching Assistant, Computational Linguistics Program, Carnegie Mellon University.
[Summer 1993] Summer Research Internship, Apple Computer, San José, CA.
[1992-1994] Research Assistant, Center for Machine Translation, Carnegie Mellon University.
[Summer 1991] Research Internship, Minority Summer Research Internship Program, UC Irvine.
[1990-1992] Honors Research, University of California at Irvine.

Statement of Career Goals

Overview

My primary research objective is to develop and apply advanced interactive technology to enable effective computer-based and computer-supported instruction. A particular focus of my research is the role of explanation and language communication in learning. Thus, one major thrust of my research is developing and applying language technology to the problem of eliciting, responding to, and automatically analyzing student verbal behavior. However, many of the underlying HCI issues, such as influencing student expectations, motivation, and learning orientation, transcend the specific input modality. This research program involves four primary foci: (1) controlled experimentation and analysis of human tutoring, collaborative learning, and computer tutoring to explore the mechanisms by which effective instruction is accomplished; (2) controlled experimentation and analysis of student interactions with human tutors, peer learners, and computer tutors to explore the HCI issues that affect student behavior and motivational orientations; (3) basic research in language technology to enable, facilitate, or study natural language interactions in learning environments, whether with computer agents, between humans, or a combination of the two; and (4) development of easy-to-use tools for building language interaction interfaces and, more generally, tutorial environments.

A Historical Perspective

Although my long-term goal was always to work in the area of intelligent tutoring and tutorial dialogue, during graduate school I focused on the problem of robust natural language interpretation. I was awarded my Ph.D. in 1997 from the Language Technologies Institute here at Carnegie Mellon University. My dissertation research focused on an approach for recovering from interpretation failures resulting from insufficient knowledge source coverage and extragrammatical language (Rose, 1997; Rose & Levin, 1998; Rose, 1999). I always had an affinity for hybrid knowledge-based/machine learning approaches (Rose & Waibel, 1997; Rose & Lavie, 1997). This work was conducted in the context of a multi-lingual speech-to-speech machine translation project (Woszczyna et al., 1993; Suhm et al., 1994). That context provided a challenging environment in which to explore issues related to robust and efficient natural language understanding. Another focus of my work was computational modeling of dialogue (Rose et al., 1995; Qu et al., 1997).

Immediately upon finishing my dissertation research, I accepted a position as a Postdoctoral Research Associate at the Learning Research and Development Center (LRDC), where I worked most closely with Johanna Moore, Kurt VanLehn, and Diane Litman. There I played a very active role in the CIRCLE Center, an NSF-funded center pursuing research questions related to the development of tutorial dialogue technology (Rose et al., 1999; Freedman et al., 2000; Jordan et al., 2001; VanLehn et al., 2002). An important part of this work was continued research in the area of robust language understanding (Rose, 2000; Rose & Lavie, 2001; Rose et al., 2002; Rose et al., 2003a; Rose & Hall, 2004; Lavie & Rose, 2004), in addition to research involving analysis of human tutoring interactions (Rose et al., 2001b; Rose et al., 2003b,c; Litman et al., 2004; VanLehn et al., submitted) and evaluation of implemented tutorial dialogue systems (Rose et al., 2001a; Litman et al., 2004). A recently accepted journal article (Rose & VanLehn, 2005) and a journal article in preparation (Rose, Siler, Torrey, & VanLehn, in preparation) provide an overview of my work from those years at LRDC leading into the work I am doing now.

My Current Work

In October of 2003 I accepted a position as a Research Scientist with a 50%/50% joint appointment in the Language Technologies Institute and the Human-Computer Interaction Institute. The series of studies I ran during my time at LRDC convinced me of the naiveté of assuming that the most important problem in providing effective tutorial dialogue technology was overcoming the technical challenges. Since coming to CMU I have shifted my emphasis sharply towards design based on detailed, empirically constructed models of the multiplicity of underlying mechanisms at work at the level of the individual student in the midst of human-human interaction. Here I discuss a few selected recent findings and results. A comprehensive list of my other funded projects is found in Section V.

A major focus of my research has been working towards supporting collaborative learning processes with language technologies. Adaptive forms of collaborative learning support would advance the state of the art in computer-supported collaborative learning by allowing support to be administered on an as-needed basis and faded over time as students gain competency in valued collaborative behaviors and in personal learning behaviors within a collaborative setting. Evidence in favor of this cognitive apprenticeship model of learning, in which support is faded over time, is prominent in the learning sciences literature.

Related to the long-term objective of data-driven design of adaptive collaborative learning support, with partial funding from an NSF ROLE/SGER (PI Carolyn Rose) and the NSF/IERI-funded Learning Oriented Dialogue Project (PI Vincent Aleven; Co-PIs Albert Corbett and Carolyn Rose), I have been investigating how the characteristics of a student or agent influence the behavior and learning of a partner student. Much of what is known about the mechanisms responsible for the success of collaborative learning is at the group level rather than the individual level, and well-controlled studies comparing learning across collaborative and non-collaborative settings have been relatively few. Even studies presenting evidence about specific effective patterns of interaction have largely provided correlational evidence, and thus do not offer insight into the causal mechanisms at work at the level of the individual student. Yet in order to design effective adaptive support for collaborative learning, we must gain insights at precisely this level.

My work on this project started with a focus on investigating previous claims about best practices in learning companion agent design that had not been subjected to rigorous evaluation. As a key part of this, I am advocating a particular experimental design methodology that provides a highly controlled way to examine the mechanisms by which one peer learner's behavior influences a partner learner's behavior and learning. Specifically, it makes use of confederate peer learners: experimenters acting as peer learners but behaving in a highly scripted way. While this approach lacks the high degree of external validity found in more naturalistic observations of collaborative learning interactions, it provides complementary insights not possible within that framework. Such insights are essential for discovering precisely which combination of technological features will ultimately yield the most desirable response from students. By using a controlled experimental approach, we can get specific information about which aspects of the rich interactions are important for achieving the target effect. By using naturalistic collaborative learning and solitary learning as control conditions, we can measure the extent to which the collaboration provides value as well as how the manipulated collaboration compares in effectiveness to more naturalistic collaborative learning. A series of studies conducted in this fashion were published together at CHI 2006 and nominated for a best paper award (Gweon et al., 2006).

One important piece of my work that bridges the LTI and HCII, and that is central to the agenda of developing adaptive collaborative learning support, has been the PSLC-funded TagHelper project. The goal of this project has been to develop and use language technology to support verbal protocol analysis (Donmez et al., 2005; Donmez et al., submitted; Rose et al., in preparation). A key focus in this work has been developing techniques for exploiting natural structure in corpora and coding schemes in order to overcome the sparse data problem. This project has afforded me the opportunity to collaborate with technology researchers such as Jaime Carbonell and William Cohen, local behavioral researchers such as Bob Kraut and Kenneth Koedinger, and especially learning scientists abroad such as Alexander Renkl and his group in Freiburg, Rainer Bromme and Regina Jucks in Muenster, Karen Schweitzer in Heidelberg, Manuela Paechter in Graz, and Frank Fischer and his group in Tuebingen. Pursuing this work is one of my major roles within the interdisciplinary Pittsburgh Sciences of Learning Center. Our Computer Supported Collaborative Learning submission about TagHelper was nominated for a best paper award in 2005. HCI aspects of the TagHelper project are covered in (Gweon et al., 2005), presented at INTERACT '05.
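To make the idea of machine-supported verbal protocol analysis concrete, the sketch below shows one standard way such coding can be automated: a multinomial naive Bayes classifier that assigns a code from a coding scheme to each conversational segment. This is a minimal illustration, not TagHelper's actual implementation, and the training segments and code labels ("reasoning", "consensus", "question") are invented for the example.

```python
import math
from collections import Counter, defaultdict

# Hypothetical training data: conversation segments hand-labeled with
# codes from an invented verbal-protocol coding scheme.
TRAIN = [
    ("i think the pressure increases here", "reasoning"),
    ("because the volume goes down the pressure must rise", "reasoning"),
    ("ok sure", "consensus"),
    ("yes i agree with that", "consensus"),
    ("what does entropy mean", "question"),
    ("how do we compute the work done", "question"),
]

def train_nb(examples):
    """Collect the counts a multinomial naive Bayes model needs."""
    word_counts = defaultdict(Counter)   # label -> word -> count
    label_counts = Counter()             # label -> number of segments
    vocab = set()
    for text, label in examples:
        label_counts[label] += 1
        for w in text.split():
            word_counts[label][w] += 1
            vocab.add(w)
    return word_counts, label_counts, vocab

def classify(model, text):
    """Return the most probable code for a new segment,
    using add-one (Laplace) smoothing for unseen words."""
    word_counts, label_counts, vocab = model
    total = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        n_words = sum(word_counts[label].values())
        score = math.log(label_counts[label] / total)  # log prior
        for w in text.split():
            score += math.log(
                (word_counts[label][w] + 1) / (n_words + len(vocab))
            )
        if score > best_score:
            best_label, best_score = label, score
    return best_label

model = train_nb(TRAIN)
```

In practice a tool like this would be trained on a much larger hand-coded sample and would use richer features than single words, but the basic pipeline (hand-code a sample, train, auto-code the rest) is the same.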

Building on the early success of the TagHelper project, an exciting development in the past year has been two successful evaluations of fully automatic adaptive collaborative learning support interventions. The purpose of these interventions is to "listen in" on student conversational behavior using text processing technology developed on the TagHelper project, decide based on that behavior when to intervene, and offer support to make the learning experience more successful. In the first study, conducted in a class of sophomore thermodynamics students, we investigated the role of reflection in simulation-based learning by manipulating two independent factors that each separately led to significant learning effects: whether students worked alone or in pairs, and what type of support students were provided with. We found that in our simulation-based learning task, students learned significantly more when they worked in pairs than when they worked alone. Furthermore, dynamic support implemented with tutorial dialogue agents led to significantly more learning than no support, while static support was not statistically distinguishable from either of the other two conditions. The largest effect size in comparison with the control condition (individuals working alone with no support) was for Pairs+Dynamic support, at 1.24 standard deviations. Most importantly, because the effect size achieved by combining the two treatments was greater than that of either treatment alone, we conjecture that each factor contributes something different to student learning rather than the two being potential replacements for one another.
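An effect size reported in standard deviations, like the 1.24 above, is a standardized mean difference. As a sketch of how such a figure is typically computed, the function below implements Cohen's d with a pooled standard deviation; the post-test scores here are invented purely to exercise the function, not data from the study.

```python
import statistics

def cohens_d(treatment, control):
    """Standardized mean difference (Cohen's d) between two groups,
    using the pooled sample standard deviation."""
    n1, n2 = len(treatment), len(control)
    m1, m2 = statistics.mean(treatment), statistics.mean(control)
    v1, v2 = statistics.variance(treatment), statistics.variance(control)
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (m1 - m2) / pooled_sd

# Invented post-test scores for illustration only.
pairs_dynamic = [82, 88, 79, 91, 85]
alone_no_support = [70, 74, 68, 77, 72]
d = cohens_d(pairs_dynamic, alone_no_support)
```

A d of 1.24 means the treatment group's mean sits 1.24 pooled standard deviations above the control group's mean, a large effect by conventional benchmarks.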

In the second study, with Taiwanese 10th grade students, we evaluated an adaptive collaborative learning support mechanism in a science inquiry task where the primary learning activity was brainstorming. The process of learning during collaboration and the process of collaboratively producing high-volume output or a high-quality product are separate processes that may occur at the same time but may be at odds with one another. Emphasizing one of these goals, such as short-term productivity, may lead to a loss with respect to the other. For example, under realistic working conditions, in order to speed up short-term progress towards a solution, groups may fall into dysfunctional communication patterns such as quick consensus building, or resort to divide-and-conquer problem-solving approaches in which team members work in relative isolation on the part of the process they already know. As a result, team members do not have the opportunity to exchange ideas and gain valuable multi-perspective knowledge or learn new skills. Learning may also result from the brainstorming process itself, as it provides the impetus to engage in constructing bridging inferences that are germane to the process of self-explanation. A significant correlation between idea generation productivity and learning in our data supports the view that learning from brainstorming may come from this constructive process. We suspect that we would not have observed this effect if students did not have the support of supplementary reading materials during their brainstorming. We attribute differences in learning between conditions to differences in how the dynamics of the brainstorming affected how students processed the supplementary readings. Thus, an important question driving our investigation is how we can support both productivity and learning using adaptive collaborative learning support technology.

In order to investigate the trade-offs between productivity and learning, we ran a 2×2 factorial study. One independent variable we manipulated experimentally was whether students worked individually or in pairs. A second independent factor was whether or not students had the support of the VIBRANT agent, which offers conversational contributions designed to embody principles derived from the social psychology literature on idea generation, such as encouraging coherence in the interaction and providing stimulus in the form of suggested categories of ideas. In addition to evaluating idea generation productivity during a single brainstorming task, we also measured learning from brainstorming as well as productivity on individual brainstorming in a subsequent, different brainstorming task building on the earlier one. Students in the pairs condition were significantly less productive and learned significantly less during the initial brainstorming task than students in the individual condition. On the other hand, the students who brainstormed in pairs during the first session performed better on the second, related brainstorming task. We found that success for students in pairs on the second brainstorming task was mediated in part by a broader task focus during the first task. This was evidenced in a higher conditional probability that an idea was mentioned during task 2 given that an idea conceptually related to it was mentioned during task 1 for students in the pairs condition. There was a relatively high correlation between this conditional probability and task 2 success. Furthermore, a detailed process analysis revealed that idea productivity decayed exponentially in all conditions, such that half of the ideas contributed were contributed during the first five minutes.
Consistent with the idea that process losses in group brainstorming occur as a result of cognitive interference from similar idea contributions, we determined that process losses were substantially higher during the first five minutes than in the remainder of the first task. Furthermore, if we look just at the period after the first five minutes, possibly as a result of reduced cognitive interference, the agent's conversational contributions were effective for mitigating process losses. Thus, we conjecture that it would be possible to achieve a positive effect on all three of our outcome measures by having students work alone with no feedback for five minutes before working in pairs with feedback on the first task, and then do the second task as before. We plan to test this conjecture in a follow-up study this coming summer.
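The mediation measure described above, the conditional probability that an idea appears in task 2 given that a conceptually related idea was mentioned in task 1, can be sketched as follows. This is one plausible formalization of that measure, not the study's actual analysis code; the idea tags and the hand-coded relatedness map are invented for illustration.

```python
def carryover_probability(task1_ideas, task2_ideas, related):
    """P(an idea is mentioned in task 2 | an idea conceptually
    related to it was mentioned in task 1).

    `related` maps each candidate task-2 idea to the set of task-1
    ideas considered conceptually related to it (hand-coded in practice).
    """
    # Candidate ideas with at least one related idea raised in task 1.
    eligible = {i2 for i2, rel in related.items() if rel & task1_ideas}
    if not eligible:
        return 0.0
    mentioned = eligible & task2_ideas
    return len(mentioned) / len(eligible)

# Invented example: ideas are short tags for a brainstorming topic.
task1 = {"reduce_cars", "bike_lanes", "plant_trees"}
task2 = {"congestion_tax", "bus_rapid_transit"}
related = {
    "congestion_tax": {"reduce_cars"},
    "bus_rapid_transit": {"reduce_cars", "bike_lanes"},
    "rooftop_gardens": {"plant_trees"},
}
p = carryover_probability(task1, task2, related)
```

Computed per student and then compared across conditions, a measure like this captures how much of a student's task-2 output builds on ground covered in task 1.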

If you have any questions, don't hesitate to send me email!

Carolyn Penstein Rose (cprose@cs.cmu.edu)/ Carnegie Mellon University