Robots Lending a Helping Hand

Background
Allison Bruce graduated from CMU in 2000 with a BS in computer science. Her undergraduate research involved creating an architecture for dramatic improvisation between two mobile robots (Robot Improv). She is now a graduate student at the Robotics Institute, advised by Reid Simmons and Illah Nourbakhsh. Her current research project is the Social Robots Project, and her research interests involve applying machine learning and planning techniques to the problem of human-robot interaction.

Allison is involved in Women@SCS and the Pittsburgh chapter of Computer Professionals for Social Responsibility. In her free time, she enjoys reading, watching films, and attending local arts events.


How did you become interested in robotics?

Allison: I was in Wean one day and saw a poster for the Intro to Mobot Programming Lab class. The poster said, "Do you not have enough robots in your life?" And so I thought, "well, maybe I don't." So I decided to take the course and got totally obsessed with it. I spent tons and tons of time hacking on robots at 3 AM and watching them crash through cardboard walls. There is something very satisfying about it. I really like programming something and then watching it move. It's so much more engaging that way, and I find that very rewarding.


How did you become interested in social robots particularly?

Allison: I worked on an undergraduate research project called Robot Improv. This basically consisted of two robots improvising short plays. It was pretty goofy. Surprisingly, people actually enjoyed watching it (even though it was basically these two little robots that looked like trash cans chasing each other around). The fact that people actually showed interest in interacting with robots gave me the idea to pursue social robots.



What were some of the main motivations for your social robot project?

Allison: A lot of fundamental problems in robotics are starting to be solved. For example, navigation in a known environment, given a good map, is becoming a solved problem. This changes the way we can potentially use robots. What we want to look at now is how we can make robots that people can interact with: not researchers or the people who actually program the robots, but regular people who might need a robot to help them with their day-to-day tasks.

What are some examples of day-to-day tasks that might require help from a robot?

Allison: You can well imagine a robot doing some sort of menial work in an office or in a hospital. Environments like these have lots of people around, so the robot must have some sort of social competence in order not to be an obstacle to people doing what they need to do during the course of the day. For example, you want the robot to have an understanding of the social rules that people use to regulate their behavior in crowds. You want a robot to move down a hallway the way people do: staying on one side, and passing only when it should pass, not swerving around people as if they were obstacles and disrupting things. We are interested in whether we can encode these social rules or, ideally, have the robot learn them by interacting with people and gathering data.
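[As a rough illustration of what encoding such a rule might look like, here is a minimal Python sketch that biases a planner's cost function toward the right-hand side of a hallway. The hallway model, cost values, and function names are illustrative assumptions, not the project's actual implementation.]

```python
# Minimal sketch: encode "keep to one side of the hallway" as a cost
# bias that a path planner minimizing total cost would respect.
# All values here are illustrative assumptions.

def hallway_cost(lateral_pos, hallway_width, base_cost=1.0, side_penalty=5.0):
    """Cost of occupying a lateral position (0 = left wall, in meters).

    The left half of the hallway costs more, so a cost-minimizing
    planner keeps the robot to the right, the way people usually
    walk in a corridor.
    """
    if lateral_pos < hallway_width / 2.0:
        return base_cost + side_penalty
    return base_cost

# In a 2 m wide hallway, the planner would prefer the right side:
print(hallway_cost(0.5, 2.0))  # left side  -> 6.0 (discouraged)
print(hallway_cost(1.5, 2.0))  # right side -> 1.0 (preferred)
```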

What type of robot are you using for this project?

Allison: I'm currently working with a Real World Interface B-21. It's a typical research robot.

Robots are often looked upon as strange and unusual entities, and are sometimes even feared by the general public. How are you accounting for this when designing your robots?

Allison: Even though robots aren't humanoid-looking at all, people tend to anthropomorphize anything they interact with that moves. So we're trying to support that anthropomorphism to a degree. Our robots have a humanoid face; it's basically a computer model displayed on a screen mounted on the robot. The face is something that is familiar to people. We're interested in the face not just because it's cute and makes the robot seem friendlier, but because people monitor each other's facial expressions to get an indication of how the other person is doing in an interaction. So we want to use facial expressions as a mechanism to convey information to people about what the robot is doing, or how the robot thinks it's accomplishing its task.
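[To make that mechanism concrete, a toy sketch might map the robot's own assessment of its task to an expression for the on-screen face to render. The state names and expressions below are invented for illustration, not taken from the project.]

```python
# Toy sketch: choose a facial expression from the robot's assessment
# of how its task is going. States and expressions are invented.
EXPRESSION_FOR_STATE = {
    "task_going_well": "smile",
    "waiting_for_person": "neutral",
    "confused_by_reply": "puzzled",
    "task_failed": "frown",
}

def update_face(task_state):
    # Fall back to a neutral face for any unrecognized state.
    expression = EXPRESSION_FOR_STATE.get(task_state, "neutral")
    print(f"rendering '{expression}' on the on-screen face")  # stand-in

update_face("confused_by_reply")  # rendering 'puzzled' on the on-screen face
```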

Can the robot actually communicate with people?

Allison: It does to a certain degree. Our robot can execute scripts in which it speaks certain dialogue, moving through each script using a finite state machine that defines its behavior. It transitions through the state machine based on perception. Right now, we don't have any dialogue capabilities because we don't have speech recognition on the robot. However, one of our short-term goals is to add a limited form of speech interactivity, where the robot will be able to use keyword matching to get responses from a person.
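[As a sketch of what such a script-driven finite state machine could look like, with keyword matching standing in for the speech interactivity described as a short-term goal: the states, dialogue lines, and keywords below are illustrative, not taken from the actual system.]

```python
# Sketch of a script-driven finite state machine for robot dialogue.
# Keyword matching stands in for perception / future speech
# recognition. States, lines, and keywords are illustrative.
SCRIPT = {
    "greet":    {"say": "Hello! May I ask you a question?",
                 "next": {"yes": "ask", "no": "farewell"}},
    "ask":      {"say": "Do you enjoy talking to robots?",
                 "next": {"yes": "thank", "no": "thank"}},
    "thank":    {"say": "Thanks for your time!", "next": {}},
    "farewell": {"say": "No problem. Have a nice day!", "next": {}},
}

def match_keyword(reply, transitions):
    """Return the next state for the first keyword found in the reply."""
    for keyword, next_state in transitions.items():
        if keyword in reply.lower():
            return next_state
    return None  # no keyword matched; stay in the current state

def run_script(replies):
    state, replies = "greet", iter(replies)
    while True:
        print("Robot:", SCRIPT[state]["say"])
        transitions = SCRIPT[state]["next"]
        if not transitions:            # terminal state: script is done
            break
        reply = next(replies, None)
        if reply is None:              # the person walked away
            break
        state = match_keyword(reply, transitions) or state

run_script(["yes, sure", "no, not really"])
```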

Have you field-tested your robot?

Allison: We did one large experiment, looking into something very fundamental: the common idea that robots should seem more human in order to improve interaction, that they should have a face and be capable of expressive movement and behavior. But no one had actually tested this idea in a rigorous way. So we designed a psychology experiment with two variables. One was whether the robot had a face or not, and the other was whether or not it would turn towards the person and focus its attention on them while talking. The robot performed the social task of asking passersby a poll question, and we measured the robot's success at this task as we manipulated those two variables. We found that having the face and the movement, individually as well as together, improved the level of success. Each of the variables was important in making the robot seem more lifelike, so people liked it better and wanted to interact with it more.
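[For readers curious how results from a two-variable design like this are typically tallied, here is a small sketch that computes the success rate in each cell, assuming each trial records the two manipulated variables and whether the passerby answered the poll question. The trial records are invented for illustration, not the experiment's data.]

```python
# Sketch: success rate per cell of the 2x2 design (face x attention).
# The trial records below are invented for illustration only.
from itertools import product

trials = [
    {"face": True,  "attention": True,  "answered": True},
    {"face": True,  "attention": False, "answered": True},
    {"face": False, "attention": True,  "answered": True},
    {"face": False, "attention": False, "answered": False},
    # ... one record per passerby
]

for face, attention in product([True, False], repeat=2):
    cell = [t for t in trials
            if t["face"] == face and t["attention"] == attention]
    if cell:
        rate = sum(t["answered"] for t in cell) / len(cell)
        print(f"face={face!s:5} attention={attention!s:5} -> {rate:.0%} answered")
```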

What aspects of this project are you currently trying to improve?

Allison: Right now, I'm trying to get the robot to recognize a person's intentions and use that to respond more intelligently. For example, in the last experiment, the robot would say "Hello" to everyone who passed by. However, from observing people as they walked through the door, it was clear that some of them were definitely going to talk to the robot, while others were not. So we want to encode something so that the robot can look at a person and guess whether that person wants to talk to it or not. Then the robot would only address the people it thinks might actually be interested in talking to it. I'm using some machine-learning techniques to learn models of those two behaviors and allow the robot to distinguish between them.
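[As a sketch of the kind of two-class model this involves, assuming simple hand-picked trajectory features: the features and training examples below are invented placeholders, not the project's actual models or data.]

```python
# Sketch: classify whether an approaching person intends to engage,
# from simple trajectory features. Features and data are invented
# placeholders, not the project's actual models or data.
from sklearn.linear_model import LogisticRegression

# Per-person features: [walking speed (m/s), heading offset from the
# robot (radians), range rate (m/s, negative = getting closer)].
X = [
    [0.6, 0.1, -0.5],   # slow, facing the robot, closing in
    [1.4, 1.3,  0.2],   # brisk, angled away, passing by
    [0.8, 0.3, -0.4],
    [1.5, 1.1,  0.3],
]
y = [1, 0, 1, 0]        # 1 = likely to engage, 0 = just passing

model = LogisticRegression().fit(X, y)

def should_greet(features):
    """Address only the people the model predicts want to interact."""
    return model.predict([features])[0] == 1

print(should_greet([0.7, 0.2, -0.45]))  # -> True for this example
```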
