Several new social robots are expected to start prowling the halls (and playing games) at CMU this year. But giving a robot personality takes a lot more work than just putting on a happy face.
By Jason Togyer
A child who grew up watching TV in the 1960s, '70s and '80s would be forgiven for assuming that she'd have a robot pal by now.
After all, according to "Lost in Space," robot B-9 was always present to warn Will Robinson of danger in what was (in the 1960s) the far-off year of 1997. On "The Jetsons," Rosie the Maid was a helpful (if sarcastic) electromechanical household companion in the 21st century.
In his book "Your Flying Car Awaits," author and historian Paul Milo reports that it wasn't just TV scriptwriters who assumed that robots that work closely with people would be a feature of everyday life by the year 2011. Responsible, respected futurists working for organizations such as the Defense Department, the Rand Corporation and IBM also figured that by the 21st century, robotic butlers, cooks, chauffeurs and babysitters would be commonplace.
Indeed, writes Milo, some technologists were concerned that we'd have too many robots by now. They speculated that humans would either be thrown out of work or that they'd object to being served by robotic assistants. One researcher even suggested that chimpanzees be trained to take over jobs (such as driving cars!) that humans wouldn't want done by robots.
The problem of training chimpanzee chauffeurs to take over from robots hasn't yet materialized. And developing practical robots that work closely with people and can respond using human communication methods--social robots--turned out to be a lot harder than futurists suspected two generations ago.
In fact, the creation of a social robot is still so new "it's almost a craft process," says Jodi Forlizzi, associate professor of human-computer interaction, who's part of CMU's Project on People and Robots.
But research into social robots has been ongoing since the 1940s, when American-British neuroscientist Grey Walter, a pioneer in the use of electroencephalographs to study brain waves, speculated that many of the functions of animal brains could be simulated by electrical components. By 1951, Walter had built crude but working autonomous robots that exhibited almost animal-like behavior; they reacted to noises and lights and could be "taught" rudimentary activities, such as "begging" for attention.
With these early successes in cybernetics--and with transistors and integrated circuits leading steadily to more and more powerful digital computers--it was natural to assume that robots such as Walter's also would get smarter and more humanlike.
. . .
"After Moore's Law was articulated, some scientists basically extrapolated present-day trends about the pace and increase in technology, and figured that within the next 20 years or so, they'd have robots that would be smart enough to take over these jobs," says Milo, speaking from his home in New Jersey, where he's currently working on a book about higher education.
"But it wasn't necessarily a case of making computers faster and smarter," he says. "There has to be some sort of 'quantum leap' that bumps us from one track to another. You can continue adding horses to a carriage, and you'll get a carriage that runs faster--but you won't have an automobile."
Indeed, processing speed isn't the limiting factor for determining whether a social robot can successfully interact with humans in their environment. Instead, the problems include those of detection, interpretation and communication--recognizing human beings and understanding what they need--and they have layer upon layer of complications. Not only do robots have to understand what humans are doing--the humans have to intuitively understand what the robots are doing, without the need for interpretation.
"Robots have all sorts of limitations in their social interactions," says Manuela Veloso, the university's Herbert A. Simon Professor of Computer Science. "Sometimes they may not understand what you say. Other times, they may not be able to complete a task. What social robots are able to do right now is very limited." Veloso and other roboticists are trying to expand the horizons for social robots and remove those limitations.
Veloso leads a research group called CORAL--for Cooperate, Observe, Reason, Act and Learn--that studies the way groups of autonomous robots can be programmed to work together on tasks, and teaches a project course in designing intelligent humanoid robots. She's also among the faculty members who will be conducting real-life tests of social robots on campus this year.
Veloso's CoBots are designed to deliver mail and other items to campus offices and act as companions and tour guides to visitors. Other social robots that will begin testing soon at Carnegie Mellon include Gamebot, which will be able to play Scrabble with individuals and groups, and Snackbot, which will deliver treats upon request.
. . .
Social robot research at CMU has a long history that builds on the legacy of pioneering research into artificial intelligence by Simon, Allen Newell and others. Today's projects are spiritual descendants of CMU's Social Robots Project, an interdisciplinary effort begun in the 1990s as a joint project of the School of Computer Science and the School of Drama.
The Social Robots Project set as its goal creation of robots that had "personalities" and which could be given tasks and interact with human beings according to social conventions. It eventually spawned "VIKIA" and "GRACE," which could engage in some of the same activities as a typical college student, including giving a presentation at a conference; and in 2004, "Valerie," CMU's first robot receptionist, or "roboceptionist."
The stakes for developing social robots are quite high. Lessons learned from projects such as Snackbot or Gamebot, for instance, could eventually inform work being done at Pittsburgh's Quality of Life Technology Center and lead to improvements in robots that perform rewarding tasks for society, including care for the elderly and handicapped, tutoring children, and reaching out to people with developmental disorders such as autism.
Industrial robots have been a common fixture of the developed world since the 1970s, and robotic rovers have played an ever-increasing role in fields such as space exploration and search-and-rescue operations. Commercially available robots such as the semi-autonomous Roomba vacuum cleaner have also become common.
Yet when the general public thinks "robot," they don't often imagine a disembodied arm that welds fenders or a scientific robot gathering samples on the surface of Mars. Instead, they usually picture a science-fiction robot such as C-3PO of the "Star Wars" movies.
. . .
Speculative fiction about robotics has "layered on expectations," says Forlizzi, whose background includes work in industry as an interaction designer and as a researcher on new product development. She's part of several social robot projects currently underway at CMU, including Snackbot and the Home Exploring Robot Butler, or HERB, which is a joint investigation of Intel Research Pittsburgh and the Quality of Life Technology Center.
While both Snackbot and HERB are social robots, they couldn't be more different in appearance or purpose. Snackbot is child-sized (four and a half feet tall), enclosed in a smooth plastic housing, and has a round face with two "eyes" and a digital mouth. It has two arms, but they're fixed in place, designed to support a serving tray.
HERB is larger and more industrial in appearance, and has two highly mobile arms that can grab objects--such as canned goods or utensils--and bring them to a human. As a result, HERB is a more sophisticated robot, but it also has the potential to be off-putting, Forlizzi says.
"It's huge," she says. "Would you be comfortable with it in your home?" And because HERB is designed as a robotic assistant for people with limited mobility, such as those with spinal cord injuries, a task that sounds straightforward--like fetching an object--poses several serious challenges for its developers, including consideration of the social and emotional needs of the people whom HERB will be assisting. It's hard enough for HERB to successfully navigate a kitchen or dining room; it also has to avoid sudden movements that might seem alarming.
"If the robot just brings something to you and shoves it into your face, that's a little bit intimidating," Forlizzi says. "We have to find a better way to make it more social."
When interacting with humans, she says, robots have to move and communicate in ways that mimic polite human behavior. They need to meet a person's gaze, move in ways that aren't threatening, and avoid invading "personal space." If they communicate using spoken language, they need to understand when and how to interrupt someone.
"In order to have good social interaction, a social robot has to be aware of the context around it," says Reid Simmons, associate director for education at CMU's Robotics Institute and a research professor of robotics and computer science. "Let's say I'm a pill-dispensing robot, and a person is supposed to take a pill three times a day. If someone is napping, I probably shouldn't wake them up to give them a pill."
In other words, when a robot is placed in a setting with humans, it needs to act like a human, says Paul Rybski, systems scientist in CMU's Robotics Institute. "Usually, the more anthropomorphic you can make them, the easier it is for people to try to use their social communication skills to interact," he says.
Like Forlizzi, Rybski is a member of the team working on Snackbot, which is designed to incorporate as many off-the-shelf components as possible. It "listens" using a microphone designed for teleconferencing applications, which can pinpoint the direction of the loudest voice in a room. It detects obstacles using laser sensors sold for industrial applications such as inspecting pipes or measuring distances.
The availability of those components enables robotics researchers to spend less time worrying about hardware and more time refining the software that predicts and interprets human behavior. But while the sensing technology has become less expensive--in part due to the widespread use of robots in industrial settings--interpreting the inputs is still difficult.
"All of these things that we take for granted in people, that we can see, that we can move around obstacles, that we can go from place to place--from a roboticist's point of view, just building a robot that can navigate an environment by itself is an accomplishment," Veloso says.
Adding a social interface compounds the difficulties. While a directional microphone can help a robot detect where the loudest noise in a room is coming from, it needs signal-processing software to determine whether it's receiving a spoken command or just hearing a passing conversation. Proximity sensors and cameras can "see" an obstacle blocking a robot's path, but it needs to be able to tell a person from a trashcan. Simultaneously interpreting multiple inputs--detecting movement as well as noise, and determining whether the object moving is also the object making that noise--is another serious programming challenge, Rybski says.
Snackbot will have limited ability to engage in spontaneous activities. While it will be able to independently navigate corridors in Newell-Simon Hall, its more important role is to serve as a research platform for studying human-robot interaction over a long term.
The team is especially interested to see if people modify their own behavior around Snackbot after repeated encounters, Forlizzi says. Will they appreciate and understand Snackbot's user interface? Will they welcome the addition of Snackbot to their daily routine? To capture the information, Snackbot will make a video and audio record of its day-to-day activities that researchers can then mine for data.
"There's a lot we don't know yet about human interaction with non-human objects, and we don't have a lot of ways to get unbiased data on those interactions," Rybski says. "We need to study why people respond to one robot, but not another."
. . .
For social robots to be truly useful, interaction needs to be intuitive. "You have to be able to rely on someone's existing knowledge," Rybski says. The user of a robot such as HERB has to be able to talk to the robot using plain commands and then understand the robot's feedback immediately. "You can't hand them a 3,000-page manual or ask them to take a course," he says. "It's got to be able to interact with people in a way that they're comfortable with."
Unfortunately for Snackbot, Rybski says, "people are notoriously difficult to interpret and understand--just ask any human."
Ethnic background, native culture, gender, age, education level--all are factors in how people interact with one another, says Simmons, who calls humans "infinitely variable."
"When humans interact with each other, they can accommodate that variability," Simmons says. "A robot has a fairly limited range of things it can react to. The traditional view of interaction is turn-taking--I do something, and then you do something. But that's not really an accurate model of how people interact. It's more like a dance--we're constantly changing our interactions based on the feedback we receive."
Take the simple act of telling a joke, Simmons says. If the person telling a joke senses through non-verbal cues--an arched eyebrow, a disgusted or puzzled expression--that her listener is offended or doesn't understand, she can adjust the tale or stop altogether. "Gaze, gesture, posture are all incredibly important," Simmons says. And if something unexpected interrupts a conversation--a fire alarm, a scream, one of the participants suddenly fainting--humans would understand what to do, but a robot designed to "take turns" could be stymied.
Simmons offers the example of the automated checkouts now common in supermarkets and discount stores--they prompt users to perform specific tasks, such as moving their purchases, even if the shopper has already done so.
Having a robot that can offer directions and guidance "in a way that doesn't annoy people is very important," Simmons says. Robots also need to be able to signal when they don't understand a task--either by saying "I don't understand," or by some non-verbal cue, such as a tilt of their "head."
Simmons was the lead developer of the Carnegie Mellon "roboceptionists" who have greeted visitors to Newell-Simon Hall since 2004. Though the "roboception" desk is currently occupied by a robot named "Tank," the first roboceptionist was the much-chattier "Valerie." Both were developed in cooperation with CMU's College of Fine Arts.
Experiences with Valerie and Tank gave researchers valuable insight both into human-robot interaction and into the creation of experiments involving social robots, Simmons says. "We had certain ideas about what social interaction with a robot would be like," he says, but their experiments quickly hit the limitations of a "receptionist" framework.
For instance, the developers expected that visitors would spend one-on-one time with the roboceptionist. Instead, they tended to approach the roboceptionist in groups of two or three. "But the robot doesn't understand group interaction," Simmons says. "If you ask it to tell you its name, it will tell you, but if another person in the group asks, it will say, 'I already told you my name.'" Valerie and Tank have no way of knowing that a different visitor was "talking."
The roboceptionists also lacked personalization. Though someone might pass the roboceptionist every day, the robots have no way of recognizing her, and treat her as if she's visiting CMU for the very first time. "That's almost the exact opposite of human-human social interaction," Simmons laments.
And tasks performed by the roboceptionist lacked scope--a visitor asks a question, and Tank provides an answer. Then the visitor moves on. That doesn't provide much time for roboticists to study the interaction process.
. . .
Simmons' new Gamebot project will incorporate several features not available in the roboceptionist. Gamebot will recognize and remember players' faces and will engage them in a significantly more difficult task--playing the word game Scrabble--that enables the roboticists to better study group dynamics.
"We chose Scrabble because it encourages multiple people to play, and it's a fairly long game, but you can leave Scrabble at any time, so you can play as much or as little as you want," he says.
Gamebot will keep statistics on when and how often people play, adjust its own gameplay to accommodate a specific user, and will recognize patterns--including if a player doesn't visit for a length of time. "The idea is that if we can personalize the interaction, will that make a difference in how people interact with the robot?" Simmons says. "Do people who get personal treatment tend to come back more than people who don't?"
The success of a social robot such as Gamebot can be evaluated in part by its ability to build, maintain and expand its relationship with humans over a period of time. But few social robots will need to lean on those relationship skills as much as the CoBots (short for "Companion Robots") currently under development in Veloso's lab. Veloso, president-elect of the Association for the Advancement of Artificial Intelligence, says CoBots are designed to create a "symbiotic relationship" between humans and robots--while the robots will help humans, humans also will have to help the robots, which will be unable to complete certain tasks without assistance.
"I think one of the key ingredients for a social robot to be successful is to demystify it," she says. "As CoBot moves in the world, if it cannot perform a task, it asks for help."
Both CoBot 1, which was activated in 2009, and CoBot 2, which came online last year, are going to need winning personalities, because there's nothing overtly cute or vaguely anthropomorphic about either one. ("To me, it's a robot," Veloso says. "I'm not trying to make it nice and pretty--I'm trying to make it functional.") Built on wheeled omnidirectional platforms designed by Mike Licitra of CMU's National Robotics Engineering Center, the two CoBots don't have faces or arms. They can receive commands verbally or from a keyboard.
"CoBot can't lift objects, and it can't press buttons," Veloso says. That's a challenge for a robot that's supposed to deliver mail and other items, and escort visitors from place to place, but it's a limitation that Veloso has embraced. CoBot provides opportunities to learn how social robots should approach humans for assistance, and also to determine how social robots should fail at tasks--for instance, how to behave when they can't understand a request.
. . .
The CoBots also will have a wide area to experience human interaction. Unlike Gamebot, which is stationary, or HERB, which is designed to operate within a house or apartment, the CoBots are supposed to roam throughout the Gates and Hillman Centers and eventually around the Carnegie Mellon campus. CoBot 1 navigates by calculating its proximity to the wireless network antennas that are common sights on campus; CoBot 2 finds its way using Hagisonic's StarGazer robotic navigation system, which requires placement of a series of adhesive dots along hallways and other passages.
"We've tried to enumerate different tasks that could capture different problems that need to be solved--scheduling, navigation, identifying visitors," says Veloso, who is planning to create a total of 10 CoBots over the next five years. Users will be able to request certain tasks--escorting a visitor from place to place, fetching a parcel--through a web interface. Ultimately, the CoBots also will be able to communicate with one another to divide tasks.
Veloso says it "doesn't make sense" to develop just one CoBot. "We're not in the business of interacting with one person at a time, but with many people," she says. "So we should have many CoBots."
Future areas for exploration include developing the ability of CoBot to adapt to unexpected input, Veloso says. "Right now, if it's moving down the corridor and you say, 'Hello, CoBot,' it ignores you," she says. "In the future it's got to respond to spontaneous interaction." And it remains to be seen how people will react when the sight of an autonomous robot in the hallways is no longer a novelty, but an everyday occurrence. "Inevitably, as it's moving around and talking, will people get annoyed and yell at it, or will they be happy at the sound of its voice?" Veloso says. "All of these problems are things we need to study. These are the kinds of questions that interest me."
It may have taken a lot longer than 1950s futurists imagined, but social robots are likely to become ubiquitous in people's lives, says Simmons, especially in providing assistance to the elderly or disabled. "They will become the most complicated technology that people will interact with, and they'll be operated by novices--people who don't have training in robotics," he says. "And my feeling is that we can either make the technology something that's easy to learn, or we can make it something they're familiar with and don't have to learn."
Robots suitable for home use are still limited in their ability to navigate autonomously and manipulate objects, Simmons says, but those capabilities are steadily improving, and researchers need to be pushing the development of social interfaces at the same rate. "My hope is that when the manipulation and mobility technologies are ready, the social interfaces will be ready," he says.
Jason Togyer | 412-268-8721 | jt3y [atsymbol] cs.cmu.edu