User Interaction
MINERVA: Carnegie Mellon's Robotic Tourguide Project

Interaction with Humans

Minerva, like any tour guide, must engage her audience through convincing interaction. To this end, we have given her several visual and audio features that can be controlled internally, and we have extended the localization module to find the positions of people around the robot. For the task of guiding people through the museum, interaction serves two distinct purposes. The first is to attract visitors to the robot and to maintain their interest in the tour. The second is to make participants aware of the robot's intended direction of travel, which facilitates navigation in crowded areas.

[Image: A whacked-out museum visitor.]

Minerva's face consists of a movable mouth and eyebrows and two video-camera eyes, mounted on a pan/tilt neck. She can say several phrases in addition to the standard tour speech. Control over these features is initially pre-programmed, but we will use the opportunity to interact with thousands of people to explore learning interactive behavior online. Such learning would seek to maximize the number of people in the vicinity of the robot while keeping visitors from getting too close and from blocking movement completely.
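The learning objective described above can be sketched as a reward function over the positions of nearby people. Everything below — the class, the distance thresholds, and the penalty weights — is an illustrative assumption, not a parameter taken from the project:

```python
from dataclasses import dataclass
from math import hypot

@dataclass
class Person:
    x: float  # position relative to the robot, in meters (assumed frame)
    y: float

def interaction_reward(people, near_radius=3.0, too_close=0.5,
                       block_radius=1.0, max_blockers=3):
    """Score a crowd configuration: reward people in the robot's vicinity,
    penalize visitors who come too close, and penalize heavily when so many
    people crowd the robot that movement is blocked. All thresholds are
    hypothetical values chosen for illustration."""
    distances = [hypot(p.x, p.y) for p in people]
    nearby = sum(1 for d in distances if d <= near_radius)
    crowding = sum(1 for d in distances if d < too_close)
    blockers = sum(1 for d in distances if d < block_radius)
    reward = float(nearby)          # attract as many people as possible
    reward -= 2.0 * crowding        # discourage visitors from getting too close
    if blockers >= max_blockers:    # movement blocked completely
        reward -= 5.0
    return reward
```

An online learner would adjust Minerva's facial expressions and sounds to drive this score upward; the sketch only captures the trade-off the text states, not any learning algorithm the project actually used.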

Minerva can also identify when people are looking at her; a gallery of the last 10 people she has seen is available on the web page.
