
AAAI'93 - Analysis of Xavier's Performance


Xavier is built on an RWI B24 base. The base is a 24" diameter, four-wheeled, synchro-drive mechanism. Sensors include bump panels, a Denning sonar ring, a Nomadics laser light striper, and a color camera mounted on a Directed Perception pan/tilt head. On-board computation consists of two 66 MHz Intel 486 computers, connected to each other via Ethernet and connected to the outside world via a 1 Mbit radio Ethernet. Xavier has a distributed, concurrent software system that runs under the Mach operating system. All programming is done in C, and processes communicate and are sequenced and synchronized via the Task Control Architecture.

Communication with Xavier is mainly speech-driven. An off-board NeXT computer runs the Sphinx real-time, speaker-independent speech recognition system, and a text-to-speech board provides speech generation. Thus, we can give verbal commands to the robot, and the robot can respond verbally to indicate its status. In addition, a graphical user interface is available for giving commands and monitoring the robot's status.

For the "box rearrangement" task we built a custom arm. It had a large V-shaped end effector, mounted with roller bearings to guide the box to the center, where two electromagnets hooked onto metal plates mounted on the corners of the boxes. The arm enabled Xavier to lift the boxes over its "head", simplifying planning of subsequent movements. Xavier was the only robot in the competition that actually picked up the boxes, and this turned out to be both a reliable means of moving boxes and a great crowd pleaser.

Event I: Escape From Office

For this event, the strategy employed was to use the first minute to follow the walls of the office, using dead-reckoning to compute the extent of the room. After a minute, Xavier headed to the center of the room and began scanning the walls for the three markers that denoted doors. Once it had found all three markers, it repeatedly looked in the direction of each one. If one of the markers disappeared from view, Xavier assumed that this indicated an open door and headed in the direction where it had last seen the marker (if it never saw one of the markers, as occurred in the actual competition, Xavier headed toward the middle of the wall that didn't have a marker). Once outside the bounds of the room, Xavier used its sonars and laser, together with a potential-field approach, to navigate around the obstacles to the finish line. It used a combination of dead-reckoning and sensor interpretation to determine when it had actually reached the end of the arena.

In the preliminary heats, Xavier posted one of the fastest times. Its vision-based strategy worked flawlessly, and its local navigation routines enabled it to skirt obstacles while maintaining good forward progress. We actually could have gone quite a bit faster outside the office, but maintained a conservative speed to avoid hitting any obstacles. Xavier fared less well in the finals. It got stuck in a corner for most of the first minute, and so did not have a chance to circumnavigate the room before heading to the center to begin visually scanning for markers. As a result, it thought the room was smaller than it really was. The root of the problem was that, wanting to eliminate spurious marker sightings, we had programmed Xavier to ignore perceived markers that were far outside its model of the room. Xavier therefore mistakenly concluded that the wrong door was open (having discarded a marker it had actually recognized), and headed for that "door". Finding it closed, the robot headed back and looked for markers again. This cycle repeated until time expired.

In retrospect, we should have trusted our vision more and explicitly encoded the probable office dimensions (as many of the other contestants did). In any case, alternative strategies to avoid looping behavior would have been helpful, to say the least.

Event II: Office Delivery

Speech recognition was used to input the map at the time of the competition. We described, in natural language, the sizes and locations of the rooms and corridors. Xavier acknowledged verbally that it had understood, and displayed the map graphically. Speech input was also used to indicate the quadrant where the coffeepot would be found and where to deliver it.

As with most of the entrants, Xavier started by trying to localize itself. Xavier went forward until it found a wall, then followed walls until it detected it was in a corridor (indicated by the sonar signature of two straight, parallel sides). Once in a corridor, the robot navigated in the direction of the corridor, turning only when it found the end of a corridor. While navigating, Xavier analyzed the sonar readings for evidence of doorways. To compensate for noise in the sonars, evidence was accumulated using Bayesian updating until the robot had enough confidence that a door was actually present. At that point, the robot would stop in front of the doorway, pan the camera toward the sides of the doorway, and look for the perceptual markers.

Once a marker was found (along with the corresponding bar code), the robot had localized itself in the map. It was then to plan a path to one of the rooms in the quadrant containing the coffeepot and navigate through the corridors to that room. The corridor navigation used a combination of dead-reckoning and feature detection to find junctions and doors that corresponded to locations on the map. Once in a room, Xavier would visually search for the coffeepot. If it was not found, Xavier would go to and search another room. When the coffeepot was found, it would navigate to the delivery room.

In the competition, Xavier fairly quickly found its way out of the office and into the corridor. It wandered a bit, however, before finally finding a door with a marker. Its corridor navigation was fairly robust, but its "door detector" produced many false positives, meaning that the robot stopped numerous times unnecessarily to look for markers. We later determined that this was probably caused by specular reflections from the metal poles used in building the corridors. After Xavier finally localized itself in the map, a software bug (which had never shown up during testing) caused the path planner to crash, ending the run. We restarted Xavier in the corridor near the marker, and it quickly found the marker a second time, but the same bug reappeared, ending our chances.

Event III: Box Rearrangement

In this event, we outfitted Xavier with its custom-built arm, mounted on the rear of the robot, and outfitted the boxes with metal plates attached to each corner. Speech was used to input the desired configuration of the boxes. Xavier began by using visual search to find boxes with marks on them, which also gave it an estimate of the distance and direction to each box. The local potential-field navigation was used to get near the box. Turning to face the box, Xavier analyzed its laser readings to find the edges of the box, and once oriented near the edge of the box, it again used the laser to find the distance and direction to the box's corner. It repeated this procedure until it had lined up with the box, then turned around, lowered the arm, backed up into the box, energized the electromagnets to grasp the box, and then lifted it up and over the "head" of the robot.

The robot was then to navigate near to the partition and place the box down to help form the desired pattern. It would then wander off for several seconds and, once in a new location, visually search for another box to acquire. In the competition, the box detection and acquisition strategy worked quite well. In the approximately fifteen minutes Xavier was allotted, it successfully picked up three boxes (four were required to complete the given shape). It showed less skill, however, in depositing the boxes near the partition. The first box it did not get close enough, putting the box down far from the partition. The second box it put right on top of the partition (to the immense delight of the crowd). For the third box, a software bug ended the demonstration before Xavier could move the box to the correct position. The first two problems were attributable to dead-reckoning uncertainty and to overly strong expectations about where the partition was located.


The robot hardware was not completed until June 1993. Thus, much of the software development was done using a simulator. While that is better than nothing, we all know what simulation is worth when you are trying to act in the real world. Also, we did not have adequate opportunity to characterize and calibrate sensors and dead-reckoning. This hurt us, particularly in the office delivery event (even though we came in second there!).

Overall, the types and mix of sensors on Xavier worked to our advantage. We made reasonably good use of multiple modalities. In particular, we were one of only a handful of competitors that used vision. This made us very competitive in locating objects, as opposed to navigating and finding them via sonar or laser. In general, the vision system was quite reliable (in fact, we got into trouble during the first event by not trusting vision enough). It is fairly slow, however, taking 4-5 seconds to process an image.

Local navigation and obstacle avoidance worked very well. We only nicked a wall once (during Event II). Xavier, however, does not move all that fast: we felt that 30-40 cm/sec was the limit at which we trusted the obstacle avoidance software. Obvious improvements in computation and algorithms (cf. the University of Michigan entry) should enable us to double our speed.

Both a strength and weakness of our approach was our use of plans. Other entrants were much more reactive (behavior-based). Our use of plans put us at an advantage when the robot needed to perform a complex series of actions (such as in the office delivery event). It was a disadvantage, however, when things went wrong (such as in the finals of the first event). In such cases, the fact that Xavier maintained rather strong expectations about the state of the world was a hindrance rather than an advantage. The anticipated solution (for next year!) is to use the monitoring and exception handling features of TCA to detect when expectations are not being met and to patch plans or replan when needed. We believe that this will lead to a much more robust and reliable system.

Author: reids+@cs.cmu.edu
Last Updated: 15Jun94 18:00 josullvn+@cs.cmu.edu