Model Based Monitoring and Diagnosis for Mobile Robots

Reid Simmons, Joaquin Fernandez¹, Keith Golden², Leo Joskowicz³, Martha Pollack⁴

Problem:

One of the greatest challenges in designing autonomous mobile robot systems is dealing with contingencies: unexpected situations to which the robot must react. In particular, it is very difficult for a developer to anticipate all possible contingencies in advance, much less decide how to detect and handle them. The difficulty is exacerbated by uncertainty in the robot's perception of the world. We are investigating various techniques for modeling and monitoring contingencies. The ultimate goal is to have the robot choose for itself what to monitor and how to react, based on explicit models of the robot's sensors, behaviors, goals and environment.

Impact:

To be truly autonomous, robot systems need to detect contingencies on their own. If a contingency goes undetected, the robot may become damaged, or harm others, or, at best, cycle indefinitely trying to achieve its goals. Model-based monitoring will enable autonomous robots to handle a wider variety of contingencies, with greater confidence that unanticipated contingencies can be detected.

State of the Art:

Most current autonomous systems have hand-coded strategies for monitoring plan execution. They often do not fail gracefully when confronted by situations not explicitly anticipated by the designers. Designers generally do not know how extensively the existing monitors cover the space of contingencies, whether monitoring resources are being used efficiently, or whether there is ambiguity in what the monitors report.

Approach:

Model-based reasoning uses explicit models of the system and its dynamics, along with an inference engine, to deduce how the system behaves in various situations. For instance, given a model of the robot's actuators and sensors, one could deduce what the robot will perceive when it moves forward in a given environment. The underlying presumption is that it is easier (and less error prone) to provide models and a general-purpose reasoning mechanism than it is to explicitly enumerate all possible cases.
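
To make this concrete, the following sketch (our own illustration with assumed numbers, not code from any of the systems described below) pairs a one-line actuator model with a one-line range-sensor model and flags a contingency whenever the observed reading disagrees with the prediction:

    # Minimal model-based prediction sketch; the wall distance and the
    # sensor tolerance are assumed values chosen for illustration.
    WALL_DISTANCE = 5.0      # assumed distance (m) from the start position to a wall
    SENSOR_TOLERANCE = 0.3   # assumed bound (m) on range-sensor noise

    def predicted_position(position, commanded_step):
        """Actuator model: the robot moves exactly as commanded."""
        return position + commanded_step

    def predicted_range(position):
        """Sensor model: an ideal range finder pointed at the wall."""
        return max(WALL_DISTANCE - position, 0.0)

    def consistent(position, commanded_step, observed_range):
        """True if the observation matches what the models predict."""
        expected = predicted_range(predicted_position(position, commanded_step))
        return abs(expected - observed_range) <= SENSOR_TOLERANCE

    # From 1.0 m, a 0.5 m step should leave the wall 3.5 m away. A reading of
    # 2.0 m signals a contingency (slippage, an unmodeled obstacle, or a
    # failed sensor) without enumerating those cases in advance.
    print(consistent(1.0, 0.5, 3.5))   # True
    print(consistent(1.0, 0.5, 2.0))   # False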

We are pursuing several different technical approaches to the problem of monitoring and diagnosis for mobile robots. In work with Joaquin Fernandez, we are using a hierarchy of coarse-to-fine monitors [1]. The coarse monitors provide good coverage but poor resolution (e.g., a triggered timeout monitor indicates that something went wrong, but not what), while the more detailed monitors provide better diagnostic capabilities. By modeling the problem as a partially observable Markov decision process (POMDP), we can cleanly integrate the reports of the various monitors to help diagnose the root cause of a problem. The same Markov model is also used to choose recovery actions with high expected utility. This work has been demonstrated on the Xavier robot.
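
The sketch below (a simplification with invented fault hypotheses, monitor likelihoods, and utilities, not the Xavier code) illustrates the core idea: a Bayesian belief update over possible faults combines the reports of coarse and fine monitors, and the recovery action chosen is the one with the highest expected utility under that belief:

    FAULTS = ["nominal", "stuck_wheel", "blocked_path", "localization_lost"]

    # P(monitor fires | fault): the coarse timeout monitor covers everything
    # with low resolution; the finer monitors discriminate among the faults.
    LIKELIHOOD = {
        "timeout":        {"nominal": 0.01, "stuck_wheel": 0.90, "blocked_path": 0.90, "localization_lost": 0.70},
        "wheel_encoder":  {"nominal": 0.01, "stuck_wheel": 0.95, "blocked_path": 0.05, "localization_lost": 0.05},
        "sonar_obstacle": {"nominal": 0.05, "stuck_wheel": 0.10, "blocked_path": 0.90, "localization_lost": 0.20},
    }

    # U(action, fault): payoff of each recovery action under each hypothesis.
    UTILITY = {
        "retry":      {"nominal": 5, "stuck_wheel": -5, "blocked_path": -2, "localization_lost": -3},
        "replan":     {"nominal": 0, "stuck_wheel": -2, "blocked_path": 8, "localization_lost": 1},
        "relocalize": {"nominal": -1, "stuck_wheel": -2, "blocked_path": 0, "localization_lost": 9},
    }

    def update_belief(belief, fired):
        """Bayes update of P(fault) given which monitors fired."""
        posterior = {}
        for fault, prior in belief.items():
            likelihood = 1.0
            for monitor, table in LIKELIHOOD.items():
                p = table[fault]
                likelihood *= p if monitor in fired else (1.0 - p)
            posterior[fault] = prior * likelihood
        total = sum(posterior.values())
        return {fault: p / total for fault, p in posterior.items()}

    def best_recovery(belief):
        """Pick the recovery action with the highest expected utility."""
        return max(UTILITY, key=lambda a: sum(belief[f] * UTILITY[a][f] for f in FAULTS))

    belief = {f: 1.0 / len(FAULTS) for f in FAULTS}            # uniform prior
    belief = update_belief(belief, fired={"timeout", "sonar_obstacle"})
    print(best_recovery(belief))   # "replan": the evidence points to a blocked path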

In collaboration with Keith Golden and others at NASA Ames, we are pursuing an approach that uses symbolic models of the system to infer faults in the robot. Livingstone [3] provides a modeling language that can describe both the hardware and the software of a system. Monitors track commands and convert raw sensor data into qualitative values, which Livingstone uses to detect inconsistencies between the model's predictions and the observations. This approach can tolerate sensor failures by exploiting redundant information, and it can be used to synthesize recovery strategies automatically. This work is being demonstrated on several robots, including Xavier and Nomad at CMU and the Marsokhod rover at Ames.
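
The toy diagnoser below conveys the flavor of this approach under strong simplifying assumptions (two hand-written component models, a single-fault assumption, and an invented qualitative vocabulary); it is not the Livingstone engine, which performs this kind of consistency checking over much richer models:

    def motor_ok(command, wheel_motion):
        """Model of a healthy motor: wheels turn exactly when driving is commanded."""
        return wheel_motion == ("turning" if command == "drive" else "still")

    def encoder_ok(wheel_motion, encoder_reading):
        """Model of a healthy encoder: it reports the actual wheel motion."""
        return encoder_reading == wheel_motion

    def diagnose(command, encoder_reading):
        """Return the single-fault candidates consistent with the observations."""
        candidates = []
        for suspect in ("motor", "encoder"):
            # The actual wheel motion is unobserved; look for any value of it
            # that keeps every non-suspect component's model consistent.
            for wheel_motion in ("turning", "still"):
                consistent = True
                if suspect != "motor" and not motor_ok(command, wheel_motion):
                    consistent = False
                if suspect != "encoder" and not encoder_ok(wheel_motion, encoder_reading):
                    consistent = False
                if consistent:
                    candidates.append(suspect)
                    break
        return candidates

    # Commanded to drive, yet the encoder reports no motion: either the motor
    # is broken (the wheels really are still) or the encoder is (they turn).
    print(diagnose("drive", "still"))   # ['motor', 'encoder']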

Many of the contingencies encountered by mobile robots stem from their interactions with the physical world. Many of those interactions are spatial in nature. In joint work with Leo Joskowicz, we are exploring the use of both geometric and topological representations of space to reason about how a mobile robot perceives and interacts with the environment. The goal is to have the robot itself synthesize monitoring strategies that enable it to remain safe (such as not falling down stairs) and active (such as noticing when it has gotten itself into a closet).
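
As a very small illustration of the kind of monitor we have in mind (the map, the hazard annotation, and the one-cell safety margin are all invented for this sketch), the conditions below are read off a geometric map rather than hand-coded for a particular corridor:

    GRID = [
        "#######",
        "#..S..#",   # '#' wall, '.' free space, 'S' a stairwell marked as a hazard
        "#.##..#",
        "#.#...#",
        "#######",
    ]

    def neighbors(r, c):
        return [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]

    def stairs_nearby(r, c):
        """Safety monitor: fire when a stair cell is within one cell of the robot."""
        return any(GRID[nr][nc] == "S" for nr, nc in neighbors(r, c))

    def in_dead_end(r, c):
        """Liveness monitor: fire when the robot's cell has at most one free exit."""
        return sum(1 for nr, nc in neighbors(r, c) if GRID[nr][nc] != "#") <= 1

    print(stairs_nearby(1, 2))   # True: one cell from the stairs, slow down and verify
    print(in_dead_end(3, 3))     # True: a closet-like pocket, time to back out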

Finally, in collaboration with Martha Pollack, we are investigating the use of decision analysis to determine at what times, and how, an agent should monitor in order to maximize its utility. The goal is to develop a general framework, and specific domain-independent strategies, that would enable an automatic planner to add monitors to its plans in an effective and computationally tractable manner.
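
The back-of-the-envelope calculation below (with invented numbers and the simplifying assumption of a perfect monitor) shows the kind of tradeoff such a framework must formalize: a monitor is worth adding to a plan step when its expected value of information exceeds its cost:

    def value_of_monitoring(p_fail, loss_undetected, loss_detected, monitor_cost):
        """Net expected value of adding a (perfect) monitor at one plan step.

        Without the monitor, a failure costs loss_undetected (the robot keeps
        executing a doomed plan); with it, early detection costs only
        loss_detected (roughly the price of replanning).
        """
        value_of_information = p_fail * (loss_undetected - loss_detected)
        return value_of_information - monitor_cost

    # A door that is closed 20% of the time justifies a cheap check.
    print(value_of_monitoring(0.20, 100, 10, 2))   # 16.0 > 0: add the monitor
    # A door that is almost always open does not.
    print(value_of_monitoring(0.01, 100, 10, 2))   # -1.1 < 0: skip the monitor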

Future Work:

Many of the projects described above are in early stages of development (especially the last two). We do not yet have enough experimental evidence to know which techniques work well, and under what circumstances. Clearly, though, there is an opportunity for synergy: for instance, combining the probabilistic method for handling uncertainty with the qualitative model-based reasoning of Livingstone, or adding explicit spatial reasoning to any of the other approaches. Much work remains to be done before autonomous robots can reliably monitor their own behaviors.

Bibliography

[1] Joaquin Lopez Fernandez and Reid Simmons. Robust execution monitoring for navigation plans. In Proceedings of the International Conference on Intelligent Robots and Systems, Vancouver, Canada, October 1998.

[2] Reid Simmons. Becoming increasingly reliable. In Proceedings of the 2nd International Conference on Artificial Intelligence Planning Systems, Chicago, IL, June 1994.

[3] Brian C. Williams and P. Pandurang Nayak. A model-based approach to reactive self-configuring systems. In Proceedings of the National Conference on Artificial Intelligence, Portland, OR, August 1996.

Footnotes

1. Now at University of Vigo, Vigo, Spain
2. NASA Ames Research Center, Moffett Field, CA
3. The Hebrew University, Jerusalem, Israel
4. University of Pittsburgh, Pittsburgh, PA