Formal Verification of Robotic Systems

Reid Simmons, Edmund Clarke, Michael Lowry*, Bwolen Yang

Problem:

Many autonomous systems need to be extremely reliable, such as those that interact with people, operate in critical applications, or are inaccessible once deployed (such as space vehicles). Developers would like to ensure that their systems are safe before fielding them. We are working on methods that use symbolic model checking to formally verify critical properties of autonomous systems. The basic idea is to develop techniques that can be used by people who, though they may be experts in autonomous systems, are unfamiliar with formal verification. We want to enable them to use formal verification as part of the normal iterative design and development cycle.

Impact:

For many safety-critical autonomous systems, validation and verification are the most expensive parts of the development process. The potential impact of being able to detect errors early in the development cycle is therefore tremendous. Such techniques should also improve the acceptance of autonomous systems: currently, many people feel that autonomous systems are inherently unreliable because they are too complex to be bug-free. While not a panacea, formal verification can help alleviate many of these fears.

State of the Art:

Currently, autonomous systems are validated only after they are implemented. Even then, only a small subset of their possible behaviors can be validated, given the difficulty of testing such large, complex systems. Developers can never be sure whether the tests performed cover all the critical situations in the domain, or whether catastrophic failure modes remain.

Approach:

The SMV model checker, developed at Carnegie Mellon [1], represents systems using a formalism based on concurrent, communicating transition systems. It uses these representations to verify specifications written in Computation Tree Logic (CTL), a branching-time temporal logic. A specification can state, for instance, that some condition never holds, that it sometimes holds, or that it holds at some (or all) points in time after another condition holds.
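
As a concrete illustration, the following is a minimal sketch in the SMV input language (the module, variables, and properties are ours for illustration, not drawn from any of the systems described here). It models a trivial two-state server and checks three CTL specifications of the kinds mentioned above:

    MODULE main
    VAR
      request : boolean;
      state   : {ready, busy};
    ASSIGN
      init(state) := ready;
      next(state) := case
        state = ready & request : busy;   -- a request moves the server to busy
        1 : {ready, busy};                -- otherwise, choose nondeterministically
      esac;

    -- "sometimes holds": the busy state is reachable
    SPEC EF state = busy
    -- "always possible": from every reachable state, ready can be reached again
    SPEC AG EF state = ready
    -- "after a request, the server is eventually busy"
    SPEC AG (request -> AF state = busy)

SMV explores the (symbolically represented) state space exhaustively; if a specification fails, it returns a counterexample trace showing a sequence of states that violates the property.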

Our basic approach is to develop algorithms that, on the one hand, automatically translate from the languages used to implement autonomous systems into the SMV modeling language and, on the other hand, create explanations, in the terms of the original implementation language, for any counterexamples to the specifications found by the model checker. The work involves creating formal models of the implementation languages; writing translators between the implementation languages and SMV; developing ways to create useful explanations from the model-checking results; determining how to enable developers to easily write specifications for verifying autonomous systems; and extending SMV to handle the large and complex models created by this process.

We are applying these ideas to two languages. TDL [3] is used to encode task-level control strategies (see the research summary on ``Languages for Executive Task Level Control''). A TDL program is modeled as a set of asynchronous, hierarchical finite state machines with synchronization constraints between them. SMV is used to check properties of the control programs such as liveness, safety, and the absence of resource conflicts; a sketch of such a model appears below. Livingstone [4] is a language for model-based diagnosis, used for autonomous spacecraft and robots (see the research summary on ``Model Based Monitoring and Diagnosis for Mobile Robots''). A Livingstone model is represented as a hierarchy of parameterized, concurrent transition systems, and SMV is used to check properties such as model consistency and state reachability. Translators to SMV have been written for both languages, and experiments are being performed to determine the utility and tractability of this approach to system verification.
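
To give the flavor of the TDL translation, here is a hand-written sketch of how a serial ordering constraint between two tasks might look in SMV, using SMV's process keyword for asynchronous composition (the task model, names, and specifications are ours for illustration; the actual translator output is considerably more detailed):

    MODULE task(start)
    VAR
      state : {waiting, running, done};
    ASSIGN
      init(state) := waiting;
      next(state) := case
        state = waiting & start : running;
        state = running : {running, done};   -- the task takes some number of steps
        1 : state;                            -- done is absorbing
      esac;

    MODULE main
    VAR
      a : process task(1);                 -- task a may start immediately
      b : process task(a.state = done);    -- serial constraint: b starts only after a completes
    -- safety: the ordering constraint is respected
    SPEC AG (b.state = running -> a.state = done)
    -- reachability: both tasks can complete
    SPEC EF (a.state = done & b.state = done)

A reachability check of the same form as the second specification (EF over a state predicate) is also the basic query used when verifying Livingstone models.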

Future Work:

We still need to address the problem of creating useful explanations from the SMV counterexamples. Currently, one must analyze the SMV output directly and map, by hand, from the counterexamples back to the original implementation. The formal models for both TDL and Livingstone need to be extended: in the case of TDL, to model the language more completely (e.g., exception handlers and iteration); in the case of Livingstone, to better match the actual behavior of the Livingstone inference engine. Finally, we need to show that this approach actually leads to more reliable systems that are easier to develop and debug.

Bibliography

[1] Edmund Clarke, David Long, and Kenneth McMillan. A language for compositional specification and verification of finite state hardware controllers. Proceedings of the IEEE, 79(9):1283-1293, 1991.

[2] Michael Lowry, Klaus Havelund, and John Penix. Verification and validation of AI systems that control deep-space spacecraft. In Z.W. Ras and A. Skowron, editors, Proceedings of the Tenth International Symposium on Methodologies for Intelligent Systems, Lecture Notes in Artificial Intelligence, Vol. 1325. Springer-Verlag, October 1997.

[3] Reid Simmons and David Apfelbaum. A task description language for robot control. In Proceedings of the International Conference on Intelligent Robots and Systems, Vancouver, Canada, October 1998.

[4] Brian C. Williams and P. Pandurang Nayak. A model-based approach to reactive self-configuring systems. In Proceedings of the National Conference on Artificial Intelligence, Portland, OR, August 1996.

[5] Bwolen Yang, Reid Simmons, Randal Bryant, and David O'Hallaron. Optimizing symbolic model checking for invariant-rich models. Submitted to CAV '99, 1999.

Footnotes

* NASA Ames Research Center, Moffett Field, CA