Explaining Counterexamples: Causal Analysis and Comparison of Transition Sequences


One of the chief advantages of model checking is its ability to produce counterexamples demonstrating that a system does not satisfy a specification. Even with a counterexample in hand, however, considerable human effort may be required to understand which aspect of the source code or design is faulty: the actual cause of the error must be located in the counterexample trace. If the error is one of omission, the relevant absence must be located, which may be even more difficult.

In this proposal, I describe an automated approach to explaining an error in a system. After discovering a single counterexample demonstrating the presence of an error, a model checker can produce other traces that are relevant to the error: similar executions in which the error does not occur (called positives), and similar executions in which the error also occurs (called negatives). By analyzing multiple traces, the causes of the error can be separated from the merely accidental aspects of the original counterexample. In particular, the differences between the positives and the original counterexample are the causes of the error. I have implemented an explicit-state approach to error analysis for safety properties. For my thesis work, I propose to extend this work in a number of ways: (1) develop and implement efficient symbolic algorithms for error explanation, (2) in particular, improve the theoretical and practical basis of the comparison of executions, (3) extend error explanation to more complex types of counterexamples, and (4) apply error explanation to hierarchical systems and systems specified by simulation properties. Finally, I propose to use quantitative measurements and limited user testing to validate the usefulness of error explanation in model checking.
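To make the core idea concrete, here is a minimal sketch (hypothetical names and a deliberately simple distance metric, not the thesis implementation) of comparing a failing trace against positives: find the positive closest to the counterexample, then report the transitions that differ, which are candidate causes of the error.

```python
# Sketch: explain a counterexample by comparison with similar passing traces.
# Transitions are modeled as (state, action, next_state) triples; the distance
# metric here (symmetric set difference of transitions) is a simplification.

def trace_distance(a, b):
    """Number of transitions appearing in exactly one of the two traces."""
    return len(set(a) ^ set(b))

def explain(counterexample, positives):
    """Find the closest positive and return the differing transitions."""
    closest = min(positives, key=lambda p: trace_distance(counterexample, p))
    # Transitions the failing run takes that the closest passing run does not:
    only_in_failure = [t for t in counterexample if t not in set(closest)]
    # Transitions the passing run takes that the failing run omits:
    missing_from_failure = [t for t in closest if t not in set(counterexample)]
    return closest, only_in_failure, missing_from_failure

# Toy example: writing twice without closing leads to an error state.
bad = [("s0", "open", "s1"), ("s1", "write", "s2"), ("s2", "write", "err")]
good1 = [("s0", "open", "s1"), ("s1", "write", "s2"), ("s2", "close", "s3")]
good2 = [("s0", "open", "s1"), ("s1", "close", "s3")]

closest, extra, missing = explain(bad, [good1, good2])
# The differences isolate the faulty step: the second "write" instead of "close".
```

This toy version ignores issues the proposal is concerned with, such as what distance metric best captures "similarity" of executions and how to search for positives symbolically rather than enumerating them.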

Comments & questions to agroce@gmail.com.