8 Related Work

Plan generation has received considerable attention, but the resulting plans are rarely used to control and monitor execution, and even more rarely are plans monitored that involve the activity of hundreds of agents requiring tight coordination. Previous work on execution monitoring has focused on models where the executor performs the planned actions (e.g., a robot controller) and usually has direct access to internal state information. In the SUO domain, most actions are performed by external agents, usually humans, and the monitor has no access to the internal state of its executing agents. Such indirect execution requires different monitoring techniques: the executor must use incoming messages to determine the status of agents and activities and whether actions have been initiated or completed. The Continuous Planning and Execution Framework [30] has addressed the indirect execution problem, and our system builds on its ideas. However, our domain requires monitoring many more constraints with greater time sensitivity, involves much higher rates of incoming data, and requires customized monitoring of each action to generate appropriate, high-value alerts.
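
To make the report-based approach concrete, the following is a minimal sketch of indirect monitoring in Python; the Report fields, status values, and class names are our illustrative assumptions, not the EA's actual interfaces. The monitor never inspects an agent's internal state: incoming messages are its only evidence that actions have been initiated or completed.

from dataclasses import dataclass
from typing import Dict, Tuple

@dataclass
class Report:
    agent: str
    action: str
    status: str  # e.g., "initiated", "completed", "blocked" (assumed values)

class IndirectMonitor:
    """Tracks action status from reports alone, never from internal state."""
    def __init__(self) -> None:
        self._status: Dict[Tuple[str, str], str] = {}

    def on_report(self, report: Report) -> None:
        # Incoming messages are the only evidence of execution progress.
        self._status[(report.agent, report.action)] = report.status

    def status(self, agent: str, action: str) -> str:
        # Absent a report, the monitor cannot know whether the action started.
        return self._status.get((agent, action), "unknown")

monitor = IndirectMonitor()
monitor.on_report(Report("2nd-platoon", "move-to-objective", "initiated"))
print(monitor.status("2nd-platoon", "move-to-objective"))  # initiated
print(monitor.status("2nd-platoon", "secure-bridge"))      # unknown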

Robot designers have often avoided the plan representations used by the AI plan-generation community because of their restrictive assumptions [35,1]. Both our domains required an expressive plan representation; our combination of the Act formalism with a hierarchical, object-oriented mission model met this requirement, providing a rich set of goal modalities for encoding activity, including achievement, maintenance, testing, conclusion, and waiting.

The SAM system [23] at ISI addresses a similar problem: automated pilot agents on a battlefield. SAM has direct access to its local automated agent and much lower incoming data rates than the EA, and it addresses the difficult problem of recognizing the plans of other friendly agents. Because humans are not involved, SAM need not produce alerts tailored to human cognitive capabilities. Experiments with SAM showed that distributed monitoring outperformed centralized monitoring while using simpler algorithms; our EAs and SAIM build on these insights with a similar distributed design.

More recent work at ISI has produced a monitoring agent named OVERSEER [22], which also addresses a problem similar to ours: many geographically distributed team members executing a coordinating plan in a dynamic environment. OVERSEER also addresses the problem of modeling the value of information to the user. It does not use the report-based monitoring approach adopted by our EAs, because it must rely on unmodifiable legacy agents and lacks sufficient communication bandwidth and reliability; a detailed analysis is given in Section 3.

NASA's Remote Agent on Deep Space One [21,27] performs autonomous execution monitoring on a spacecraft. Our domains share many of NASA's requirements, including the core requirements of concurrent temporal processes and interacting recoveries. However, the Remote Agent is fully automated, which places a heavier burden on the module that generates plans and responses, but removes the burden of addressing human-interaction issues such as those considered in VOA. Its monitoring algorithms are not described in detail, but are based on a procedural executive, which we assume is similar to our procedural reactive control system. In NASA's domain, the ``agents'' are mechanical devices onboard the spacecraft whose behaviors have been formally modeled. Our agents include humans, whose behaviors are not easily modeled, so our EAs estimate the value of alerts as they interact with a human decision maker, who is ultimately responsible for control decisions.

Work on rationale-based monitoring [35,41] addresses the problem of monitoring the world during plan generation (in causal-link planners) to determine whether events invalidate the plan being generated. These systems monitor subgoals, preconditions, usability conditions, and user preferences. Our framework monitors all of these when plans are executed, and our EAs have additional capabilities, such as monitoring policy constraints and applying mission-specific monitoring methods. The rationale-based work does not address time-critical monitoring during execution, monitoring large volumes of incoming data, or the problem of alerting users without overwhelming them.
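
As a rough illustration of what such rationale monitoring involves, the sketch below checks whether observed world changes falsify the conditions supporting causal links in a partial plan; the data structures and condition names are hypothetical, not those of the cited systems.

from dataclasses import dataclass
from typing import Dict, Set

@dataclass(frozen=True)
class CausalLink:
    producer: str   # step that establishes the condition
    condition: str  # e.g., "bridge-intact" (hypothetical condition name)
    consumer: str   # step whose precondition the link supports

def invalidated_links(links: Set[CausalLink],
                      world_events: Dict[str, bool]) -> Set[CausalLink]:
    """Return links whose supporting condition an observed event made false."""
    return {link for link in links
            if world_events.get(link.condition, True) is False}

links = {CausalLink("secure-bridge", "bridge-intact", "cross-river")}
# An incoming event reports the bridge destroyed: the plan rationale breaks.
print(invalidated_links(links, {"bridge-intact": False}))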

Doyle [9] describes a technique for focusing the user's attention on anomalous system behavior, particularly sensor behavior; it would be applicable within the lowest layer of our robotics control module. The technique uses causal modeling to characterize the ``normal'' behavior of a sensor, and detects anomalies with measures of causal distance and distance from normal behavior. These distance measures are not related to the plan and its goals and actions; they measure deviation from typical behavior, so the user must still relate a reported sensor anomaly to its higher-level effects, such as a threat to plan or action execution. This work provides a monitoring technique for specific sensor and system types that could easily be incorporated into our monitoring framework, where the resulting anomaly detection might produce low-level alerts or contribute to the reasoning process for higher-level alert classes.
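
A minimal sketch of the distance-from-normal idea, omitting Doyle's causal modeling: a running statistical model of a sensor's typical readings yields an anomaly score, which a higher layer must still tie to plan-level effects. The model (a running z-score via Welford's method) and the threshold are our simplifications, not Doyle's method.

import math

class SensorModel:
    """Running estimate of a sensor's normal readings (Welford's method)."""
    def __init__(self) -> None:
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, x: float) -> None:
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def distance_from_normal(self, x: float) -> float:
        if self.n < 2:
            return 0.0
        std = math.sqrt(self.m2 / (self.n - 1))
        return abs(x - self.mean) / std if std > 0 else 0.0

model = SensorModel()
for reading in [20.1, 19.8, 20.3, 20.0, 19.9]:
    model.update(reading)
# A reading far from typical behavior yields a large distance (anomaly);
# relating it to plan-level effects remains the job of higher layers.
print(model.distance_from_normal(25.0) > 3.0)  # True -> low-level alert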

The Phoenix system uses the concept of a plan envelope [18] to represent a priori expectations of an action's progress. Envelopes are used when an action executes over time and can be interrupted and altered during execution; the envelope captures the range of possible performance of an action during successful execution. As the action executes, the actual performance of the system is recorded, and a deviation from the predefined envelope signals a possible failure. This concept provides a useful monitoring technique for specific alert types, particularly those concerning actions that consume a variable amount of resources over time. Envelopes can also identify when an action is performing better than required, permitting opportunistic alerts. Envelopes could easily be incorporated into our monitoring framework as an additional monitoring technique, and could be useful at the higher levels in both our domains.
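
The envelope idea is straightforward to sketch. Assuming a scalar progress measure and time-indexed bounds, both our simplifications of Phoenix's envelopes, monitoring reduces to comparing observed progress against the bounds:

from typing import Callable

class PlanEnvelope:
    """A priori bounds on an action's progress during successful execution."""
    def __init__(self, lower: Callable[[float], float],
                 upper: Callable[[float], float]) -> None:
        self.lower = lower  # slowest acceptable progress at time t
        self.upper = upper  # fastest expected progress at time t

    def check(self, t: float, progress: float) -> str:
        if progress < self.lower(t):
            return "possible-failure"  # deviation below the envelope
        if progress > self.upper(t):
            return "opportunity"       # performing better than required
        return "nominal"

# Hypothetical example: a movement expected to cover 8-12 km per hour.
envelope = PlanEnvelope(lower=lambda t: 8.0 * t, upper=lambda t: 12.0 * t)
print(envelope.check(2.0, 15.0))  # possible-failure: below 16 km at hour 2
print(envelope.check(2.0, 20.0))  # nominal: within [16, 24]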

The SUO EA provides a capability that does not currently exist, because there is no machine-understandable representation of the plan on the battlefield. Currently, small-unit warfighters must monitor all incoming information for relevance and manually notify other team members. The SUO EA also improves on next-generation Army systems such as FBCB2 (Force XXI Battle Command Brigade and Below) [14]. Unlike FBCB2, the EA alerts only on important changes, can automatically update the areas to be monitored as the plan is executed, can dynamically change the force structure, and can alert the user to many issues that other systems do not monitor, such as fratricide risks, triggering of contingencies, and schedule, coordination, and positional deviations from the plan.

