

4 Types of Alerts

Alerts focus the user's attention on an aspect of the situation that the execution aid has determined to be of high value. The problem of determining the value of information and of alerts, which governs whether and how an alert is presented, is discussed in later sections. An alert may indicate that a response is required, or it may be purely informative. Many different types of alerts can be given, and it is useful to categorize them, thus providing the beginning of a reusable, domain-independent ontology for execution monitoring.

Figure 1 shows the top-level categories for alerts, which we identified by starting with a superset of the categories we found useful in our two domains and then generalizing them to cover a broad range of domains. It is assumed that execution is directed by a plan that is shared by the team. These categories generally require different monitoring techniques and different responses to detected problems. For example, adversarial activity could have been a subclass of other relevant classes, but it requires different monitoring techniques. In our domains, friendly location data is precise (within the error of GPS) and trustworthy, while adversarial data comes from fusion engines operating on data from sensor networks. The adversarial data is highly uncertain, may arrive at significantly different rates, and generally requires different algorithms for determining the value of information, because adversarial entities are actively trying to thwart the team's plan and perhaps trying to kill its members.
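
The eight categories discussed below can be rendered as a simple enumeration. The following sketch (Python is assumed here and in the sketches that follow; none of this code is taken from the EAs) merely names the categories of Figure 1 so that the later sketches can refer to them:

    from enum import Enum, auto

    class AlertCategory(Enum):
        """Top-level alert categories of Figure 1 (illustrative rendering only)."""
        PLAN_CONSTRAINTS = auto()        # expectations derived from the shared plan
        POLICY_CONSTRAINTS = auto()      # persistent policies and rules of engagement
        NEW_OPPORTUNITIES = auto()       # situation changes that may permit a better plan
        ADVERSARIAL_ACTIVITY = auto()    # detected activity of adversaries
        PROJECTIONS = auto()             # predicted future violations or events
        CONTINGENCY_PLANS = auto()       # triggering conditions of contingency subplans
        SYSTEM_PROBLEMS = auto()         # data streams or the execution assistant itself
        REPORTING_REQUIREMENTS = auto()  # domain-specified information of value to the user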

The top-level categories in our ontology generally differ along the following dimensions that are important to monitoring:

The monitoring techniques for each category are often domain specific, and can even be task specific in some cases, adapting the monitoring as tasks in the plan are executed. Our monitoring framework integrates these various techniques and then uses the concept of the value of an alert to control interaction with the user.
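
A minimal sketch of how such an integration might look, assuming category-specific monitor functions and a numeric alert value that gates what reaches the user; the Alert record, monitor registry, and threshold below are illustrative assumptions, not the EAs' actual interfaces:

    from dataclasses import dataclass
    from typing import Callable, Dict, List

    @dataclass
    class Alert:
        category: AlertCategory          # from the enumeration sketched above
        message: str
        value: float                     # value of the alert to the user (Section 5)
        requires_response: bool = False

    # Each category supplies its own, often domain- or task-specific, monitor.
    MonitorFn = Callable[[dict, dict], List[Alert]]   # (plan, world_state) -> alerts
    MONITORS: Dict[AlertCategory, MonitorFn] = {}

    def monitor_step(plan: dict, world_state: dict, value_threshold: float) -> List[Alert]:
        """Run every registered monitor; keep only alerts valuable enough to present."""
        candidates: List[Alert] = []
        for monitor in MONITORS.values():
            candidates.extend(monitor(plan, world_state))
        # The value of an alert controls whether and how it is presented to the user.
        return [a for a in candidates if a.value >= value_threshold]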

We briefly discuss each of the top-level categories. We have not provided the next lower level of the ontology because the space of possibilities is large and domain-specific concerns become important. For example, adversarial alerts could include subclasses for fixed or mobile adversaries, for the size and capabilities of the adversarial team, for an alliance or tightly coordinated adversarial team, for adversarial intent or plan, and so forth. Later in the paper, we describe how alerts given by our implemented execution assistants (EAs) fit into these categories.

Plan constraints. Plans provide most of the expectations of how execution should proceed, so this category has the richest set of alerts. A fairly large hierarchical ontology could be produced to describe the different types of alerts on plan constraints. Gil and Blythe [16] present a domain-independent ontology for representing plans and plan evaluations; each concept in their evaluation ontology could be a source of an alert when the evaluation becomes sufficiently important to the user. Plans in real-world domains are often hierarchical, so constraints from different levels or layers may be violated, and it may be desirable to customize alerts based on the hierarchical level of the plan constraint in question. Common examples of alerts in this category include violations of temporal and location constraints attached to plan tasks (such as a team member failing to reach a designated location by the required time) and violations of resource constraints.
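
As a hedged illustration of one such check, the sketch below flags a task that has passed its deadline without completing; the task fields, the importance-based value, and the reuse of the Alert type from the earlier sketches are assumptions for illustration:

    from typing import List

    def check_task_deadlines(tasks: List[dict], now: float) -> List[Alert]:
        """Flag tasks whose deadline has passed without a reported completion."""
        alerts = []
        for task in tasks:
            deadline = task.get("deadline")
            if deadline is not None and not task.get("completed", False) and now > deadline:
                alerts.append(Alert(
                    category=AlertCategory.PLAN_CONSTRAINTS,
                    message=f"Task {task['id']} missed its deadline ({deadline}).",
                    value=task.get("importance", 0.5),
                    requires_response=True,
                ))
        return alerts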

Policy constraints. Most real-world domains have persistent constraints, such as policies or rules of engagement, that must not be violated. While these could be folded into the plan by representing them as maintenance conditions that extend over the entire plan, in practice they are significantly different and are often monitored by different techniques, because they may require additional domain knowledge or specialized monitoring algorithms that must be invoked efficiently. For example, in our domains we never want our human team members to be killed or our robots destroyed, so we monitor the physical safety of our agents at all times and alert the user when some agent is in danger. Dangers from adversarial agents are covered in their own category; however, the system should also alert the user to threats from team members (fratricide) and from the local agent's own actions (e.g., a robot's battery running low).
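
One persistent safety policy, the low-battery example just mentioned, might be checked as in the following sketch; the threshold, agent fields, and fixed alert value are assumptions:

    from typing import List

    LOW_BATTERY_FRACTION = 0.15   # assumed threshold; real policies are domain-specific

    def check_agent_safety(agents: List[dict]) -> List[Alert]:
        """Persistent policy: alert whenever a team member endangers itself."""
        alerts = []
        for agent in agents:
            battery = agent.get("battery_fraction")
            if battery is not None and battery < LOW_BATTERY_FRACTION:
                alerts.append(Alert(
                    category=AlertCategory.POLICY_CONSTRAINTS,
                    message=f"{agent['id']}: battery at {battery:.0%}, below the safety policy.",
                    value=0.9,                # safety policies are typically high value
                    requires_response=True,
                ))
        return alerts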

New opportunities. Even though the current plan can still be executed without change, it may be possible to generate a better plan for the current situation as new opportunities arise. Determining whether an execution-time update to the world state permits a more desirable plan is a difficult problem in general, similar in difficulty to generating a new plan for the new situation. However, in real-world domains there are often methods for detecting new opportunities that indicate a plan revision might be cost effective. For example, certain key features (such as ``pop-up targets'' in military domains) can represent new opportunities, and there are often encoded standard operating procedures (SOPs) that can be invoked, when triggered by the current situation, to improve the plan and/or react to events. Because our monitoring is interactive, we can avoid the difficult decision of whether to search for a better plan by alerting the user to high-value opportunities and relying on the user to judge the best response.
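
The SOP-style triggering described above might be sketched as follows, with each procedure pairing a trigger predicate with a suggested response and the user left to judge whether to act; the SOP structure and fixed value are illustrative assumptions:

    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class SOP:
        name: str
        trigger: Callable[[dict], bool]   # predicate over the current world state
        suggestion: str                   # plan improvement or reaction to propose

    def check_opportunities(sops: List[SOP], world_state: dict) -> List[Alert]:
        """Alert the user to triggered SOPs instead of deciding to replan automatically."""
        return [
            Alert(
                category=AlertCategory.NEW_OPPORTUNITIES,
                message=f"SOP '{sop.name}' triggered: {sop.suggestion}",
                value=0.6,
            )
            for sop in sops if sop.trigger(world_state)
        ]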

Adversarial activity. This category assumes that our team members are operating in environments with adversaries that are trying to actively thwart team plans. When adversaries are dangerous (e.g., worthy human opponents), reacting to their detected activity becomes a top priority and, in our experience, merits customized monitoring algorithms. Recognizing immediate threats to a team member's physical existence or to the accomplishment of the plan is obviously important. In addition, information that allows the human to discern patterns or recognize the opponent's plan or intent is valuable. Our EAs recognize physical threats and adversarial activity not expected by the plan, but do not currently perform automated plan or intent recognition on data about adversaries. Both automated plan recognition [22] and inference of adversarial intent [13,4] are active areas of research. If algorithms are developed that reliably recognize adversarial plans or intent while using acceptable computational resources, they could easily be invoked within our monitoring framework.
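
A deliberately simple sketch of the first kind of adversarial alert mentioned above, an immediate physical threat inferred from the proximity of a reported adversary; the distance threshold and position fields are assumptions, and the uncertainty of fused adversarial data is ignored here:

    import math
    from typing import List

    THREAT_RADIUS_M = 500.0   # assumed radius; domain-specific in practice

    def check_adversarial_threats(friendly: List[dict], adversaries: List[dict]) -> List[Alert]:
        """Flag adversary reports close enough to threaten a team member."""
        alerts = []
        for unit in friendly:
            for adv in adversaries:
                dist = math.dist((unit["x"], unit["y"]), (adv["x"], adv["y"]))
                if dist < THREAT_RADIUS_M:
                    alerts.append(Alert(
                        category=AlertCategory.ADVERSARIAL_ACTIVITY,
                        message=f"{unit['id']} threatened by {adv['id']} at {dist:.0f} m.",
                        value=1.0,            # immediate physical threats rank highest
                        requires_response=True,
                    ))
        return alerts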

Projections. Even though the current plan can still be executed without change for the time being, it may be possible to predict, with varying degrees of certainty, that a plan or global constraint will be violated in the future. For example, suppose the plan requires a robot to move to location X by time T, but the robot is falling progressively further behind schedule or drifting further off course. At some point before T, the system can predict with acceptable certainty that this location constraint will be violated and alert the user, who may revise the plan. New opportunities and probable adversarial activity could also be projected. Projection/simulation algorithms can be computationally expensive, so the execution monitor must adjust its calculation of projections to match available resources and constraints.
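
The robot-behind-schedule example admits a very cheap projection by linear extrapolation of progress, sketched below; the task fields are assumptions, and real projection algorithms may be far more elaborate and expensive:

    from typing import List

    def project_arrival(task: dict, now: float) -> List[Alert]:
        """Linearly extrapolate progress toward location X and compare with deadline T."""
        progress = task["progress"]              # fraction of the route completed, 0..1
        elapsed = now - task["start_time"]
        if progress <= 0.0 or elapsed <= 0.0:
            return []                            # nothing to extrapolate yet
        projected_arrival = task["start_time"] + elapsed / progress
        if projected_arrival > task["deadline"]:
            return [Alert(
                category=AlertCategory.PROJECTIONS,
                message=(f"Projected arrival {projected_arrival:.0f} is after "
                         f"deadline {task['deadline']:.0f} for {task['id']}."),
                value=0.7,
            )]
        return []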

Contingency plans. The plan may specify contingency plans or subplans that are to be invoked when specified conditions arise. The execution monitor should monitor these conditions and alert the user when a contingency plan has been triggered. The system can also notify all team members automatically if the user decides to switch execution to a contingency plan. Another desirable alert in some domains might be a suggestion by the system that new contingency plans be generated for certain situations as events unfold in an unexpected manner. Our EAs monitor the triggering of contingencies but do not suggest their generation.
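
Contingency monitoring might be sketched as follows, with each contingency subplan carrying the triggering condition specified in the plan; the Contingency structure is an assumption, and team notification on a plan switch is omitted:

    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Contingency:
        name: str
        condition: Callable[[dict], bool]   # triggering condition given in the plan

    def check_contingencies(contingencies: List[Contingency], world_state: dict) -> List[Alert]:
        """Alert the user when a contingency plan's triggering condition becomes true."""
        return [
            Alert(
                category=AlertCategory.CONTINGENCY_PLANS,
                message=f"Contingency '{c.name}' has been triggered.",
                value=0.8,
                requires_response=True,     # the user decides whether to switch plans
            )
            for c in contingencies if c.condition(world_state)
        ]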

System problems. Depending on the domain, the user may want to be alerted to problems with incoming data streams or with the functioning of the execution assistant itself. For example, knowing that no data is arriving from the sensors, or over the network from other team members, may be crucial to helping the user interpret the situation and the system's alerts.
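
One simple system-problem check, a data stream that has gone silent, might look like the following sketch; the per-stream timestamps and timeout are assumed:

    import time
    from typing import Dict, List

    def check_data_streams(last_report: Dict[str, float], timeout_s: float = 30.0) -> List[Alert]:
        """Alert when no data has arrived from a stream within the expected interval."""
        now = time.time()
        return [
            Alert(
                category=AlertCategory.SYSTEM_PROBLEMS,
                message=f"No data from '{stream}' for {now - stamp:.0f} s.",
                value=0.5,
            )
            for stream, stamp in last_report.items() if now - stamp > timeout_s
        ]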

Reporting requirements. One of our basic assumptions is that the human user has experience and knowledge that are not modeled within the system. Therefore, the system cannot always recognize how a new piece of information will affect plan execution, and some information that does not trigger the above alerts might still be valuable to the user. The system is given reporting requirements that allow it to recognize such information. One generally useful reporting requirement is execution status, so the user can quickly determine that execution is proceeding as planned. Reporting requirements may take any number of forms, as appropriate to the domain; the comments about recognizing new opportunities apply here, in that domains might specify requirements as SOPs, key features, declarative statements, or heuristic algorithms. This category also covers information that reduces uncertainty and/or indicates that the plan is executing as expected. As another example, a robot might be told to immediately report any murder or fire it witnesses while executing its planned tasks.
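
Declaratively specified reporting requirements might be sketched as simple predicates over incoming reports, as below; the event fields and the two rules (drawn from the execution-status and murder/fire examples above) are illustrative assumptions:

    from typing import List

    REPORTING_RULES = [
        # (description, predicate over an incoming event or report)
        ("witnessed fire or murder", lambda e: e.get("type") in {"fire", "murder"}),
        ("execution status update", lambda e: e.get("type") == "status"),
    ]

    def check_reporting_requirements(events: List[dict]) -> List[Alert]:
        """Pass along information the user asked to see even if no other alert fires."""
        alerts = []
        for event in events:
            for description, predicate in REPORTING_RULES:
                if predicate(event):
                    alerts.append(Alert(
                        category=AlertCategory.REPORTING_REQUIREMENTS,
                        message=f"Report ({description}): {event}",
                        value=0.4,
                    ))
        return alerts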

