
6.1 SUO Problem Description

Small unit operations in the military involve hundreds of mobile, geographically distributed soldiers and vehicles cooperatively executing fast-paced actions against an unpredictable adversary. Computational support is bandwidth-restricted and must run on lightweight, portable devices. Currently, all planning decisions are made by humans, and the plans are not machine-understandable.

We implemented the SUO EA as part of a larger system: the Situation Awareness and Information Management (SAIM) system, which distributes timely, consistent situation data to all friendly agents. SAIM uses new technologies to demonstrate a new concept of automated support (described below) in the SUO domain. We assume many small teams of agents (humans, vehicles, and eventually robots), dispersed throughout a large area and operating in concert to achieve goals. We assume that each agent has equipment providing robust geolocation (GPS), computing, and communication capabilities. SAIM also assumes an unpredictable adversary, fast-paced action, and a rich population of sensors controlled by cooperating team members.

The key innovations of SAIM, in addition to the EA, are a self-organizing peer-to-peer information architecture and forward fusion and tracking. Information fusion and tracking are distributed and performed close to the source to minimize latency, bandwidth requirements, and ambiguity. Adjudication maintains the consistency of the distributed databases. The information architecture supports ad hoc information dissemination based on multicast groups centered on mission, geography, or command. Self-elected servers provide the same robustness for information dissemination that the peer-to-peer network brings to the transport layer.

SAIM provides large volumes of geolocation data -- too much information for a human controller to monitor, particularly in high-stress situations. The EA alleviates this problem by using a machine-understandable plan to filter the information from SAIM and alert the user when events threaten the user or the execution of the plan. A plan-aware, situation-aware, action-specific EA can alert appropriately for the situation, thus improving decision making by enabling hands-free operations, reducing the need for human monitoring, increasing the amount of relevant information monitored, and prompting the user when action is required.
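
To make the filtering idea concrete, the following Python sketch shows one way a plan-aware filter could check geolocation updates against a machine-understandable plan and alert only on events that threaten execution. The data structures, names, and thresholds here are hypothetical illustrations, not the SAIM or EA implementation.

import math
from dataclasses import dataclass

@dataclass
class PlannedTask:
    # One task from the machine-understandable plan (hypothetical structure).
    unit_id: str
    objective: tuple      # (x, y) location the unit should reach, in meters
    deadline: float       # planned completion time, in minutes
    tolerance_m: float    # how far from the objective still counts as on track

def distance(a, b):
    # Straight-line distance between two (x, y) points.
    return math.hypot(a[0] - b[0], a[1] - b[1])

def filter_updates(updates, plan, alert):
    # Pass each geolocation update through the plan; call alert() only when
    # an update suggests a threat to plan execution, not on every report.
    for unit_id, position, time in updates:
        task = plan.get(unit_id)
        if task is None:
            continue                     # no planned activity for this unit
        off_track = distance(position, task.objective) > task.tolerance_m
        late = time > task.deadline
        if off_track and late:
            alert(f"{unit_id} is late and out of position for its task")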

The complexities of plans, the number of agents, and the volume of data pose a challenge to existing execution-monitoring techniques. In contrast with much AI planning work, particularly in robotics, most actions in our domain are performed by external agents, mostly humans, and the monitor has no direct access to the internal state of the executing agents; status information must be obtained from external inputs.

We focus on the problem of alerting human users when the situation requires attention; we assume that the human will modify the plan as needed. We made this choice for several reasons. First, users are unwilling to cede decision making to a machine, so we first build trust by giving useful alerts, a capability that is well suited to automation if the plan can be represented with enough fidelity and that provides obvious value in coping with the information glut. Second, mistakes can be a matter of life and death, so systems must be verifiably robust before they are given decision-making power; human decision makers must take imperfect information into account, including reports from sensor networks, other humans, and execution assistants. Third, demonstrating the utility of automated, plan-based monitoring in this large and complex domain is likely to facilitate users' future acceptance of plan-related automation.

Figure 2: Echelons in the command hierarchy with EAs.
[Table not recoverable from this extraction; it lists the echelons for which EAs were demonstrated and their approximate sizes, e.g., platoon (PLT), about 30.]

Execution monitoring requires coordination over multiple echelons (levels in the command hierarchy), so that users know what their subordinates are doing. Figure 2 shows the echelons for which we have demonstrated the EA. Multiple agents at each echelon must coordinate fast-paced activities over a wide area in real time. Our task requires solving three difficult problems: handling the large volume of incoming information, developing a plan representation rich enough to capture tactical Army plans, and determining when to alert the user.

As mentioned before, the EA must give only high-value alerts to be useful. For example, once a unit is out of position or late, the system must recognize both the import of this condition and when the situation has changed sufficiently to issue another alert, without issuing too many alerts. Consider the seemingly simple example of a plan specifying that a squad of 10 agents should move to Objective Golf at 0700. What is the location of the squad? An obvious solution is to compute the centroid of each member's location. However, no one is near the centroid if all members are in a large semicircle with the centroid at the center (this situation arises when the squad follows a road around a sweeping curve). If one member is now immobile with his GPS still broadcasting, the centroid may be seriously inaccurate. Does the centroid need to be near Golf, or is one member near Golf sufficient, or must all members be near Golf? It depends on the mission (task) and situation. If the mission is to observe a valley, one member is sufficient, but we might want all members for an attack. Our solution is to use mission-specific algorithms (specified in the mission model described in Section 6.5) for reasoning about the location of units.
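
As a minimal illustration of mission-specific location reasoning (the actual algorithms live in the mission model of Section 6.5), a unit-at-objective test might vary with the mission as in the following Python sketch; the mission names, radius, and decision rules are assumptions made for illustration.

import math

def distance(a, b):
    # Straight-line distance between two (x, y) points, in meters.
    return math.hypot(a[0] - b[0], a[1] - b[1])

def unit_at_objective(member_positions, objective, mission, radius_m=100.0):
    # Decide whether a unit has "reached" an objective, by mission type.
    # member_positions is a list of (x, y) points, one per squad member.
    dists = [distance(p, objective) for p in member_positions]
    if mission == "observe":
        return min(dists) <= radius_m    # one member near the objective suffices
    if mission == "attack":
        return max(dists) <= radius_m    # every member must be near the objective
    # Fallback: centroid test, which can mislead when members straddle a curve
    # or when one immobile member skews the average.
    cx = sum(p[0] for p in member_positions) / len(member_positions)
    cy = sum(p[1] for p in member_positions) / len(member_positions)
    return distance((cx, cy), objective) <= radius_m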

The EA must avoid cascading alerts as events drift progressively further from expectations along any of several dimensions (such as time, space, and resource availability). In the above example, how close in time to 0700 should the squad be before there is a problem with achieving the plan's objectives? Similarly, how close in distance to Golf? Again, the time and distance thresholds that indicate a problem depend on the mission and situation. A human uses background world knowledge to quickly determine whether a delay affects the plan, but an execution aid must have a great deal of knowledge encoded to do the same. These problems are exacerbated as plans and missions become more complex. Detecting friendly-fire (fratricide) risks poses even more difficult issues, because there are typically many friendly units in close proximity.
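
One standard way to damp cascading alerts is to report only when a deviation crosses a new, escalating threshold. The sketch below illustrates the idea for delays; the constants are illustrative, whereas in the EA such thresholds would come from the mission model and situation.

class DelayAlerter:
    # Report a growing delay only when it crosses a new threshold, so a unit
    # that is 6, 7, then 8 minutes late does not generate three alerts.
    def __init__(self, thresholds_min=(5, 15, 30)):
        self.thresholds = thresholds_min   # escalating delay levels, in minutes
        self.highest_reported = -1         # index of the last threshold alerted on

    def check(self, delay_min):
        # Return an alert message only when the delay reaches a new threshold.
        level = sum(delay_min >= t for t in self.thresholds) - 1
        if level > self.highest_reported:
            self.highest_reported = level
            return f"unit is now more than {self.thresholds[level]} minutes late"
        return None                        # not a meaningful change: stay quiet

For instance, successive calls with delays of 6, 8, and 16 minutes would produce alerts only at 6 and 16 minutes.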

