There has been great interest in plan generation algorithms, but less work on using plans to dynamically control execution. Much execution monitoring work describes monitors in specific domains, so we first characterize the domain-independent challenges of monitoring agent teams.
There are several ``universal'' challenges of execution monitoring that are not particular to dynamic, data-rich domains or interactive monitoring. These issues should be part of a monitoring ontology and are addressed in our EAs, but we do not stress them here because they are discussed elsewhere [22,21,27,30,48,7,10]. The issues include the following:
We are concerned with execution monitoring of agent teams, where team members may be any combination of humans and machines. We concentrate on the challenges that are unique to interactive execution aids in dynamic domains, and group these challenges into the following four categories.
Adaptivity. The output of an execution assistant must meet human requirements and preferences for monitoring behavior, providing high-value alerts and suggestions. As in all execution monitoring, sensitivity is crucial, but in interactive monitoring the sensitivity of the monitor must also be adaptable. In addition to adapting to user preferences, the analysis done by an execution assistant and its level of autonomy must be adjustable to operational tempo and incoming data rate. The system should ideally adapt its output to the user's capabilities and cognitive load.
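To make the adaptivity requirement concrete, the sketch below shows one way an alert threshold might adapt to both a user-supplied sensitivity preference and the incoming data rate. The class, its fields, and the specific scaling rule are all hypothetical illustrations, not part of our EAs.

```python
# Hypothetical sketch: an alert threshold that rises as the incoming
# data rate grows, scaled by a user sensitivity preference.

class AdaptiveAlerter:
    def __init__(self, base_threshold=0.5, sensitivity=1.0):
        self.base_threshold = base_threshold  # minimum alert value in [0, 1]
        self.sensitivity = sensitivity        # user preference: >1 means more alerts
        self.recent_inputs = 0                # inputs seen in the current time window

    def note_input(self):
        self.recent_inputs += 1

    def end_window(self):
        """Raise the bar when the data rate is high, then reset the window."""
        load_factor = 1.0 + min(self.recent_inputs / 100.0, 1.0)  # cap at 2x
        self.threshold = min(self.base_threshold * load_factor / self.sensitivity, 1.0)
        self.recent_inputs = 0
        return self.threshold

    def should_alert(self, value):
        # Before the first window closes, fall back to the base threshold.
        return value >= getattr(self, "threshold", self.base_threshold)
```

Under this rule, a burst of inputs raises the threshold so that only the highest-value alerts reach a busy user, while a more sensitive user setting lowers it again.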
Plan and situation-specific monitoring. Coordinating the activities of many team members requires a plan shared by the team. We will assume that plans contain partial orders of tasks for each team member, as well as any necessary coordinating instructions and commitments. The plan representation also encodes some of the expected outcomes (effects) of plan execution, so that execution aids can detect deviations. The analysis done by an execution assistant and any suggested responses must depend on the plan and situation to be effective, because events often cause a problem for some plans but not for others. We found that monitoring algorithms must often be tailored to the specific tasks that compose plans. To facilitate interaction, the plan representations must be understandable by both humans and the system, although the human might be aided by multiple plan views of the internal representation in a user-friendly interface.
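The plan elements assumed above, partially ordered tasks assigned to team members plus expected effects against which deviations are detected, can be sketched as a simple data structure. All names here are hypothetical illustrations rather than our actual representation.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    agent: str                    # team member responsible for the task
    expected_effects: dict = field(default_factory=dict)  # e.g., {"location": "B"}

@dataclass
class TeamPlan:
    tasks: dict = field(default_factory=dict)   # task name -> Task
    ordering: set = field(default_factory=set)  # (before, after) pairs: a partial order

    def add_task(self, task, after=()):
        self.tasks[task.name] = task
        for prior in after:
            self.ordering.add((prior, task.name))

    def deviations(self, task_name, observed):
        """Compare an observed state against a task's expected effects;
        return {key: (expected, observed)} for each mismatch."""
        expected = self.tasks[task_name].expected_effects
        return {k: (v, observed.get(k)) for k, v in expected.items()
                if observed.get(k) != v}
```

A monitor built on such a structure can report exactly which expected effect failed, which is what makes plan-specific (rather than generic) alerting possible.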
Reactivity. Any execution monitor must react to events and uncertainty introduced by the environment. In dynamic, data-rich domains, particular care must be taken to ensure that the system remains reactive with high rates of incoming information and fast decision cycles. Resources are not generally available to perform all desired analyses for every input -- for example, projecting future problems with multiple simulation runs or searching for better plans may be computationally expensive. There are often no obvious boundaries to the types of support an execution aid might provide in a real-world domain. Therefore, a balance must be struck between the capabilities provided and resources used. A few examples show the types of issues that arise in practice. In our first domain, only coarse terrain reasoning was used, as projections using fine-grained terrain data were computationally expensive. In our robot domain, we had to adjust the time quanta assigned to processes by the scheduler so that our monitoring processes were executed at least every second. Finally, in domains with dangerous or intelligent adversaries, reacting to their detected activity becomes a high priority. There has been considerable research on guaranteeing real-time response [2,26], but the tradeoffs are generally different in every application and are usually a critical aspect of the design of an execution assistant.
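One common pattern for striking the balance described above is to gate expensive analyses on current load: run cheap checks on every input, but skip costly projection (e.g., simulation runs) when a backlog of unprocessed inputs builds up. The sketch below is a hypothetical illustration of that pattern; the class, limits, and stand-in analyses are assumptions, not our implementation.

```python
import collections

class ReactiveMonitor:
    """Run a cheap check on every event; run an expensive projection
    only when the input backlog is small. Names and limits are illustrative."""

    def __init__(self, backlog_limit=10):
        self.queue = collections.deque()
        self.backlog_limit = backlog_limit
        self.projections_run = 0
        self.projections_skipped = 0

    def receive(self, event):
        self.queue.append(event)

    def step(self):
        if not self.queue:
            return None
        event = self.queue.popleft()
        result = {"event": event, "quick_check": self.quick_check(event)}
        if len(self.queue) < self.backlog_limit:
            result["projection"] = self.project(event)   # expensive analysis
            self.projections_run += 1
        else:
            self.projections_skipped += 1                # stay reactive under load
        return result

    def quick_check(self, event):
        return event.get("severity", 0) > 0.8            # cheap threshold test

    def project(self, event):
        return f"projected outcome for {event.get('id')}"  # stand-in for simulation
```

Under a burst of inputs, the monitor degrades gracefully: every event still gets the cheap check, but projection is deferred until the queue drains.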
High-value, user-appropriate alerts. Alerting on every occurrence of a monitored condition that is possibly a problem is relatively easy; however, the user would quickly ignore any assistant that gave so many alerts. The challenge is to avoid false alarms and to avoid inundating the user with unwanted or redundant alerts. The system must estimate the utility of information and alerts to the user, give only high-value alerts, and present the alerts in a manner appropriate to their value and the user's cognitive state. We found that a common challenge is to avoid cascading alerts as events get progressively further away from expectations along any of a number of dimensions (such as time, space, and resource availability). Another challenge that we will not discuss in depth is aggregating lower-level data (e.g., sensor fusion), which can reduce the number of alerts by consolidating inputs. Estimates of the value of alerts can be used to adjust alerting behavior to the user's cognitive load.
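One way to avoid the cascading alerts mentioned above is to alert only when a deviation crosses into a new severity band, rather than on every update as the deviation drifts. The sketch below is a hypothetical illustration of that idea; the band boundaries and class are assumptions for exposition.

```python
class BandedAlerter:
    """Alert only when a deviation enters a higher severity band,
    suppressing repeated alerts within the same band."""

    def __init__(self, bands=(5, 15, 30)):   # e.g., minutes behind schedule
        self.bands = bands
        self.last_band = 0                   # band 0 = within expectations

    def band_of(self, deviation):
        band = 0
        for limit in self.bands:
            if deviation >= limit:
                band += 1
        return band

    def update(self, deviation):
        band = self.band_of(deviation)
        if band > self.last_band:
            self.last_band = band
            return f"alert: deviation entered band {band}"
        # Track recessions too, so a deviation that recovers and then
        # worsens again produces a fresh alert.
        self.last_band = band
        return None
```

A task drifting from 6 to 8 to 14 minutes late thus produces one alert, not three, and a second alert fires only when it slips past the next band boundary.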
Interactive alerting during execution naturally leads to the equally important and challenging topic of human direction of responses and plan modifications. Our monitoring technologies have been used in continuous planning frameworks [48,30], but we will limit the scope of this paper to interactive alerting. We briefly mention some ongoing research on this topic that we either are using or plan to use in conjunction with our execution aids.
Agent systems that interact with humans are an active area of research, and the issues are discussed in the literature [31,11,37]. Myers and Morley, for example, describe the Taskable Reactive Agent Communities (TRAC) framework that supports human supervisors in directing agent teams. They address topics such as adjustable agent autonomy, permission requirements, consultation requirements, and the ability to communicate strategy preferences as guidance. TRAC is complementary to the execution monitoring described in this paper.
Another active research area that fits naturally with our execution monitoring approach is theories of collaboration. In fact, we use the SharedPlans theory of collaboration in our second domain to direct agents in conjunction with the execution monitor. This theory models the elements of working together in a team as well as the levels of partial information associated with states of an evolving shared plan. Central to the theory of SharedPlans is the notion that agents should be committed to providing helpful support to team members. Within the theory, this notion of helpful behavior has been formally defined. The work on collaboration complements our monitoring approach, but will not be discussed in detail.