

3 Monitoring Approach Determined by Domain Features

The domain features and monitoring challenges with which we are concerned are common in many domains beyond robot teams and small unit operations (SUO). For example, they occur in the monitoring of spacecraft [5,27] and in medical monitoring [7] of ICU patients or of anesthesia. These domains are also data-rich -- medical clinicians have ``difficulty in using the vast amount of information that can be presented to them on current monitoring systems'' [43,7]. In particular, the problem of flooding human users with false or redundant alarms is ubiquitous in medical monitoring [25,39]. One study found that 86% of alarms in a pediatric ICU were false alarms [40]. False alarms distract humans from more important tasks, and such a false alarm rate would most likely render the monitor useless in fast-paced operations. Research in these domains has concentrated on automated monitoring, with little or no emphasis on interactive monitoring.

While the challenges described in the previous section apply to all interactive, dynamic domains, the properties of individual domains shape their solutions. One brief case study shows how the features of the communication system and the use of legacy agents can indicate different monitoring approaches for two similar problems. Kaminka et al. [22] address a problem similar to ours: many geographically distributed team members with a coordinating plan in a dynamic environment. Their approach applies plan-recognition techniques to the observable actions of team members, rather than having team members communicate state information, an alternative they refer to as report-based monitoring.

They list four problems with report-based monitoring [22]: (1) intrusive modifications are required to legacy agents to report state, (2) the necessary state information changes with the monitoring task, (3) the monitored agents and the communication lines bear heavy computational and bandwidth burdens, and (4) it assumes completely reliable and secure communication between team members. They identify (1) as their main concern, with (3) next in importance.

In both of our domains, we use report-based monitoring. Our agents already report their state or can easily be modified to do so, for example, by attaching Global Positioning System (GPS) devices. Our monitoring tasks can be performed using the reports already available, although one can imagine adding functionality that would change the reporting requirements. In our first domain, reports are distributed by the Situation Awareness and Information Management (SAIM) system on a high-bandwidth network. SAIM uses novel peer-to-peer (P2P) dissemination algorithms and forward fusion of sensor reports, greatly reducing bandwidth requirements. P2P is fault tolerant, allowing any node to act as a server. Dissemination is based on an agent's current task, geographic location, and relationship in the hierarchical organization of team members.
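
The SAIM dissemination algorithms themselves are outside the scope of this paper; the following Python sketch merely illustrates the kind of relevance test implied by task-, location-, and organization-based dissemination. All names, fields, and thresholds in the sketch are illustrative assumptions, not SAIM's actual interfaces.

# Hypothetical sketch of relevance-based report dissemination, in the spirit
# of the task-, location-, and organization-based filtering described above.
# Every class, field, and threshold here is an illustrative assumption.

from dataclasses import dataclass
from math import hypot

@dataclass
class Agent:
    agent_id: str
    task: str                          # current task, e.g. "route-recon"
    position: tuple[float, float]      # (x, y) in a common grid
    unit_path: tuple[str, ...]         # place in the team hierarchy, root first

@dataclass
class Report:
    source_id: str
    task: str
    position: tuple[float, float]
    unit_path: tuple[str, ...]

def is_relevant(report: Report, subscriber: Agent,
                radius: float = 5000.0) -> bool:
    """Decide whether a report should be forwarded to a subscriber."""
    # 1. Task relevance: same task implies shared plan context.
    if report.task == subscriber.task:
        return True
    # 2. Geographic relevance: nearby events may affect the subscriber.
    if hypot(report.position[0] - subscriber.position[0],
             report.position[1] - subscriber.position[1]) <= radius:
        return True
    # 3. Organizational relevance: reports originating within the
    #    subscriber's own unit (or a sub-unit) are forwarded.
    if report.unit_path[:len(subscriber.unit_path)] == subscriber.unit_path:
        return True
    return False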

In summary, report-based monitoring works in our domains because we rely less on unmodifiable legacy agents, have more reliable communications, and have enough bandwidth available with our network and dissemination algorithms. Kaminka's approach provides more automated support, but we must address the problem of modeling the value of information to the user. If Kaminka's system were extended to interact with humans, we believe our alert ontology and techniques for avoiding operator overload would be applicable, whether alerts come from sources based on plan recognition or from reports. Because humans are ultimately responsible for team behavior in our domains, we do not require as much state information or completely reliable communication. Unreliable communication will degrade monitoring performance, but the human decision maker must take missing inputs into account when making a decision. The execution assistant can monitor communications and alert the human to possible communications problems.
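
As one illustration of this last point, the following sketch shows a simple way an execution assistant might flag a possible communications problem by noticing agents whose reports have become stale. The class, field names, and threshold are assumptions for illustration, not part of our implemented systems.

# Hypothetical sketch of a communications check: if an agent's reports stop
# arriving, alert the human rather than silently presenting stale state.
# Names and the staleness threshold are illustrative assumptions.

import time

class CommsMonitor:
    def __init__(self, stale_after_s: float = 30.0):
        self.stale_after_s = stale_after_s
        self.last_report: dict[str, float] = {}   # agent_id -> last report time

    def record_report(self, agent_id: str, timestamp: float | None = None) -> None:
        self.last_report[agent_id] = timestamp if timestamp is not None else time.time()

    def stale_agents(self, now: float | None = None) -> list[str]:
        """Return agents whose reports are older than the staleness threshold."""
        now = now if now is not None else time.time()
        return [agent_id for agent_id, t in self.last_report.items()
                if now - t > self.stale_after_s]

# Usage: periodically check and raise a low-priority alert so the decision
# maker knows which inputs may be missing or out of date.
monitor = CommsMonitor(stale_after_s=30.0)
monitor.record_report("robot-3")
for agent_id in monitor.stale_agents():
    print(f"ALERT: no report from {agent_id} for more than 30 s; "
          f"displayed state may be out of date")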

Figure 1: Top-level categories in alert ontology.

