In interactive, dynamic, real-world domains like SUO, we cannot model all alternatives, their payoffs, and all the other knowledge and probabilities required with enough precision to compute the ``theoretical'' VOI and VOA. Much of the knowledge about VOI resides only with human experts, and even they may have different preferences or opinions about it. For example, in the SUO domain, the user might be concerned about the public-relations effects of how the plan execution is reported in the international media. It is precisely because humans have knowledge not modeled in the system that we want our execution assistants to be interactive. In such realistic domains, there are generally no obvious boundaries to the types of support the system should provide, and no precisely defined evaluation functions or payoff matrices. Thus, Weinberger's theory and formal techniques for computing the value of information cannot be applied. Horty and Pollack develop some of the foundations for a theory of rational choice that involves estimating the cost of decisions in the context of plans. Their approach comes closer to addressing our concerns. However, determining the costs and utilities of actions will continue to require human judgment in many domains, especially when human lives are at risk.
We therefore developed algorithms that heuristically estimate VOI using domain knowledge, although quantitative VOI functions can easily be used in our framework. The inputs to our algorithms are described in Section 5.3. These domain-specific algorithms are, and must be, easily customized and tuned to user preferences as well as to the situation. They are invoked in domain-independent ways for a variety of purposes by the monitoring framework, and were developed with feedback from domain experts. We believe it is feasible to use machine-learning techniques to replace or supplement hand-coded heuristics for VOI/VOA estimation and/or the user preferences that affect it, but we have not explored this.
VOI and VOA are computed qualitatively in our domains, using several domain-specific quantitative measures in the qualitative reasoning process. Issuing an alert is a discrete event, and there are generally only a few options for presenting one. Estimating VOA is therefore primarily a problem of categorizing the potential alert into a small number of alert presentation types or modalities. We must determine when the VOA crosses thresholds, defined by the VOI/VOA specification, that indicate, for example, that it is valuable to issue an alert or that the alert should be issued as high priority. In our framework, the thresholds are customizable by the user and can be mission specific, so they can change automatically as different missions in the plan are executed. The VOI algorithms also determine what information to include in an alert.
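The threshold-crossing categorization described above can be sketched as follows. This is an illustrative sketch, not the deployed algorithm: the numeric cutoffs, the category names, and the function name are all hypothetical stand-ins for the user-customizable, mission-specific thresholds of the VOI/VOA specification.

```python
def categorize_alert(voa_score, thresholds):
    """Map an estimated VOA score to a qualitative alert category.

    thresholds: (min_score, category) pairs, sorted highest first.
    Returns None when the VOA is too low to justify any alert.
    """
    for min_score, category in thresholds:
        if voa_score >= min_score:
            return category
    return None

# Hypothetical mission-specific cutoffs; in the framework these would be
# user-customizable and could change automatically between missions.
mission_thresholds = [
    (0.9, "flash"),
    (0.7, "immediate"),
    (0.4, "priority"),
    (0.1, "routine"),
]

print(categorize_alert(0.75, mission_thresholds))  # immediate
print(categorize_alert(0.05, mission_thresholds))  # None (no alert issued)
```

Because the output is one of a small number of discrete categories, swapping in a different threshold table is all that is needed to retune the alerting behavior for a new mission or user.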
Different alert presentations are handled by assigning a qualitative priority to each alert. For example, our SUO EA divides alerts by VOA into four equivalence classes for levels of priority, which were already defined in the SUO domain. Each priority is presented differently to the user, from using different modalities to simply using different colors or sizes of text or graphics. Currently, we use three priority levels in the robotics domain, but may add more in the future as collaborating team members make more use of the EA. These priority levels can be used to adjust alerting behavior to the user's cognitive load. For example, during fast-paced operations, only the highest-priority alerts could be presented.
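A minimal sketch of how priority levels could gate presentation under high cognitive load, as suggested above. The alert messages, the dictionary structure, and the numeric priority scale (1 = highest) are invented for illustration.

```python
# Illustrative sketch: suppress lower-priority alerts when the user's
# cognitive load is high. Priority 1 is highest; data is hypothetical.

def alerts_to_present(alerts, max_priority):
    """Keep only alerts whose priority number is at most max_priority."""
    return [a for a in alerts if a["priority"] <= max_priority]

pending = [
    {"msg": "hostile contact near decision point", "priority": 1},
    {"msg": "fuel below planned level", "priority": 2},
    {"msg": "routine position update", "priority": 3},
]

# During fast-paced operations, present only the highest-priority alerts.
for alert in alerts_to_present(pending, max_priority=1):
    print(alert["msg"])  # hostile contact near decision point
```

The same filter with a larger `max_priority` would restore the fuller alert stream during slower-paced phases of execution.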
There are several reasons for preferring qualitative reasoning, and we draw on Forbus's work in describing the advantages [8,12]. Qualitative models fit naturally with making decisions, which are discrete events, and effectively divide continuous properties at their important transitions. Thus, changes in qualitative value generally indicate important changes in the underlying situation. Qualitative models also facilitate communication because they are built on the reasoning of human experts and are thus similar to people's understanding. For example, the priority levels used in our VOA algorithms have long been named and defined in the military. Qualitative reasoning also provides a framework for integrating the results of various qualitative computations in a way humans can understand. Finally, the precision of quantitative models can be a serious weakness if the underlying models do not accurately reflect the real-world situation. Precise data fed into a low-accuracy model can yield precise but incorrect results, and those results can create a false sense of security.
These advantages of qualitative reasoning are apparent in both common sense and military reasoning. Common sense reasoning about continuous quantities is often done qualitatively. The continuous value is of interest only when a different action or decision is required. For example, you can ignore your fuel gauge when driving once you have decided whether or not you must refuel before reaching your destination. In addition to the priorities already mentioned, the military quantizes many continuous properties used to describe terrain in ways that are relevant to military operations, creating phase lines, decision points, named areas of interest, key terrain, avenues of approach, and so forth. The SUO EA incorporates these quantizations to reason about terrain's influence on VOI and VOA and to effectively communicate information in alerts, just as the military has used them for years to facilitate communication, collaboration, and decision making.
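The fuel-gauge example above can be made concrete as a quantization of a continuous quantity at its single decision-relevant transition. The function name, the numbers, and the reserve margin are invented for illustration; only the transition structure reflects the point being made.

```python
def fuel_status(fuel_liters, liters_needed, reserve=5.0):
    """Quantize a continuous fuel level at its only decision-relevant
    transition: can we reach the destination (with a safety reserve)?"""
    if fuel_liters >= liters_needed + reserve:
        return "sufficient"   # the gauge can be ignored
    return "must-refuel"      # a different action is required

print(fuel_status(40.0, 30.0))  # sufficient
print(fuel_status(20.0, 30.0))  # must-refuel
```

Between transitions, the exact continuous value carries no decision-relevant information; only the qualitative category matters, which is why such quantizations also make effective vocabulary for alerts.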