As described above, the VOI and VOA algorithms will generally be heuristic, domain-specific, and user customizable. Here we identify inputs that apply to most interactive, dynamic domains. We started with a superset of the VOI criteria we found useful in our two domains and then generalized them to be domain independent. (The properties of the user listed below are estimates from system models of the user, as the user's mental state is not accessible.)
The plan supplies several VOI criteria: it may specify explicit and implicit decision points, high-value places, times, team members, and so forth. The value of a task, constraint, adversarial action, or team member is often determined by the plan structure and plan annotations. The tasks in the plan can invoke task-specific VOI algorithms within our monitoring framework, as described in Section 6. Domain policies (or specialized reasoners that implement them) and reporting requirements should provide the knowledge necessary to determine the value of alerts about various types of constraint violations and reports. For example, in our domains, we monitor the physical safety of our agents. Alerts on life-threatening situations have the highest priority.
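As a minimal sketch of how domain policies might be encoded, consider a table mapping violation and report types to base alert values, with life-threatening situations highest. The type names and numeric values are illustrative assumptions, not taken from the SUO EA; a real system would derive them from plan annotations and reporting requirements.

```python
# Hypothetical policy table: violation/report type -> base alert value.
# The categories and numbers are illustrative, not from the paper's domains.
ALERT_VALUE = {
    "life_threatening": 1.0,   # highest priority: physical safety of agents
    "plan_constraint": 0.6,    # violation of a constraint in the plan
    "routine_report": 0.2,     # low-priority status report
}

def policy_voi(violation_type: str) -> float:
    """Return the base alert value assigned by domain policy."""
    return ALERT_VALUE.get(violation_type, 0.1)  # small default for unknown types
```

Task-specific VOI algorithms invoked from the plan could then refine this base value with situation-dependent factors.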
We noted that VOI tends to zero to the extent the user is already aware of the information. Thus, determining VOI requires access to the current view of the situation, to decide whether arriving reports offer new information or simply confirm the existing view. In data-rich domains, we assume that the execution aid may have a more detailed description of the situation than the user (for the aspects of the situation that are described by incoming data), because the user may be performing other tasks and monitoring the situation only when he is alerted by the EA. Therefore, the value of alerting the user depends on how much the new information differs from the user's last situation update, even if the system holds more recent data that differs only slightly from the new information.
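The key point above can be sketched as a novelty estimate computed against the user's last situation update rather than the system's latest data. The flat-dictionary representation of a situation view is a simplifying assumption for illustration.

```python
def novelty_voi(new_report: dict, user_view: dict) -> float:
    """Estimate VOI as the fraction of attributes in a report that differ
    from the user's last situation update. Note the comparison is against
    the USER's view, not the system's (possibly more recent) data."""
    if not new_report:
        return 0.0
    changed = sum(1 for key, val in new_report.items()
                  if user_view.get(key) != val)
    return changed / len(new_report)
```

A report that merely confirms what the user already saw scores zero, even if the EA's internal picture has since drifted slightly.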
Ideally, we would like to model the user's cognitive load, and give lower values to noncritical alerts when the user is consumed with addressing more critical aspects of the situation. Similarly, we do not want to overload the system's computational resources or ability to remain reactive, so the value of certain information may depend on the time or resources available to analyze it.
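One simple way to realize this, sketched here with an assumed linear discount (the paper does not commit to a particular load model), is to scale down noncritical alert values by the user's estimated cognitive load while leaving critical alerts untouched.

```python
def load_adjusted_value(base_voi: float, user_load: float,
                        critical: bool) -> float:
    """Discount noncritical alerts when the estimated cognitive load is
    high (user_load in [0, 1]); critical alerts keep full value.
    The linear discount is an illustrative assumption."""
    if critical:
        return base_voi
    return base_voi * (1.0 - user_load)
```

An analogous discount could gate expensive VOI computations themselves when the system's own resources are scarce.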
When determining the value of information about adversaries, it is often useful to compare developing patterns to any information about the adversary's plans or tendencies, which could be obtained from human intelligence analysts or generated by plan-recognition or pattern-matching algorithms. As mentioned above, information that reduces uncertainty is valuable in domains with high uncertainty and active adversaries. VOI can be estimated if we have a characterization of the uncertainty present in our current view of the situation.
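If the uncertainty in the current situation view can be characterized as a probability distribution, for example over hypothesized adversary courses of action, one hedged way to estimate VOI is as the reduction in entropy a report would produce. The entropy formulation is an illustrative choice, not the paper's stated method.

```python
import math

def entropy(dist) -> float:
    """Shannon entropy (in bits) of a probability distribution."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

def uncertainty_voi(prior, posterior) -> float:
    """Estimate VOI as the drop in uncertainty about the adversary's
    course of action after incorporating a report."""
    return max(0.0, entropy(prior) - entropy(posterior))
```

A report that collapses an even two-way split over adversary plans to certainty yields one bit of uncertainty reduction.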
The age of information is also a factor in VOI -- outdated reports may have zero value if newer information has already arrived. When modeling the user's awareness, elapsed time is a factor. The user will be aware of alerts issued in the last few minutes, but may no longer be aware of something that was brought to her attention yesterday or last week. Thus, the value of a proposed new alert may increase with elapsed time since a similar alert was issued.
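Both timing effects above can be sketched directly: a superseded report gets zero value, and the value of re-issuing a similar alert grows as the user's presumed awareness decays. The exponential forgetting curve and 24-hour half-life are illustrative assumptions, not a cognitive model from the paper.

```python
def report_value(report_time: float, latest_time: float,
                 base_voi: float) -> float:
    """An outdated report superseded by newer information has zero value."""
    return 0.0 if report_time < latest_time else base_voi

def realert_value(base_voi: float, hours_since_last_alert: float,
                  half_life_hours: float = 24.0) -> float:
    """Value of a repeated alert rises as presumed user awareness decays.
    Exponential decay with a 24-hour half-life is an assumed model."""
    awareness = 0.5 ** (hours_since_last_alert / half_life_hours)
    return base_voi * (1.0 - awareness)
```

Immediately after an alert the re-alert value is zero; after one half-life it recovers half its base value.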
When information arrives from a variety of sources, the source itself is a factor in VOI. Different information sources often have inherently different levels of certainty, authority, or importance. For example, the SUO EA accepts reports from both human observers and automated sensors. An EA with such inputs might want to weight human observations differently depending on the human and the situation. In later sections on our implemented EAs, we describe our domain-specific VOI/VOA algorithms, whose inputs correspond to those listed above.
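A minimal sketch of source weighting, assuming a static table of per-source-type credibility factors (the categories and weights are hypothetical; a deployed EA might further condition them on the individual observer and the situation):

```python
# Hypothetical credibility/importance weights per source type.
SOURCE_WEIGHT = {
    "human_observer": 0.9,
    "automated_sensor": 0.7,
}

def source_weighted_voi(base_voi: float, source: str) -> float:
    """Scale a report's VOI by the weight of its source; unknown
    sources get a conservative default."""
    return base_voi * SOURCE_WEIGHT.get(source, 0.5)
```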