

7.2 UV-Robotics: Architecture

The SRI UV robot architecture is based on several years of research at SRI into intelligent reactive control, planning, negotiation, and robot motion control [46,29,47,6,24]. It is similar to systems such as SAFER [19] and SRTA [42] in its ability to pursue multiple goals at once and to evaluate when to discard goals. Figure 9 shows our Multi-Level Agent Adaptation (MLAA) architecture, in which monitoring is pervasive: it serves each layer of the architecture as well as the user (not shown).

Figure 9: Multi-level Agent Adaptation Architecture.

The coordination module receives goal requests from the human commander or from other agents. The agent participates in a negotiation process to determine its role in achieving the goal. During negotiation, the agent consults the strategic planner to create a plan, or plan segment (referred to as a recipe), and to assess the recipe's viability given current commitments. If the negotiation results in the goal and its recipe being accepted, the EA Manager (see Figure 3) instantiates the recipe and initiates its execution. The Plan Initializer also creates monitoring sentinels that the EA uses to detect deviation from the recipe during execution. Executing a recipe activates tasks that must be blended with other active tasks to maximize the satisfaction of multiple goals. For example, if the robot needs to reach a waypoint by a set time, take a picture of a nearby location, and also remain concealed, the task blender modifies the path planner at runtime to serve all three tasks. Finally, the lowest layer in the architecture is the interface between the tasking architecture and the physical, or simulated, robot controller.
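To make this flow concrete, the following sketch shows one way an accepted recipe might be instantiated: its tasks are handed to a task blender that combines them into a single cost for the path planner, and its monitoring sentinels are returned to the execution monitor. This is an illustration only, not the SRI implementation; the names (Recipe, Task, Sentinel, TaskBlender, instantiate_recipe) and the cost-summing model of task blending are assumptions made for the example.

from dataclasses import dataclass, field
from typing import Callable, List, Optional

# Illustrative types only: a "state" is whatever the robot controller reports.
State = dict            # e.g., {"position": (x, y), "time": t, "exposure": e}
Alert = str


@dataclass
class Sentinel:
    """Watches one condition of the recipe and reports a deviation alert."""
    name: str
    violated: Callable[[State], bool]

    def check(self, state: State) -> Optional[Alert]:
        return f"ALERT: {self.name}" if self.violated(state) else None


@dataclass
class Task:
    """One active task; it contributes a cost term the path planner minimizes."""
    name: str
    cost: Callable[[State], float]


@dataclass
class Recipe:
    """A plan segment accepted during negotiation."""
    goal: str
    tasks: List[Task] = field(default_factory=list)
    sentinels: List[Sentinel] = field(default_factory=list)


class TaskBlender:
    """Blends all active tasks into a single objective for the path planner."""
    def __init__(self) -> None:
        self.active: List[Task] = []

    def activate(self, tasks: List[Task]) -> None:
        self.active.extend(tasks)

    def blended_cost(self, state: State) -> float:
        # The planner evaluates candidate paths against the sum of task costs,
        # so activating a new task (e.g., concealment) reshapes the path at runtime.
        return sum(task.cost(state) for task in self.active)


def instantiate_recipe(recipe: Recipe, blender: TaskBlender) -> List[Sentinel]:
    """EA-Manager-style step: activate the recipe's tasks and hand its
    monitoring sentinels to the execution monitor."""
    blender.activate(recipe.tasks)
    return recipe.sentinels

Under this reading, the waypoint, photograph, and concealment goals from the example above would each contribute one cost term, while a sentinel such as "behind schedule" would trigger the deviation alerts discussed in the next section.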

The monitoring in Figure 9 is performed by the UV EA, which was built from the architecture and representations of the SUO EA. The modular design of the SUO EA made this adaptation straightforward: the architecture and internal EA agents depicted in Figure 3 were reused with little modification, as were the plan representation and the techniques for monitoring plans, applying VOI and VOA calculations, and issuing alerts. Our implementation of an initial UV EA (using code from the SUO EA) took about one person-week, an impressive result given the complexity of the task. The implementation included connecting to new data sources, parsing their messages, determining and implementing the most valuable monitoring algorithms, integrating with the plans and missions already defined, and writing domain-specific VOI/VOA algorithms. Some missions require recalculating waypoints at least every second while using only 20% of the CPU, so we had to trade off speed against complexity in both waypoint calculation and monitoring. The initial version of the UV EA detected the first five types of alerts listed in Section 7.4.
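As a rough illustration of that trade-off, the loop below keeps sentinel checks and waypoint recalculation inside a fixed per-cycle time budget, degrading to a cheaper waypoint calculation when the checks have already consumed much of the budget. This is a sketch under assumed names and numbers (CYCLE_PERIOD_S, CPU_BUDGET, replan_full, replan_cheap), not the UV EA's actual scheduling code.

import time

CYCLE_PERIOD_S = 1.0   # recalculate waypoints at least once per second
CPU_BUDGET = 0.20      # aim to use at most ~20% of each cycle's CPU time


def monitoring_cycle(state, sentinels, replan_full, replan_cheap, issue_alert):
    """One execution-monitoring cycle (illustrative sketch only).

    state        -- current vehicle/world state
    sentinels    -- objects with check(state) returning an alert or None
    replan_full  -- expensive, high-quality waypoint recalculation
    replan_cheap -- degraded, cheaper recalculation used under time pressure
    issue_alert  -- callback that forwards an alert to the operator/coordinator
    """
    start = time.monotonic()
    budget = CYCLE_PERIOD_S * CPU_BUDGET

    # Sentinel checks come first: they are cheap and carry the alert value.
    for sentinel in sentinels:
        alert = sentinel.check(state)
        if alert is not None:
            issue_alert(alert)

    # Spend the rest of the budget on waypoint recalculation, falling back to
    # the cheaper planner when the checks have already used much of it.
    if time.monotonic() - start < 0.5 * budget:
        waypoints = replan_full(state)
    else:
        waypoints = replan_cheap(state)

    # Yield the remainder of the cycle so other onboard processes get the CPU.
    time.sleep(max(0.0, CYCLE_PERIOD_S - (time.monotonic() - start)))
    return waypoints

A VOI/VOA-style refinement would additionally gate issue_alert on the estimated value of interrupting the operator, in the spirit of the domain-specific VOI/VOA algorithms mentioned above.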

