The UV EA is being evaluated within an SRI experimental framework called the SRI Augmented Reality Simulator (SARS). The framework allows our autonomous agent architecture and software to be tested in an entirely simulated environment, on a team of physical robots, or a mixture of the two. The physical robots are three Pioneer robots equipped with GPS, as shown in Figure 10. Initial experiments were carried out in a simulated environment. We then ran the system in an entirely physical world with a team of two cooperating robots searching for and pursuing two independent evader robots. We have also run the system in environments composed of a combination of physical robots and simulated entities to illustrate scalability and operation with UAVs. The monitoring technology was effective in ensuring robust execution in all environments, and in giving human operators insight into the state and activity of each robot. This insight facilitated debugging and eased the transition from the simulated world to the physical robots, as problems were quickly identified.
SARS is specifically designed to simulate robots and UAVs. It produces the same output as the physical platforms in terms of sensors, actuators, and resources (battery status, communication range, and so forth). SARS computation and simulation is based on a precise 3D model of the environment. SARS is precise enough that we can mix physical robots moving in the real world with virtual evaders and see the physical robots following a virtual evader -- thus, the name augmented reality. Using SARS, we are able to simulate a team of UGVs moving and/or UAVs flying in a larger space than we have available. The team of UAVs may also be larger than our set of available physical UAVs.
The initial UV EA implementation was evaluated with respect to the usefulness of its output, the value of its alerts, and real-time performance with realistic data streams. Our analysis shows that alerts are generated for all important situations, both during simulated executions and during tests with actual robots, which are never exactly reproducible.
No-status alerts have proven useful to the human user, as they indicate a hardware or software problem on a robot or the network. With the UV EA, such problems are recognized as soon as a customizable interval of noncommunication (currently defaulting to 5 seconds) has passed; without these alerts, they take considerably longer to detect.
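The no-status check described above can be sketched as a simple timeout monitor. This is an illustrative sketch only; the class and threshold names are assumptions, not the UV EA's actual interface.

```python
NO_STATUS_THRESHOLD = 5.0  # default seconds of silence before alerting

class NoStatusMonitor:
    """Hypothetical sketch: raise a no-status alert for any robot that
    has not reported its state within a customizable interval."""

    def __init__(self, threshold=NO_STATUS_THRESHOLD):
        self.threshold = threshold
        self.last_report = {}  # robot id -> timestamp of last state report

    def record_report(self, robot_id, timestamp):
        self.last_report[robot_id] = timestamp

    def check(self, now):
        """Return ids of robots silent for longer than the threshold."""
        return [rid for rid, t in self.last_report.items()
                if now - t > self.threshold]
```

Making the threshold a constructor argument mirrors the customizable interval mentioned above: a user monitoring a flaky network link could lengthen it to suppress spurious alerts.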
At-goal, Stuck, and Divergent are essential alerts for the autonomous-control agent-navigation system, as well as being useful to a human user who wants to monitor the activity of a single robot. Knowing when the robot has reached a goal point, when it has stopped and is not making progress toward a goal point, and when it is diverging from the planned route is essential to robust autonomous operation. Customizable intervals also control these alerts. Subtleties of the domain must be considered to avoid false alarms. For example, the robot may be paused because of GPS uncertainty, and the GPS should be given time to establish a connection with satellites. Also, a robot takes time to turn and thus should not be regarded as stuck or divergent until turns and steering adjustments have had time to complete.
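The false-alarm subtleties above suggest gating the Stuck check on domain state. The sketch below is an assumption about how such gating might look; the interval values, function name, and flags are illustrative, not taken from the UV EA.

```python
STUCK_INTERVAL = 10.0  # seconds without progress before a Stuck alert
TURN_GRACE = 4.0       # extra allowance while a turn is in progress

def is_stuck(progress_age, turning, gps_uncertain):
    """Hypothetical Stuck check.

    progress_age   -- seconds since the robot last made progress
                      toward its current goal point
    turning        -- True while a turn or steering adjustment is active
    gps_uncertain  -- True while GPS has not yet established a fix
    """
    if gps_uncertain:
        # The robot may be deliberately paused; give the GPS time
        # to establish a connection with satellites.
        return False
    limit = STUCK_INTERVAL + (TURN_GRACE if turning else 0.0)
    return progress_age > limit
```

The same pattern (a base interval plus state-dependent grace periods) would apply to the Divergent alert, since a turning robot briefly points away from its planned route.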
Target-visible, Target-lost, and Handoff are useful to both the user and the autonomous controller, particularly when the task is to monitor or pursue a target. The autonomous controller requires immediate awareness of loss of sensor contact, so it can adjust its lower-level behavior or sensor parameters to find the evader. However, such immediate alerts would be unproductive for the human user or the plan-level controller. A customizable interval gives the agent time to relocate the evader, possibly avoiding an alert to the human. These types of alerts are the most time critical in our evaluation domain.
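The two-level response to losing sensor contact can be sketched as a small escalation policy: the low-level controller reacts immediately, while the human alert is withheld until a relocation interval expires. The names and interval value below are assumptions for illustration only.

```python
RELOCATE_INTERVAL = 3.0  # seconds the agent gets to reacquire the evader

def target_lost_actions(seconds_since_contact):
    """Hypothetical escalation policy for loss of sensor contact.

    Returns the list of levels that should react, in order: the
    autonomous controller reacts at once; the human (and plan-level
    controller) are alerted only after the relocation interval, so
    the agent has a chance to find the evader again without an alert.
    """
    actions = []
    if seconds_since_contact > 0.0:
        actions.append("controller: adjust behavior/sensor parameters")
    if seconds_since_contact > RELOCATE_INTERVAL:
        actions.append("alert human: Target-lost")
    return actions
```

Tuning `RELOCATE_INTERVAL` trades alert noise against responsiveness: a shorter interval informs the human sooner but risks alerts for evaders that are reacquired almost immediately.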
Good tracking of an evader requires recalculating waypoints and orientation at least every second. The UV EA was able to keep up with data inputs, detect occurrences of the types of alerts mentioned within 1 second, and recalculate waypoint and orientation twice per second. These constraints were not difficult to meet on our desktop machines, but the success of the UV EA on the slower processors of the physical robots involved trading off speed against the complexity of waypoint calculation and monitoring. One useful technique is to use only the latest state report for an agent when more than one state report has accumulated during a single cycle of the monitoring loop. The relative CPU access of the various agents and processes also became important. For example, we had to adjust the time quantum given by the scheduler to our EA processes to ensure that both the process receiving messages and the various PRS agent processes in our EA were executed frequently enough for waypoint recalculation. This problem has been alleviated with more recent upgrades in the onboard computer, but could recur if more computationally expensive projections or alerts are added to the EA.
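The latest-report technique amounts to coalescing the backlog of state reports so the monitoring loop's cost per cycle stays bounded regardless of how many reports arrived. A minimal sketch, with assumed data shapes (agent id, report pairs in arrival order):

```python
def latest_reports(pending):
    """Coalesce pending state reports: keep one report per agent,
    the most recently arrived, and discard the rest.

    pending -- iterable of (agent_id, report) pairs in arrival order
    """
    latest = {}
    for agent_id, report in pending:
        latest[agent_id] = report  # later arrivals overwrite earlier ones
    return latest
```

With this, each monitoring cycle does work proportional to the number of agents rather than the number of queued reports, which is what makes the loop viable on the slower onboard processors.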