Before exploring the general multiagent scenario involving heterogeneous non-communicating agents, consider how this scenario can be instantiated in the pursuit domain. As in the previous scenario, the predators are controlled by separate agents. But they are no longer necessarily identical agents: their goals, actions and domain knowledge may differ. In addition, the prey, which inherently has goals different from those of the predators, can now be modeled as an agent. The pursuit domain with heterogeneous agents is shown in Figure 8.
Figure 8: The pursuit domain with heterogeneous agents. Goals and actions may differ among agents. Now the prey may also be modeled as an agent.
Haynes and colleagues have conducted several studies of heterogeneous agents in the pursuit domain. They have evolved teams of predators, equipped predators with case bases, and competitively co-evolved the predators and the prey.
First, Haynes et al. use genetic programming (GP) to evolve teams of four predators. Rather than evolving predator agents in a single evolutionary pool and then combining them into teams to test performance, each individual in the population is itself a complete team of four agents, each assigned to a particular predator. Thus the predators can evolve to cooperate. This co-evolution of teammates is one way around the absence of communication in a domain: instead of communicating planned actions to each other, the predators can evolve to know, or at least to act as if they know, each other's future actions.
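The team-as-individual representation can be sketched as follows. This is a hypothetical Python sketch, not Haynes et al.'s actual GP trees: the grid size, starting positions, and weighted move-scoring programs are all illustrative stand-ins. The key point it captures is that fitness is assigned to a whole team jointly, never to a single predator.

```python
import random

# Each individual in the population is an entire team of four predator
# programs, so selection operates on teams, not on single predators.
MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def random_program():
    """Stand-in for a GP tree: maps (predator, prey) positions to a move."""
    weights = {m: random.random() for m in MOVES}  # illustrative "genome"
    def program(pred, prey):
        # Prefer moves that reduce Manhattan distance; ties broken by weights.
        def score(m):
            nx, ny = pred[0] + m[0], pred[1] + m[1]
            return -(abs(nx - prey[0]) + abs(ny - prey[1])) + weights[m]
        return max(MOVES, key=score)
    return program

def team_fitness(team, prey=(5, 5), steps=10):
    """Run the four programs together; closer final positions mean fitter."""
    preds = [(0, 0), (0, 9), (9, 0), (9, 9)]
    for _ in range(steps):
        preds = [(pos[0] + prog(pos, prey)[0], pos[1] + prog(pos, prey)[1])
                 for pos, prog in zip(preds, team)]
    return -sum(abs(x - prey[0]) + abs(y - prey[1]) for x, y in preds)

population = [[random_program() for _ in range(4)] for _ in range(10)]
best = max(population, key=team_fitness)
```

Because the unit of selection is the team, a program that helps its teammates can spread even if the predator running it never touches the prey itself.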
In a separate study, Haynes et al. use case-based reasoning to allow predators to learn to cooperate. They begin with identical agents controlling each of the predators. The predators move simultaneously toward their closest capture positions. But because predators that try to occupy the same position all remain stationary, cases of deadlock arise. When deadlock occurs, the agents store the negative case so as to avoid it in the future, and they try different actions. By keeping track of which agents act in which way in given deadlock situations, the predators build up different case bases and thus become heterogeneous. Over time, the predators learn to stay out of each other's way while approaching the prey.
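The deadlock-avoidance scheme can be sketched roughly as follows. This is a hypothetical Python sketch: the actual case representation and indexing in Haynes et al.'s system differ, and the "situation" here is simplified to the predator's and prey's positions. It shows the core loop: contested moves are stored as negative cases, so initially identical agents accumulate different case bases and diverge.

```python
import random

MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def propose(pred, prey, case_base):
    """Pick the move toward the prey that is not stored as a negative case."""
    situation = (pred, prey)
    candidates = sorted(
        MOVES,
        key=lambda m: abs(pred[0] + m[0] - prey[0]) + abs(pred[1] + m[1] - prey[1]))
    for move in candidates:
        if (situation, move) not in case_base:
            return move
    return random.choice(MOVES)  # every move is a negative case: act randomly

def step(preds, prey, case_bases):
    """One simultaneous move; contested squares cause deadlock."""
    moves = [propose(pos, prey, cb) for pos, cb in zip(preds, case_bases)]
    targets = [(p[0] + m[0], p[1] + m[1]) for p, m in zip(preds, moves)]
    new_preds = []
    for i, tgt in enumerate(targets):
        if targets.count(tgt) > 1:
            # Deadlock: store the negative case and remain stationary.
            case_bases[i].add(((preds[i], prey), moves[i]))
            new_preds.append(preds[i])
        else:
            new_preds.append(tgt)
    return new_preds
```

For example, two predators flanking the prey at the same distance both target the same capture square, deadlock once, record the clash, and then approach from different sides on the next step.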
Finally, Haynes and Sen explore the possibility of evolving both the predators and the prey so that all of them try to improve their behaviors. Working in a toroidal world and starting with predator behaviors such as Korf's greedy heuristic and their own evolved GP predators, they then evolve prey that behave more effectively than random movement. Although one might expect this process to produce repeated improvement of the predator and prey behaviors with no convergence, a prey behavior emerges that always succeeds: the prey simply moves in a constant straight line. Even when allowed to re-adapt to this ``linear'' prey behavior, the predators are unable to reliably capture the prey. Haynes and Sen conclude that Korf's greedy solution to the pursuit domain relies on the prey's random movement, which guarantees locality of movement. Although there may yet be greedy solutions that can handle different types of prey behavior, none has been discovered. Thus the pursuit domain retains value for researchers in MAS.
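The two behaviors at the heart of this result can be sketched as follows. This is a hypothetical Python sketch: the grid size and the inclusion of a stay move are illustrative assumptions, not parameters from the study. It contrasts a Korf-style greedy predator step, which minimizes toroidal distance to the prey, with the ``linear'' prey, which simply repeats one fixed move forever.

```python
SIZE = 30  # illustrative toroidal grid size

MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)]

def torus_dist(a, b):
    """Manhattan distance with wraparound on each axis."""
    dx = min(abs(a[0] - b[0]), SIZE - abs(a[0] - b[0]))
    dy = min(abs(a[1] - b[1]), SIZE - abs(a[1] - b[1]))
    return dx + dy

def greedy_move(pred, prey):
    """Korf-style greedy step: pick the move minimizing distance to the prey."""
    return min(MOVES,
               key=lambda m: torus_dist(((pred[0] + m[0]) % SIZE,
                                         (pred[1] + m[1]) % SIZE), prey))

def linear_prey(prey, direction=(0, 1)):
    """The escaping behavior: always move in the same straight line."""
    return ((prey[0] + direction[0]) % SIZE, (prey[1] + direction[1]) % SIZE)
```

Since predators and prey move at the same speed, a predator chasing from directly behind never closes the gap, which is why the linear prey defeats purely greedy pursuit.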
Although Haynes and Sen convince the reader that the pursuit domain is still worth studying, the co-evolutionary results are less than satisfying. As mentioned above, one would intuitively expect the predators to be able to adapt to the linearly moving prey. For example, since they operate in a toroidal world, a single predator could place itself in the prey's line of movement and remain still. The remaining predators could then surround the prey at their leisure. The fact that the predators are unable to re-evolve such a solution suggests either that the predator evolution is not performed optimally, or that slightly more ``capable'' agents (i.e., agents able to reason more about past world states) would make for a more interesting study. Nevertheless, the competitive co-evolution in the pursuit domain begun by Haynes and Sen remains an intriguing open issue.
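The intuition behind the blocking counter-strategy is simple to verify. The following is a hypothetical sketch under the same illustrative toroidal assumptions; `blocker_capture` is an invented helper, not code from either study. A single stationary predator on the prey's line of movement eventually pins the linearly moving prey in place.

```python
SIZE = 30  # illustrative toroidal grid size

def blocker_capture(prey, direction, blocker, steps=2 * SIZE):
    """Simulate a linear prey until it is pinned in front of a stationary
    blocker sitting on its line of movement; return the pinned position."""
    for _ in range(steps):
        nxt = ((prey[0] + direction[0]) % SIZE, (prey[1] + direction[1]) % SIZE)
        if nxt == blocker:
            return prey  # prey cannot advance: it is held in place
        prey = nxt
    return None  # blocker was not on the prey's line
```

Once the prey is pinned, its movement is again local, which is exactly the condition the greedy predators need to close in.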