Next: Resource management Up: Issues and Techniques Previous: Stable vs. evolving agents

Modeling of others' goals, actions, and knowledge

In the case of homogeneous agents, it was useful for agents to model the internal states of other agents in order to predict their actions. With heterogeneous agents, the problem of modeling others is much more complex. Now the goals, actions, and domain knowledge of the other agents may also be unknown and thus need modeling.

Without communication, agents are forced to model each other strictly through observation. Huber and Durfee consider a case of coordinated motion control among multiple mobile robots under the assumption that communication is prohibitively expensive [40]. The agents therefore try to deduce each other's plans by observing their actions: each robot (simulated or real) tries to infer the destinations of the other robots by watching how they move. Plan recognition of this type is also useful in competitive domains, since knowing an opponent's goals or intentions can make that opponent significantly easier to defeat.
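The flavor of this kind of plan recognition can be sketched as follows: given a robot's observed trajectory and a set of candidate destinations, score each destination by how consistently the robot has been heading toward it. This is only an illustrative proxy, not Huber and Durfee's actual algorithm, and the scenario data is invented.

```python
import math

def infer_goal(trajectory, candidate_goals):
    """Score each candidate destination by how well the observed
    movement directions align with the direction toward that goal
    (cosine similarity), and return the best-scoring destination."""
    scores = {}
    for goal in candidate_goals:
        total = 0.0
        for (x0, y0), (x1, y1) in zip(trajectory, trajectory[1:]):
            dx, dy = x1 - x0, y1 - y0          # observed step
            gx, gy = goal[0] - x0, goal[1] - y0  # direction toward goal
            step, dist = math.hypot(dx, dy), math.hypot(gx, gy)
            if step == 0 or dist == 0:
                continue
            total += (dx * gx + dy * gy) / (step * dist)
        scores[goal] = total
    return max(scores, key=scores.get)

# A robot moving up the diagonal is most plausibly headed for (10, 10).
path = [(0, 0), (1, 1), (2, 2), (3, 3)]
goals = [(10, 0), (10, 10), (0, 10)]
print(infer_goal(path, goals))  # (10, 10)
```

A real system would also handle noisy observations and goals that change over time; this sketch assumes clean position data.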

In addition to modeling agents' goals through observation, it is also possible to learn models of their actions. Wang's OBSERVER system allows an agent to incrementally learn the preconditions and effects of planning actions by observing domain experts [92]. After observing for a time, the agent can then experimentally refine its model by practicing the actions itself.
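A minimal version of learning action models from observation: estimate an action's preconditions as the facts true in every observed pre-state, and its effects as the facts consistently added or deleted. This is a toy rendering of the idea behind OBSERVER, not Wang's actual system, and the domain facts are invented.

```python
def learn_action_model(observations):
    """Given (state_before, state_after) fact-set pairs for one action,
    estimate preconditions, add effects, and delete effects by
    intersecting across all observed executions."""
    pre_sets = [set(before) for before, _ in observations]
    add_sets = [set(after) - set(before) for before, after in observations]
    del_sets = [set(before) - set(after) for before, after in observations]
    return (set.intersection(*pre_sets),
            set.intersection(*add_sets),
            set.intersection(*del_sets))

# Two observed executions of a hypothetical "pick up box" action.
obs = [
    ({"door_open", "at_door"},
     {"door_open", "at_door", "holding_box"}),
    ({"door_open", "at_door", "lights_on"},
     {"door_open", "at_door", "lights_on", "holding_box"}),
]
pre, adds, dels = learn_action_model(obs)
print(sorted(pre))  # ['at_door', 'door_open']
```

More observations shrink the precondition estimate toward the true preconditions; the experimentation phase the text mentions would then probe the remaining candidates directly.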

When modeling other agents, it may be useful to reason not only about what is true and what is false, but also about what is not known. Such reasoning about ignorance is called autoepistemic reasoning. For a theoretical presentation of an autoepistemic reasoning method in MAS, see [59].
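The core move in autoepistemic reasoning is to let the absence of knowledge license a conclusion. The following sketch applies default rules of the form "if the prerequisite holds and the blocker is not known, add the conclusion." The rule format and example facts are invented for illustration and do not come from [59], and the sketch ignores the nonmonotonic subtleties of a full autoepistemic logic.

```python
def apply_defaults(knowledge, defaults):
    """Repeatedly fire default rules (prerequisite, blocker, conclusion):
    if the prerequisite is known and the blocker is NOT known,
    conclude the conclusion. Iterate to a fixed point."""
    derived = set(knowledge)
    changed = True
    while changed:
        changed = False
        for prereq, blocker, conclusion in defaults:
            if prereq in derived and blocker not in derived \
                    and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

# Not knowing robot B's goal, assume by default that it is exploring.
facts = {"robot_b_nearby"}
rules = [("robot_b_nearby", "robot_b_goal_known", "assume_b_exploring")]
print(sorted(apply_defaults(facts, rules)))
# ['assume_b_exploring', 'robot_b_nearby']
```

If "robot_b_goal_known" later enters the knowledge base, the default no longer fires, which is exactly the ignorance-dependent behavior the text describes.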

Just as RMM is useful for modeling the states of homogeneous agents, it can be used in the heterogeneous scenario as well. Tambe takes it one step further, studying how agents can learn models of teams of agents. In an air combat domain, agents can use RMM to try to deduce an opponent's plan from its observable actions [85]. For example, a fired missile may not be visible, but the observation of a preparatory maneuver commonly used before firing could indicate that a missile has been launched.
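The recursion at the heart of RMM can be illustrated with a small two-player game: to choose my action, I predict the opponent's action by modeling the opponent one level more shallowly, bottoming out in a uniform-random assumption. This is a toy rendering of recursive modeling, not Tambe's implementation, and the payoff matrix is invented.

```python
def best_action(payoffs, my_actions, their_actions, depth, me=0):
    """Choose an action by recursively modeling the opponent.
    payoffs[(row_action, col_action)] -> (row_payoff, col_payoff);
    me=0 plays rows, me=1 plays columns.  At depth 0 the opponent is
    assumed to act uniformly at random."""
    other = 1 - me
    if depth == 0:
        def expected(a):
            return sum(payoffs[(a, b) if me == 0 else (b, a)][me]
                       for b in their_actions) / len(their_actions)
        return max(my_actions, key=expected)
    # Predict the opponent's choice by modeling them one level shallower.
    their_choice = best_action(payoffs, their_actions, my_actions,
                               depth - 1, me=other)
    def value(a):
        key = (a, their_choice) if me == 0 else (their_choice, a)
        return payoffs[key][me]
    return max(my_actions, key=value)

# Invented air-combat-flavored game: rows are my maneuvers,
# columns the opponent's.
game = {
    ("evade", "fire"): (-1, 1), ("evade", "hold"): (0, 0),
    ("attack", "fire"): (-2, 2), ("attack", "hold"): (2, -1),
}
print(best_action(game, ["evade", "attack"], ["fire", "hold"], depth=2))
# 'evade' -- expecting the opponent to fire, I evade
```

Deeper recursion means richer mutual modeling; real RMM additionally maintains probability distributions over the opponent's possible models rather than a single point prediction.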

When teams of agents are involved, the situation becomes more complicated. In this case, an opponent's actions may not make sense except in the context of a team maneuver. Then the agent's role within the team must be modeled. Tambe discusses the advantages of team modeling [86].
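One simple way to cast the team-modeling idea: interpret an individual's observed action by asking which roles in a hypothesized team maneuver it is consistent with. The maneuver and role definitions below are invented for illustration and are not Tambe's formalism.

```python
def consistent_roles(maneuver, observed_action):
    """Return the roles in a hypothesized team maneuver whose expected
    actions include the observed action -- i.e., the individual action
    is interpreted through its possible team context."""
    return [role for role, expected in maneuver.items()
            if observed_action in expected]

# Hypothetical "pincer" maneuver with three roles.
pincer = {
    "left_flank":  {"turn_left", "accelerate"},
    "right_flank": {"turn_right", "accelerate"},
    "bait":        {"decelerate", "turn_back"},
}
print(consistent_roles(pincer, "turn_right"))  # ['right_flank']
```

An observed "accelerate", by contrast, is ambiguous between both flanking roles, which is precisely why an action may not make sense until the agent's role within the team is pinned down.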

One reason that modeling other agents might be useful is that agents sometimes depend on each other for achieving their goals. Unlike in game theory, where agents can choose whether to cooperate based on their utility estimates, there may be actions that require cooperation for successful execution. For example, two robots may be needed to successfully push a box, or, as in the pursuit domain, several agents may be needed to capture an opponent. Sichman and Demazeau analyze how conflicting mutual models among such co-dependent agents can arise and be dealt with [79].
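Actions that require cooperation can be captured as joint actions with a minimum number of capable participants. The sketch below checks a plan's feasibility under such requirements; the action names, agent-count numbers, and capability sets are invented, and this is not Sichman and Demazeau's dependence-network formalism.

```python
def feasible(plan, capabilities, required):
    """Check that every joint action in a plan has at least the
    required number of committed agents capable of performing it.
    plan: list of (action, agents); required: action -> min agents."""
    for action, agents in plan:
        capable = [a for a in agents if action in capabilities[a]]
        if len(capable) < required[action]:
            return False
    return True

caps = {"r1": {"push_box", "scout"}, "r2": {"push_box"}}
needs = {"push_box": 2, "scout": 1}  # box pushing takes two robots

print(feasible([("push_box", ["r1", "r2"])], caps, needs))  # True
print(feasible([("push_box", ["r1"])], caps, needs))        # False
```

The second plan fails because r1 depends on r2 for the box push, which is exactly the co-dependence that makes accurate mutual models matter: if r1's model wrongly predicts that r2 will help, the action fails.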






Peter Stone
Wed Sep 24 11:54:14 EDT 1997