In this section, we will introduce our formalism. The purpose of the formalism is not directly to specify the workings of the agent's cognitive machinery. Instead, its purpose is to construct ``principled characterizations of interactions between agents and their environments to guide explanation and design''. The formalism, in other words, describes an agent's embodied activities in a particular environment. Having characterized the dynamics of those activities, it becomes possible to design suitable machinery. As a matter of principle, we want to design the simplest possible machinery that is consistent with a given pattern of interaction. We therefore make no a priori commitments about machinery. We do not favor any particular architecture until a particular activity has been analyzed. Nor do we make any a priori commitments about matters such as analog versus digital, ``planning'' versus ``reaction,'' and so on. Our experience has been that real lifeworlds and real activities incorporate a great deal of useful dynamic structure, and that any effort we invest in studying that structure will be repaid in parsimonious theories about machinery. But we intend our methods to be equally useful for investigating all types of activity and designing all types of machinery that might be able to participate in them.
The concept of a lifeworld will not appear as a specific mathematical entity in our formalism. The intuition, however, is this: while there is an objective material environment, the agent does not directly deal with all of this environment's complexity. Instead it deals with a functional environment that is projected from the material environment. That projection is possible because of various conventions and invariants that are stably present in the environment or actively maintained by the agent. The lifeworld should be understood as this functional world together with the projection and the conventions that create it. This section summarizes the formal model of environmental specialization given by Horswill; for proofs of the theorems, see the original paper. Subsequent sections will apply and extend the model.
We will model environments as state machines and the behavior of agents as policies mapping states to actions.
Figure 1: The corridor environment (left) and the serial product of the environment with itself, expressed as graphs. Function products have been written as pairs, i.e. inc x i is written as (inc, i). Identity actions (i and (i, i)) have been left undrawn to reduce clutter.
For example, consider a robot moving along a corridor with n equally spaced offices labeled 1, 2, 3, and so on. We can formalize this as the environment C_n = (Z_n, {i, inc, dec}), where Z_n = {0, 1, ..., n-1}, i is the identity function, and inc and dec map an integer i to i+1 and i-1, respectively, with the proviso that dec(0) = 0 and inc(n-1) = n-1 (see Figure 1). Note that the effect of performing the identity action is to stay in the same state.
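The corridor environment above can be sketched concretely as a state machine whose actions are functions from states to states. This is only an illustration, not part of the formalism; the function name make_corridor is our own.

```python
# A minimal sketch of the corridor environment C_n: states {0, ..., n-1}
# and three actions (i, inc, dec), each a function from states to states.

def make_corridor(n):
    """Return the corridor environment as a (states, actions) pair."""
    states = range(n)
    actions = {
        "i":   lambda s: s,                   # identity: stay in place
        "inc": lambda s: min(s + 1, n - 1),   # move right; inc(n-1) = n-1
        "dec": lambda s: max(s - 1, 0),       # move left;  dec(0) = 0
    }
    return states, actions

states, actions = make_corridor(5)
assert actions["inc"](4) == 4   # blocked at the right end of the corridor
assert actions["dec"](0) == 0   # blocked at the left end
assert actions["i"](2) == 2     # identity action leaves the state unchanged
```

Note that the provisos dec(0) = 0 and inc(n-1) = n-1 appear here as the clamping in min and max: moving against a wall leaves the robot where it is.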
We emphasize that a policy is a model of an agent's behavior, not of the causal/computational processes by which that behavior is exhibited. It specifies what an agent does in each state, not how it does it. It is thus a theoretical construct, not a data structure or algorithm in the agent's head. We will examine the implementation issues that surround policies in section 8.
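Since a policy is just a mapping from states to actions, a small sketch may make the idea concrete. The names below (goto_policy, run) are hypothetical illustrations, not constructs from the formalism; the corridor actions are the ones defined earlier.

```python
# A policy maps each state to an action name. It models *what* the agent
# does in each state, not *how*; this code is an illustration, not a
# claim about the agent's internal machinery.

def goto_policy(goal):
    """A policy that drives the robot toward the office `goal`."""
    def policy(state):
        if state < goal:
            return "inc"
        elif state > goal:
            return "dec"
        return "i"
    return policy

def run(policy, actions, state, steps):
    """Iterate the closed-loop system: apply the chosen action each step."""
    for _ in range(steps):
        state = actions[policy(state)](state)
    return state

# Corridor C_5, as defined above.
n = 5
actions = {"i":   lambda s: s,
           "inc": lambda s: min(s + 1, n - 1),
           "dec": lambda s: max(s - 1, 0)}

assert run(goto_policy(3), actions, 0, 10) == 3  # reaches the goal and stays
```

The behavior (converging on office 3) emerges from the coupling of the policy and the environment's dynamics, which is exactly the pattern of interaction the formalism is meant to characterize.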