THE CONCEPT OF THE ENVIRONMENT

Intuitively, the notion of ``the environment'' in AI and robotics refers to the relatively enduring and stable set of circumstances that surround some given individual. My environment is probably not the same as yours, though they may be similar. On the other hand, although my environment starts where I leave off (at my skin, perhaps), it has no clear ending point. Nor is it necessarily defined in terms of metric space; if physically distant circumstances have consequences for my life (via the telephone, say), then they are properly regarded as part of my environment as well. The environment is where agents live, and it determines the effects of their actions. The environment is thus a matter of importance in computational modeling; only if we know what an agent's environment is like can we determine whether a given pattern of behavior is adaptive. In particular, we need a positive theory of the environment, that is, some kind of principled characterization of those structures, dynamics, or other attributes of the environment in virtue of which adaptive behavior is adaptive.

Herbert Simon discussed the issue in his pre-AI work. His book Administrative Behavior [37], for example, presents the influential theory that later became known as limited rationality. In contrast to the assumption of rational choice in classical economics, Simon describes a range of cognitive limitations that make fully rational decision-making in organizations impracticable. Yet organizations thrive anyway, he argues, because they provide each individual with a structured environment that ensures that their decisions are good enough. The division of labor, for example, compensates for the individual's limited ability to master a range of tasks. Structured flows of information, likewise, compensate for the individual's limited ability to seek this information out and judge its relevance. Hierarchy compensates for the individual's limited capacity to choose goals. And fixed procedures compensate for individuals' limited capacity to construct procedures for themselves.

In comparison to Simon's early theory in Administrative Behavior, AI has downplayed the distinction between agent and environment. In Newell and Simon's early work on problem solving [29], the environment is reduced to the discrete series of choices that it presents in the course of solving a given problem. The phrase ``task environment'' came to refer to the formal structure of the search space of choices and outcomes. This is clearly a good way of modeling tasks such as logical theorem-proving and chess, in which the objects being manipulated are purely formal. For tasks that involve activities in the physical world, however, the picture is more complex. In such cases, the problem-solving model analyzes the world in a distinctive way. Newell and Simon's theory does not treat the world and the agent as separate constructs. Instead, the world shows up, so to speak, phenomenologically: in terms of the differences that make a difference for a particular agent, given its representations, actions, and goals. Agents with different perceptual capabilities and action repertoires, for example, will inhabit different task environments, even though their physical surroundings and goals might be identical.
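
To make the formalism concrete, here is a minimal sketch (in Python; the names TaskEnvironment and breadth_first_solve are our illustrative inventions, not Newell and Simon's notation) of a task environment as exactly such a search space: an initial state, an action repertoire that generates the available choices, and a goal test. Notice that the physical world appears nowhere in it; only the distinctions the agent can represent and act upon do.

    from collections import deque
    from dataclasses import dataclass
    from typing import Callable, Hashable, Iterable, List, Optional

    @dataclass(frozen=True)
    class TaskEnvironment:
        # A task environment in the problem-space sense: not the physical
        # surroundings, but the choices and outcomes the agent distinguishes.
        initial: Hashable                                    # starting state
        operators: Callable[[Hashable], Iterable[Hashable]]  # action repertoire
        is_goal: Callable[[Hashable], bool]                  # goal test

    def breadth_first_solve(env: TaskEnvironment) -> Optional[List[Hashable]]:
        # Search the space of choices and outcomes for a path to a goal state.
        frontier = deque([[env.initial]])
        visited = {env.initial}
        while frontier:
            path = frontier.popleft()
            state = path[-1]
            if env.is_goal(state):
                return path
            for successor in env.operators(state):
                if successor not in visited:
                    visited.add(successor)
                    frontier.append(path + [successor])
        return None                                          # goal unreachable

    # Two agents facing the same numbers and the same goal, but with
    # different action repertoires, inhabit different task environments.
    env_a = TaskEnvironment(0, lambda n: (n + 3, n + 5), lambda n: n == 11)
    env_b = TaskEnvironment(0, lambda n: (n + 1,), lambda n: n == 11)
    print(breadth_first_solve(env_a))   # a path such as [0, 3, 6, 11]
    print(breadth_first_solve(env_b))   # [0, 1, 2, ..., 11]

The two environments at the end of the sketch face the same numbers and the same goal, yet because their action repertoires differ they define different search spaces; this is the sense in which the task environment is agent-relative.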

Newell and Simon's theory of the task environment, then, tends to blur the difference between agent and environment. As a framework for analysis, we find the phenomenological approach valuable, and we wish to adapt it to our own purposes. Unfortunately, Newell and Simon carry this blurring into their theory of cognitive architecture. They are often unclear about whether problem solving is an activity that takes place wholly within the mind, or whether it unfolds through the agent's potentially complicated interactions with the physical world. This distinction does not arise in cases such as theorem-proving and chess, or in any other domain whose workings are easily simulated through mental reasoning. But it is crucial in any domain in which actions have uncertain outcomes. Even though we wish to retain Newell and Simon's phenomenological approach to task analysis, therefore, we do not wish to presuppose that our agents reason by conducting searches in problem spaces. Instead, we wish to develop an analytical framework that can guide the design of a wide range of agent architectures. In particular, we want an analytical framework that will help us design the simplest possible architecture for any given task.


