This proposal was written for the AAAI-97 Workshop on Socially Intelligent Agents.

Towards Socially Intelligent Agent-Building

Phoebe Sengers

Department of Computer Science /
Program in Literary and Cultural Theory
Carnegie Mellon University

5000 Forbes Ave.
Pittsburgh, PA 15213 USA
phoebe@cs.cmu.edu
phone (412) 268-3608
fax (412) 268-5576

For the past several decades, AI has been the province of a small and relatively homogeneous segment of humanity: AI researchers. These researchers, mostly scientists, have tended to think of agents as problem-solvers, tools for getting work done. With the explosion of powerful personal computing and the Web, and the accompanying popularization of high tech, many more people are coming into contact with AI agents such as Julia, Ahoy!, and Firefly, as well as increasingly sophisticated PC software such as Dogz, Creatures, and the Japanese Tamagotchi. If old AI is chess players, shop-floor schedulers, and planners only a scientist could love, new AI is agents that non-experts encounter, that have social effects, that can communicate with users, and that may even be fun to have around. If these programs are to be built effectively, they cannot simply solve mathematically formalized problems in a rational, if not outwardly understandable, manner; rather, they must be designed with the social and cultural norms and expectations of the target audience in mind.

My research program focuses on `socially situated AI'[1], i.e. methodologies for building agents with reference not only to their physical but also to their social and cultural environment. I believe it is not enough to build agents that try to be social; the process of agent-building itself must become `socially intelligent' by being aware of the contexts into which agents will be inserted. Because the people who will interact with the agents are always kept in mind, agents can be tailored to maximize their effectiveness for their target audience. In this sense, agents built for social contexts can be more correct than purely rational, problem-solving-style agents.

More specifically, a social agent may need to communicate its intentions effectively to a user and to fulfill particular, culturally situated human social norms. To give an agent these kinds of abilities, the agent designer may need to consciously design the agent with an eye to the way in which it will be interpreted. Building agents that can function effectively in a social context is unlikely to be a simple add-on; it may affect the entire structure of an agent. Even parts previously considered pure problem-solving may need to be altered, both to make the agent's abilities and goals clear to a user and to allow the agent to work effectively in an environment governed by specific human social norms.

For example, my thesis work[2] applies the viewpoint of socially situated AI to the action-selection problem in behavior-based AI. Traditionally, this problem has been framed as finding the best action an agent can choose at any point to fulfill its internal goals. The agent is programmed to continuously consider its range of actions, repeatedly selecting new actions and behaviors based on its current drives and the world state. While this can deliver a reasonable quality of behavior in terms of fulfilling the agent's pre-programmed goals, the behavior may be very confusing to the user, since the agent is continuously switching from one activity to another. When seen in a social context, the action-selection problem can be more effectively redefined as what Tom Porter terms the `action-expression' problem[3]: what should the agent do at any point in order to best communicate its goals and activities to the user? Instead of focusing on behavioral correctness per se, the action-expression problem is interested in increasing the quality of the agent's behavior and its comprehensibility for the humans with whom the agent will interact.
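To make the contrast concrete, the traditional drive-based selection loop can be sketched in a few lines of Python. This is an illustrative toy, not the Oz Project implementation; the behavior names and the scoring scheme are assumptions invented for the example:

    # Toy sketch of traditional action-selection: at every tick, re-score all
    # behaviors against the current drives and adopt the highest scorer.
    # Names and numbers are illustrative assumptions only.

    def score(behavior, drives):
        # Each behavior is exactly as urgent as the drive it satisfies.
        satisfies = {"play_with_user": "boredom", "eat": "hunger"}
        return drives[satisfies[behavior]]

    def select_action(behaviors, drives):
        # Pick whichever behavior best serves the agent's internal goals now.
        return max(behaviors, key=lambda b: score(b, drives))

    drives = {"boredom": 0.9, "hunger": 0.2}
    behaviors = ["play_with_user", "eat"]

    for tick in range(5):
        drives["hunger"] += 0.25    # hunger builds as time passes
        drives["boredom"] -= 0.15   # play relieves boredom
        print(tick, select_action(behaviors, drives))

The printout flips from `play_with_user' to `eat' from one tick to the next: behaviorally correct, but opaque to a user watching the agent.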

The action-selection problem is often addressed by building more and more complex decision-making algorithms into the agent's mind, i.e. selecting the Right behavior. The action-expression problem is less focused on *what* the agent does and more interested in *how* the agent does it, i.e. engaging in and connecting its behaviors in an effective way. In my thesis work, behaviors are thought of as `activities to be communicated to the user'; they are designed with their comprehensibility in mind; and they are connected with *behavior transitions*, special behaviors that function to explain why the agent's behavior is changing and what its intentions are. For example, if an agent is playing a game with the user but is now hungry, most action-selection algorithms will cause the agent to abruptly leave the game and head for the food bowl, leaving the user wondering why the agent no longer wanted to play. With behavior transitions, the agent may communicate its hunger by slowing down and looking frequently at its food, end the game by returning the toy to the user, then happily head for its food bowl, with the user understanding that the agent is not angry at the user but has simply gotten hungry.
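A behavior-transition layer can be sketched in the same illustrative style. Again, the names and the transition table are assumptions made up for the example, not the thesis implementation:

    # Toy sketch of behavior transitions: when the winning behavior changes,
    # the agent first performs a short expressive sequence whose only job is
    # to communicate *why* the change is happening.
    # The table and names are illustrative assumptions only.

    TRANSITIONS = {
        ("play_with_user", "eat"): ["slow_down", "glance_at_food_bowl",
                                    "return_toy_to_user"],
    }

    def steps_for_switch(old_behavior, new_behavior):
        # Expand a bare behavior switch into the expressive steps preceding it.
        return TRANSITIONS.get((old_behavior, new_behavior), []) + [new_behavior]

    for step in steps_for_switch("play_with_user", "eat"):
        print(step)

The underlying decision is unchanged; what differs is that the switch is unfolded into steps a human can read as `the agent is getting hungry and politely ending the game'.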

Behavior transitions and action expression are one way in which socially aware agent-building can make agents more understandable to the user. In general, making agent-building socially aware means encouraging designers to think about how the behavior of their agents will be received by their target audience. Designers may then be more likely to build agents that can address their audience and respect their audience's social conventions, rather than pure problem solvers that neither care nor reason about what their social partners think of them.

Acknowledgments

This work was done as part of Joseph Bates's Oz Project, and was funded by the ONR through grant N00014-92-J-1298.

References

[1] Phoebe Sengers. "Socially Situated AI: What It Is and Why It Matters." In Proceedings of AAAI-96 Workshop on AI / A-Life and Entertainment. AAAI Technical Report WS-96-03. Menlo Park, CA: AAAI Press.

[2] Phoebe Sengers. "Symptom Management for Schizophrenic Agents." In Proceedings of AAAI-96. Menlo Park, CA: AAAI Press. vol. 2, page 1369.

[3] Tom Porter. "Depicting Perception, Thought, and Action in Toy Story." Invited Talk, Autonomous Agents '97.