This piece was written for the IJCAI-95 Workshop on AI, Art, and Entertainment.
AI applied to art and entertainment enters a whole new domain. It would only be natural for the science and engineering practices of AI to have to change --- or at least expand --- in response to the new challenges posed by this novel area. In this paper I will explore some of the fundamental changes in mindset that may have to be made in order to develop AI programs that do useful things in this new domain. Specifically, the mindset and assumptions of the `soft' field of art and entertainment are very different from those of the `hard' field of AI. My argument is that there is a culture clash between these two fields as they are now understood, and that making AI programs that are truly useful to people interested in creating entertaining social works will be much easier if we explicitly consider the differences between the fields and learn to develop a new kind of AI technology that is also at home in art and entertainment.
AI practitioners venturing into the world of art and entertainment are a little bit like the Europeans when they first came across America. A whole new field lies open for exploration, a territory where our technology has never been used and where it could point to all kinds of interesting new ways of living. We could, if we choose to, proceed the same way the Europeans did: deal with the pesky natives by handing them technology that destroys their culture, clear them out of all the best areas, and start tearing up the environment to make it more amenable to the tools we already have. Aside from the obvious ethical implications, this strategy has a major fault. In ignoring the cultures and technologies of the people who were already there, and in attempting to go boldly forward with tools that were not developed with this environment in mind, we would be in danger of destroying, or at the very least failing to connect with, precisely the things that attracted so many of us to this new area in the first place: not a quick buck, but the inherent interest of working in fields and with colleagues that engage people in positive, creative, fascinating, and somewhat alien ways every day. By imposing our own standards and worldview on this domain, rather than considering and perhaps adapting to those that are already in place, we are in danger of creating technology that is at best irrelevant to the domain and at worst destructive of it.
Obviously this story is a little overstated: I am not accusing AI researchers of plotting to take over art and put artists in intellectual backwater reservations. Yes, AI currently has more funding and political backing than art, and more epistemological clout and technological gee-whiz-ness than either art or entertainment; still, a wholesale takeover of the standards of art seems impossible. But then, I don't think the Europeans were plotting a dastardly takeover either; the destruction of native cultures and the environment came about gradually, as a side-effect of a basic culture clash that the Europeans themselves did not fully understand. Cultural relativism was not a fashionable concept in the first few centuries of what we now know as the United States, and as a consequence the Europeans took their own culture for granted. They were blind to the assumptions inherent in their culture and technology, and because of this had a hard time both respecting the cultures and technologies that were already there and adapting their way of life to the new environment. Instead, many chose to bulldoze that environment out of their way. The question is not who is doing what to whom, but how we can avoid making the same mistake the Europeans did: ending up, in our enthusiasm about our technology, missing the artistic point entirely.
Certainly many AI researchers in this area are already fascinated by the arts and the world of entertainment. Dialogue between the two sides is helpful in keeping the interaction between researchers and artists from devolving into the sort of `imperialism' Herb Simon uses to characterize his own forays into literary criticism. This dialogue has already begun, including the AAAI 1994 Believable Agents Symposium and 1995 Interactive Story Symposium, at which AI researchers, artists, and other interested people exchanged their views. I want to propose that there are a number of other things that we as AI researchers can do to make our attempts to interface with art and entertainment as easy on ourselves, and as undisruptive to `native' traditions, as possible. When we consider how to adapt AI, we tend to spend most of our intellectual energy considering how the technology we already have might apply to the new domain. An obvious, though not necessarily easy, way to make this approach more domain-friendly is also to think about what kinds of (perhaps not-yet-developed) technology might fit naturally into the new domain: What kinds of things would artists actually like to use? What kinds of things really fit in with their notions of what they are doing?
To do this it is not enough to take a straw poll of artists and creators of entertainment, because they probably understand the possibilities of AI technology as little as we understand their work. It really requires an understanding of the culture and possibilities of both sides to be able to develop a technology from the old domain that fits comfortably and more-or-less naturally into the new domain. In particular, I am suggesting that we consider ways in which their cultures differ, and examine how AI technology is an integral part of the culture of AI. With this background we will be able to start considering alternative AI architectures that could better fit into the existing culture and goals of art and entertainment --- or expand upon them in interesting ways. Here I will (nonexhaustively) list a number of ways in which the arts and entertainment traditionally differ from AI as an engineering discipline, along with some of the implications of these differences for AI architectures that are meant to work in these new fields.
In science and engineering, we have the luxury of considering ourselves and our audience, to a certain extent, to be objective onlookers, so that we need concern ourselves only with what our programs are `truly' capable of in the eyes of our colleagues. In the arts and entertainment, however, we are dealing with a more heterogeneous audience, and here the `actual' capabilities of the program as defined and understood by computer science are less interesting than the variety of ways in which the audience (as well as the designer) of the work may perceive it. A work will not be successful if it cannot communicate with the audience for which it is intended.
This means that for our audiences, deep structure is less important than surface appearances (although, of course, it is important to the extent that it shows up in the surface). This leads to difficult and, to us, unnatural changes in the standards by which programs may be judged. This is exemplified by Eliza, which, though laughable scientifically, still built up a cult following outside of computer science. In addition, thinking about the particular audience of the program as a demographic group may actually make the process of program design easier, just as limiting the design of an agent to the particular environment it will encounter simplifies matters. The creator can take advantage of a knowledge of what that group accepts, ignores, and finds interesting to decide how to build a system that will best speak to that particular audience. For AI researchers, whose audience has been other scientists, this has traditionally meant building goal-oriented, task-based tools; for other audiences, rationality may be unnecessary or even seem unnatural.
What difference does this make to AI architectures? Programs in AI are generally designed to be able to go out into the world on their own. The programming paradigm is that the user will take the architecture and design an agent or program that can do certain things. In an alternative paradigm that could be more amenable to the artistic point of view, architectures and programs would be seen as a method of communication between the creator and his or her audience. For example, in designing artificial characters it would become less important that an agent can be programmed to perform particular behaviors, and more important that the creator can specify the sequence or structure of behaviors at the level at which the audience will interpret them. We will need to talk less about the behaviors of which the agent in and of itself is capable, and more about the signs the agent communicates to the audience, and the ways in which the builder can manipulate those signs.
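To make this contrast concrete, here is a minimal toy sketch of what such an authoring layer might look like. It is purely illustrative and not drawn from any real system: all the names (the sign and behavior vocabularies, the `perform` function) are hypothetical. The point is only that the creator writes scripts in the vocabulary of audience-legible signs, which the architecture then expands into low-level behaviors.

```python
# Toy sketch: authoring a character at the level of audience-legible
# signs rather than low-level behaviors. All names are hypothetical.

# Low-level behaviors the agent can execute --- the vocabulary in which
# agents are traditionally programmed.
BEHAVIORS = {
    "lower_gaze": "head tilts down",
    "step_back": "moves away slightly",
    "pause": "stops moving",
}

# Signs the audience reads, each realized as a sequence of behaviors.
# The creator authors in THIS vocabulary, not the one above.
SIGNS = {
    "shyness": ["lower_gaze", "pause"],
    "fear": ["step_back", "lower_gaze"],
}

def perform(sign_script):
    """Expand a creator-authored script of signs into executable behaviors."""
    return [behavior for sign in sign_script for behavior in SIGNS[sign]]

# The creator specifies what the audience should perceive...
script = ["shyness", "fear"]
# ...and the architecture translates it into what the agent does.
print(perform(script))
```

The design choice this sketch is meant to highlight: the unit of authorship is the sign (what the audience will interpret), and the mapping from signs to behaviors is where the architecture does its work on the creator's behalf.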
This difference in worldview has various implications for AI-based arts and entertainment applications. The most obvious one (which few AI researchers would dispute) is that electronic media cannot be considered to supersede the old media; each category of media allows for particular kinds of creative expression, neither of which can indisputably be said to be `better' than the other. Another uncontroversial effect is that when we are creating computational works we need to think of the possibilities inherent in the medium itself, rather than trying to translate directly from old media. Merely copying old media into computational format gives you a pale imitation of a medium, rather than exploiting the possibilities inherent in the machine.
A more subtle effect of this change in point of view comes from the notion of constraints. Artists and craftspeople who have explored a medium understand that media, while highly flexible, also impose constraints. Media do not allow for unencumbered `implementation' of ideas, but rather give form and shape to ideas so that they become embodied in particular and highly specific ways. That embodiment may require subtle or drastic alterations of the initial idea: perhaps the original idea didn't really `work,' or new ideas spring up from the interaction of creator and material. It would only be natural for this feeling of constraint --- an understanding of the way the medium shapes the things one says and, in effect, prevents one from saying certain things --- to extend to the computational medium. In this sense, by working with artists or working as artists themselves, computer scientists can get an understanding of the ways in which the medium in which they work enforces the creation of particular styles of product, as well as some intuitive feel for the sorts of things that remain outside the possibility of computational implementation.
Herbert A. Simon. "Literary Criticism: A Cognitive Approach." Unpublished manuscript.