
1 Introduction

Evolutionary Computation (EC) is a class of algorithms that can be applied to open-ended learning problems in Artificial Intelligence. Traditionally, such algorithms evolve fixed-length genomes under the assumption that the space of the genome is sufficient to encode the solution. A genome containing $n$ genes encodes a single point in an $n$-dimensional search space. In many cases, a solution is known to exist somewhere in that space. For example, the global maximum of a function of three arguments must exist in the three-dimensional space defined by those arguments. Thus, a genome of three genes can encode the location of the maximum.
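
For illustration, here is a minimal Python sketch of search over a fixed-length genome (the toy objective f and the simple (1+1) hill-climbing loop are assumptions made for the example, not methods used in this paper):

\begin{verbatim}
import random

def f(x, y, z):
    # Toy objective (an illustrative assumption): a single
    # global maximum at (1, 2, 3).
    return -((x - 1)**2 + (y - 2)**2 + (z - 3)**2)

# A fixed-length genome: exactly three genes, one per argument of f.
genome = [random.uniform(-10, 10) for _ in range(3)]

for _ in range(10000):
    # Perturb every gene with small Gaussian noise.
    child = [g + random.gauss(0, 0.1) for g in genome]
    # Keep the child only if it improves fitness: a (1+1) strategy.
    if f(*child) > f(*genome):
        genome = child

print(genome)  # converges toward [1, 2, 3]
\end{verbatim}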

However, many common structures are defined by an indefinite number of parameters. In particular, solution types that can contain a variable number of parts can be represented by any number of parameters above some minimum. For example, the number of parts in neural networks, cellular automata, and electronic circuits can vary. In fact, two neural networks with different numbers of connections and nodes can in theory represent the same function. Thus, it is not clear what number of genes is appropriate for solving a particular problem. Researchers evolving fixed-length genotypes must use heuristics to estimate a priori the appropriate number of genes to encode such structures.
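
For example, a network genome can be written as a variable-length list of connection genes. In the Python sketch below (the gene fields are our own shorthand), both genomes are valid encodings for the same task even though they differ in length, so no single gene count is obviously ``correct'':

\begin{verbatim}
from dataclasses import dataclass

@dataclass
class ConnectionGene:
    in_node: int
    out_node: int
    weight: float

# A minimal genome: two inputs (0, 1) wired directly to an output (2).
small = [ConnectionGene(0, 2, 0.5), ConnectionGene(1, 2, -0.3)]

# A longer genome for the same task, routing through hidden node 3.
large = [ConnectionGene(0, 3, 0.5), ConnectionGene(1, 3, -0.3),
         ConnectionGene(3, 2, 1.0), ConnectionGene(1, 2, 0.1)]
\end{verbatim}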

A major obstacle to using fixed-length encodings is that heuristically determining the appropriate number of genes becomes impossible for very complex problems. For example, how many nodes and connections are necessary for a neural network that controls a ping-pong playing robot? Or, how many bits are needed in the neighborhood function of a cellular automaton that performs information compression? The answers to these questions can hardly be based on empirical experience or analytic methods, since little is known about the solutions. One possible approach is to simply make the genome extremely large, so that the space it encodes is extremely large and a solution is likely to lie somewhere within. Yet the larger the genome, the higher-dimensional the space that evolution must search. Even if a ping-pong playing robot lies somewhere in the 10,000-dimensional space of a 10,000-gene genome, searching such a space may take prohibitively long.

Even more problematic are open-ended problems where phenotypes are meant to improve indefinitely and there is no known final solution. For example, in competitive games, estimating the complexity of the ``best'' possible player is difficult because such an estimate implicitly assumes that no better player can exist, which we cannot always know. Moreover, many artificial life domains are aimed at evolving increasingly complex artificial creatures for as long as possible. Such continual evolution is difficult with a fixed genome for two reasons: (1) When a good strategy is found in a fixed-length genome, the entire representational space of the genome is used to encode it. Thus, the only way to improve it is to alter the strategy, thereby sacrificing some of the functionality learned over previous generations. (2) Fixing the size of the genome in such domains arbitrarily fixes the maximum complexity of evolved creatures, defeating the purpose of the experiment.

In this paper, we argue that structured phenotypes can be evolved effectively by starting evolution with a population of small, simple genomes and systematically elaborating on them over generations by adding new genes. Each new gene expands the search space, adding a new dimension that previously did not exist. That way, evolution begins searching in a small, easily optimized space, and adds new dimensions as necessary. This approach is more likely to discover highly complex phenotypes than an approach that begins searching directly in the intractably large space of complete solutions. In fact, natural evolution utilizes this strategy, occasionally adding new genes that lead to increased phenotypic complexity (Martin 1999; Section 2). In biology, this process of incremental elaboration is called complexification, which is why we use this term to describe our approach as well.

In evolutionary computation, complexification refers to expanding the dimensionality of the search space while preserving the values of the majority of dimensions. In other words, complexification elaborates on the existing strategy by adding new structure without changing the existing representation. Thus the strategy not only becomes different; the number of possible responses to situations it can generate also increases (Figure 1).

[Figure 1: alteration vs. elaboration under complexification (original caption truncated in source).]
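
In genotypic terms, complexification can be pictured as a mutation that appends a new gene while leaving all existing gene values untouched. The Python fragment below is a minimal sketch of this idea (it is not NEAT's actual structural operator, which is described later):

\begin{verbatim}
import random

def complexify(genome):
    # Expand the search space by one dimension: append a new gene
    # without disturbing values that are already optimized.
    return genome + [random.gauss(0, 1)]

genome = [0.7, -1.2]         # a small, already-functioning solution
genome = complexify(genome)  # now evolution searches in 3 dimensions
\end{verbatim}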

In the EC domain of neuroevolution (i.e. evolving neural networks), complexification means adding nodes and connections to already-functioning neural networks. This is the main idea behind NEAT (NeuroEvolution of Augmenting Topologies; Stanley and Miikkulainen 2002b,c,d), the method described in this paper. NEAT begins by evolving networks without any hidden nodes. Over many generations, new hidden nodes and connections are added, complexifying the space of potential solutions. In this way, more complex strategies elaborate on simpler strategies, focusing search on solutions that are likely to maintain existing capabilities.
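
As a rough sketch of how such structure can be added, the Python fragment below splits an existing connection with a new hidden node so that the network's behavior is initially disturbed as little as possible (the field names are our own shorthand; NEAT's full operators, including its historical markings, are described later):

\begin{verbatim}
import random
from dataclasses import dataclass

@dataclass
class Conn:
    src: int
    dst: int
    weight: float
    enabled: bool = True

def add_node(conns, new_node_id):
    # Pick an enabled connection and disable it.
    old = random.choice([c for c in conns if c.enabled])
    old.enabled = False
    # Route through a new hidden node: the incoming link gets
    # weight 1.0 and the outgoing link inherits the old weight.
    conns.append(Conn(old.src, new_node_id, 1.0))
    conns.append(Conn(new_node_id, old.dst, old.weight))
\end{verbatim}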

Expanding the size of the genome has been found effective in previous work. NEAT advances this idea by making it possible to search a wide range of increasingly complex network topologies simultaneously. This process is based on three technical components: (1) keeping track of which genes match up with which among differently sized genomes throughout evolution; (2) speciating the population so that solutions of differing complexity can exist independently; and (3) starting evolution with a uniform population of small networks. These components work together in complexifying solutions as part of the evolutionary process. In prior work, NEAT has been shown to solve challenging reinforcement learning problems more efficiently than other neuroevolution methods (Stanley and Miikkulainen 2002b,c,d). The focus of these studies was on optimizing a given fitness function through complexifying evolution.
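
For example, component (1) can be supported by stamping each gene with a historical marking when it first appears, so that two genomes of different lengths can be aligned gene by gene. The Python sketch below illustrates only this bookkeeping idea (NEAT's actual mechanism is detailed later):

\begin{verbatim}
from dataclasses import dataclass

@dataclass(frozen=True)
class Gene:
    innovation: int  # historical marking assigned at the gene's creation
    weight: float

def align(genome_a, genome_b):
    a = {g.innovation: g for g in genome_a}
    b = {g.innovation: g for g in genome_b}
    matching = a.keys() & b.keys()      # genes shared by both genomes
    non_matching = a.keys() ^ b.keys()  # disjoint and excess genes
    return matching, non_matching

g1 = [Gene(1, 0.5), Gene(2, -0.3), Gene(4, 0.9)]
g2 = [Gene(1, 0.1), Gene(3, 0.7)]
align(g1, g2)  # returns ({1}, {2, 3, 4})
\end{verbatim}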

In this paper, we focus on open-ended problems that have no explicit fitness function; instead, fitness depends on comparisons with other agents that are also evolving. The goal is to discover creative solutions beyond a designer's ability to define a fitness function. It is difficult to continually improve solutions in such coevolutionary domains because evolution tends to oscillate between idiosyncratic yet uninteresting solutions. Complexification encourages continuing innovation by elaborating on existing solutions.
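
In such domains fitness is relative rather than absolute, which the following Python sketch makes concrete (play_game and the sampling scheme here are placeholders, not the evaluation procedure used in our experiments):

\begin{verbatim}
import random

def relative_fitness(individual, opponents, play_game, n_games=8):
    # Score by competition rather than by an explicit fitness function:
    # play sampled opponents and count wins (play_game is assumed to
    # return 1 for a win, 0 otherwise). As the opponents improve, the
    # same individual's score can drop, which is what makes sustained
    # progress in coevolution difficult.
    sample = random.sample(opponents, min(n_games, len(opponents)))
    return sum(play_game(individual, opp) for opp in sample)
\end{verbatim}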

In order to demonstrate the power of complexification in coevolution, NEAT is applied to the competitive robot control domain of Robot Duel. There is no known optimal strategy in this domain, but there is substantial room for increasingly sophisticated strategies. The main results were that (1) evolution did complexify when possible, (2) complexification led to elaboration, and (3) significantly more sophisticated and successful strategies were evolved with complexification than without it. These results imply that complexification makes it possible to establish a coevolutionary arms race that achieves a significantly higher level of sophistication than is otherwise possible.

We begin by reviewing biological support for complexification as well as past work in coevolution, then describe the NEAT method and present the experimental results.

