
Background

 

This chapter concentrates on the application of a population of co-evolved genetic recombination operator programs to the main population of programs they manipulate. Before the details of this process are explained, we outline background on related approaches and goals. The areas of work most relevant to this chapter are the language representation in which the main-population programs (and the SMART operator programs) are written, and alternative techniques for dynamic recombination.

PADO's language representation, introduced in the next section, is structured not as a tree or DAG, but as an arbitrary graph with directed edges. At least graphically, a PADO program is reminiscent of Turing machines and Finite State Automata. Our indebtedness to the idea of graphs as a representation for programs is wider and deeper than we can cite here. In evolutionary computation, the technique called evolutionary programming (EP) has been used to train graph structures such as finite state machines and neural networks (e.g., [Saravanan and Fogel 1994]).
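To make the idea of a program-as-graph concrete, the following is a minimal sketch, not the actual PADO language: each node applies an action to a stack and then follows one of its outgoing edges, choosing among them by the value on top of the stack, much as a state machine follows transitions. All names here (`Node`, `run`) are illustrative assumptions, not PADO's own constructs.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    action: callable                           # transforms the stack in place
    edges: list = field(default_factory=list)  # indices of successor nodes

def run(nodes, start=0, steps=20):
    """Execute a graph program for a bounded number of steps.

    Because the graph may contain cycles (unlike a tree or DAG),
    execution is cut off after `steps` node visits.
    """
    stack, current = [], start
    for _ in range(steps):
        node = nodes[current]
        node.action(stack)
        if not node.edges:
            break
        # Branch on the top of stack -- a stand-in for a real
        # language's branch-decision mechanism.
        current = node.edges[(stack[-1] if stack else 0) % len(node.edges)]
    return stack

# A two-node cyclic program: push 1, push the stack depth, repeat.
program = [Node(lambda s: s.append(1), [1]),
           Node(lambda s: s.append(len(s)), [0])]
result = run(program)
```

The bounded-step loop is the essential difference from tree-structured GP: an arbitrary directed graph has no guaranteed termination, so execution must be limited externally.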

One example worth mentioning is Karl Sims' work on evolving arbitrary graph structures that represent the morphologies of virtual animals. In [Sims 1994], for example, the question of how to ``crossover'' two graphs is addressed in a simple, static way. Essentially, each graph's nodes are randomly divided into two disjoint sets, one set from each graph is exchanged, and the ``dangling'' arcs are randomly reconnected. In section 3.4 we will return to this issue in search of more intelligent solutions.
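The scheme just described can be sketched as follows. This is an illustrative reconstruction, not Sims' implementation: a graph is assumed to be a dict mapping node ids to lists of successor ids, and the two parents are assumed to use disjoint id spaces.

```python
import random

def graph_crossover(graph_a, graph_b, seed=0):
    """Sims-style static graph crossover (illustrative sketch).

    Each parent's nodes are randomly split into two disjoint sets;
    the child keeps one set from each parent, and arcs left
    ``dangling'' (pointing outside the child) are reconnected to
    randomly chosen surviving nodes.
    """
    rng = random.Random(seed)

    def split(graph):
        nodes = list(graph)
        rng.shuffle(nodes)
        cut = rng.randint(1, len(nodes) - 1)
        return set(nodes[:cut]), set(nodes[cut:])

    keep_a, _ = split(graph_a)
    _, take_b = split(graph_b)

    # Assemble the child from one piece of each parent.
    child = {n: list(graph_a[n]) for n in keep_a}
    child.update({n: list(graph_b[n]) for n in take_b})

    # Randomly reconnect arcs whose destination was cut away.
    valid = list(child)
    for node, succs in child.items():
        child[node] = [s if s in child else rng.choice(valid) for s in succs]
    return child
```

Note that the random reconnection step is exactly what makes this operator ``static'': it pays no attention to what the exchanged subgraphs compute, which is the shortcoming section 3.4 takes up.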

The idea of evolving evolvability is not new. [Altenberg 1994] summarizes some past work and current ideas in this area. The concept of Adaptive Representations has been in the GA field for years; it encompasses all dynamic changes to the environment and to the rules that govern how an evolving population grows and changes. In both GA (e.g., [Julstrom 1995]) and GP (e.g., [Rosca and Ballard 1995]), current research investigates how an evolutionary system can be improved by augmenting the environment to change probabilities, encapsulate evolved structures, and choose among existing operators. In chapters 4 and 5 of this book, Angeline and Iba respectively discuss at length very different approaches within the same field of adaptive operators.

The idea of learned programs that change (and, one hopes, improve) other learned programs did not originate with this chapter. For example, Schmidhuber describes how one might in principle learn to do this in a recurrent neural network [Schmidhuber 1993]. In [Schmidhuber 1987] he describes how this might be done with more traditionally structured programs, where each group of programs learns how to helpfully change the group of programs below it in a potentially endless chain of program groups.






Eric Teller
Tue Oct 29 17:04:55 EST 1996