
Positive Results with MBPR


All of the experiments in this paper were run under four experimental conditions, run together for comparison. Below is a list of the four types of experiments as they are annotated in the tables, with an explanation of what each means.

When N = 1, no indexing is necessary: READ becomes a terminal and WRITE becomes a single-arity function. When N = 0, READ is no longer included in the function set. Single-arity WRITE is left in the function set to save values, but those values cannot be referenced later by the program. The ProblemType-RPBR-M runs were only performed when N > 0, because when N = 0 the memory functions have no effect on the output in the RPBR case, and are therefore equivalent to their inert counterparts used in the ProblemType-RPBR-X runs.
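The degenerate cases above can be made concrete with a small sketch. The class name, the pass-through return value of WRITE, and the zero-initialized cells are illustrative assumptions, not the paper's actual implementation; the inert versions follow the later description of operators that simply pass back their last argument.

```python
# A minimal sketch, assuming indexed memory of N cells and a pass-through
# return value for WRITE. READ is removed from the function set when N = 0,
# so read() is never called in that case.

class IndexedMemory:
    """N cells of indexed memory backing the READ/WRITE function set."""

    def __init__(self, n):
        self.n = n
        self.cells = [0] * n  # assumption: cells start at zero

    def read(self, i=0):
        # With N = 1, READ takes no index and acts as a terminal.
        return self.cells[i % self.n]

    def write(self, value, i=0):
        # With N = 1 (or N = 0), WRITE is single-arity: no index argument.
        if self.n > 0:
            self.cells[i % self.n] = value
        return value  # assumed pass-through so WRITE has a return value

# Inert counterparts used in the -X runs: same arities, no side effects,
# each simply passing back its last argument.
def inert_read(i):
    return i

def inert_write(value, i=None):
    return value if i is None else i
```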

Table 1: The 5-Parity problem

All the results presented in this paper had several methodological points in common. Where the methodologies differed, the differences will be mentioned when the data is presented. For all of the experiments, we used tournament selection with a sample size of 8. Unless otherwise noted, the maximum number of generations was 101. All the runs discussed in this paper with a maximum node limit of 2000 or above had an initial maximum depth of 10. In addition, the runs with the Gudermannian function on the N=0 case with a maximum node limit of 1000 also used an initial maximum depth of 10. All other runs had an initial maximum depth of 8. The recombination percentages for all experiments were: 10% copy, 1% mutation, 89% crossover (10% leaves, 79% internal nodes).
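The selection and operator-choice parameters above can be sketched as follows. The function names are hypothetical; the numbers (sample size 8; 10% copy, 1% mutation, 10% leaf crossover, 79% internal-node crossover) come from the text.

```python
import random

def tournament_select(population, fitness, k=8):
    """Tournament selection: return the fittest of k randomly
    sampled individuals (the paper uses a sample size of 8)."""
    sample = random.sample(population, k)
    return max(sample, key=fitness)

def choose_operator(rng=random.random):
    """Pick a recombination operator with the percentages from the text:
    10% copy, 1% mutation, 89% crossover, where the crossover probability
    splits into 10% at leaves and 79% at internal nodes."""
    r = rng()
    if r < 0.10:
        return "copy"
    if r < 0.11:
        return "mutation"
    if r < 0.21:
        return "leaf-crossover"
    return "internal-crossover"
```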

The first problem on which we tested the MBPR paradigm was the 5-Parity problem. There were two motivations for choosing this problem. The first was that, as the most classic GP benchmark, it is one with which most readers are familiar. The second reason for choosing a parity problem was the desire for a simple but non-trivial problem in which a subtree has often been observed to express a correct calculation.

This problem was run on a medium-grained parallel Parsytec computer system consisting of sixty-four 80 MHz PowerPC 601 processors arranged in a toroidal mesh, with a Pentium-class host PC. The so-called distributed genetic algorithm was used for parallelization, with a population size of Q = 500 at each of the D = 64 demes. On each generation, four boatloads of emigrants, each consisting of B = 5% (the migration rate) of the node's subpopulation (selected by fitness), were dispatched to the four toroidally adjacent processing nodes [Andre and Koza 1996]. This hardware was used only for the Boolean runs; all other runs were performed on a Pentium.
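The migration pattern described above can be sketched as follows. This is a simplified model under stated assumptions: an 8x8 grid of demes, the best B individuals chosen as emigrants (the text says fitness-based selection without giving the exact mechanism), and immigrants replacing each deme's worst members.

```python
# A sketch of migration on a toroidal mesh of demes, assuming an 8x8 grid,
# "take the best" emigrant selection, and replace-the-worst immigration.

def toroidal_neighbors(idx, rows=8, cols=8):
    """The four toroidally adjacent node indices (north, south, west, east)."""
    r, c = divmod(idx, cols)
    return [((r - 1) % rows) * cols + c,
            ((r + 1) % rows) * cols + c,
            r * cols + (c - 1) % cols,
            r * cols + (c + 1) % cols]

def migrate(demes, fitness, rate=0.05):
    """Each deme ships `rate` of its members (fittest first) to each of its
    four toroidal neighbors; immigrants displace the worst locals."""
    b = max(1, int(rate * len(demes[0])))  # boatload size B
    outgoing = [sorted(d, key=fitness, reverse=True)[:b] for d in demes]
    new_demes = []
    for i, d in enumerate(demes):
        immigrants = [ind for n in toroidal_neighbors(i) for ind in outgoing[n]]
        survivors = sorted(d, key=fitness, reverse=True)[:len(d) - len(immigrants)]
        new_demes.append(survivors + immigrants)
    return new_demes
```

Since migration is only between adjacent nodes, good solutions diffuse gradually across the mesh rather than taking over all demes at once, which is the usual motivation for this style of distributed GA.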

The results on the 5-parity problem are presented in table 1. Not only does MBPR take only 33% of the computational effort of RPBR on the 5-parity problem, but the MBPR style of response actually solves the problem 100% of the time, compared with the 96% shown here for RPBR. This large improvement for MBPR may be due to either or both of the differences between MBPR and RPBR: the addition of memory, and the memory-based response. In the case where memory is added (5-parity-RPBR-M), 100% of the runs found a perfect solution, requiring only 48% of the computational effort of 5-parity RPBR. Taking the answer from memory helps, but simply adding memory accounts for more than half of the difference in computational expense.
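The computational-effort figures being compared are, presumably, the standard GP measure from Koza: the minimum over generations of the total individuals that must be processed to find a solution with 99% probability using independent runs. A sketch under that assumption (the function name and arguments are illustrative):

```python
import math

def computational_effort(success_by_gen, M, z=0.99):
    """Koza-style computational effort, assuming success_by_gen[i] is the
    cumulative fraction of runs solved by generation i and M is the
    population size. Returns min over i of M * (i + 1) * R(z), where R is
    the number of independent runs needed to succeed with probability z."""
    best = math.inf
    for i, p in enumerate(success_by_gen):
        if p > 0:
            runs = 1 if p >= 1 else math.ceil(math.log(1 - z) / math.log(1 - p))
            best = min(best, M * (i + 1) * runs)
    return best
```

Under this measure, a condition that solves every run quickly minimizes effort both by needing only one run (R = 1) and by succeeding at an early generation.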

Clearly, the most dramatic aspect of the data in table 1 is the large computational effort and the low number of solved runs for the 5-Parity-RPBR-X problem. When the inert READ and WRITE operators are added, computational effort increases by a factor of five and solutions occur less than half the time. This is surprising. Why would adding READ and WRITE functions that just pass back their last argument cause such a problem? This question will be addressed in sections 6 and 7. The results from the parity problem indicate that the MBPR approach works well on the most classic GP benchmark.


Eric Teller
Tue Oct 29 16:58:50 EST 1996