
Genetic Algorithms Digest   Thursday, September 26 1991   Volume 5 : Issue 30

 - Send submissions to GA-List@AIC.NRL.NAVY.MIL
 - Send administrative requests to GA-List-Request@AIC.NRL.NAVY.MIL

Today's Topics:
	- Re: Journal
	- Crossover
	- Re: order operators

**********************************************************************

CALENDAR OF GA-RELATED ACTIVITIES: (with GA-List issue reference)

 First European Conference on Artificial Life (v5n10)         Dec 11-13, 1991
 Canadian AI Conference, Vancouver, (CFP 1/7)                 May 11-15, 1992
 10th National Conference on AI, San Jose, (CFP 1/15)         Jul 12-17, 1992
 ECAI 92, 10th European Conference on AI (v5n13)              Aug  3-7,  1992
 Parallel Problem Solving from Nature, Brussels, (CFP 4/15)   Sep 28-30, 1992

 (Send announcements of other activities to GA-List@aic.nrl.navy.mil)

**********************************************************************
----------------------------------------------------------------------

From: abbott@aero.org (Russell J. Abbott)
Date: Tue, 17 Sep 91 12:31:20 PDT
Subject: Re: Journal

   Regarding a possible journal, Kenneth De Jong wrote:

   >  There is general agreement that "systems" is too broad a term (e.g., 
   >  genetic systems, evolutionary systems, ...).

   I'm sorry to hear that, since "Evolutionary Systems" would be my first
   choice.  As Melanie Mitchell wrote (admittedly in support of the name
   "Evolutionary Computation"):

   >   My opinion is that if we have a journal, its focus should be on the
   >   theory and applications of computational systems that use ideas
   >   from evolution (not restricted to "standard" GAs), and on the
   >   feedback that research on such systems provides to the study of
   >   biological evolution, or evolution in general.

   I would broaden the focus description by removing the term
   "computational."  I guess a key question is: Do we really want to
   exclude the study of mechanisms inherent in real-life evolutionary
   systems?  Put another way, Melanie's focus description allows for
   feedback from computational evolutionary systems to the study of
   biological evolutionary systems.  Do we really want to exclude the
   possibility of learning from those systems as well?

   -- Russ Abbott@uniblab.aero.org

------------------------------

From: spears@AIC.NRL.Navy.Mil
Date: Wed, 18 Sep 91 11:48:53 EDT
Subject: Crossover

      There have been a few comments recently on crossover operators
      (of the uniform and "non-uniform" variety) that kindly reference
      our 1991 GA Conference paper, so this seems a good time to join
      the conversation! In response to...

   >  From: christer@cs.umu.se

   >  The results presented in table 4 implies that 1 crossover is 
   >  made approximately every 24 bits. This would be equal to a
   >  0.04-uniform crossover.

   >  1 crossover is made approximately every 35 bits, which equals
   >  a 0.03-uniform crossover.

   >  However, the only study that involves a uniform crossover
   >  (different from the 0.5-uniform crossover) I know of, "Biases in
   >  the crossover landscape" (ICGA89) by Schaffer (yet again!) et al.,
   >  implies that 0.25-uniform crossover is _worse_ than 0.5-uniform
   >  crossover, and that 8-point crossover is the "best" crossover. With
   >  the functions and parameter settings used for this study, 8-point
   >  crossover would approximately equal a 0.16-uniform crossover.

   >  Without having done any empirical (nor theoretical) studies I would
   >  be tempted to say that a uniform crossover with a low 'string
   >  cross' probability could turn out to perform very well, if it
   >  weren't for the somewhat contradictory result just mentioned.


      The reason for the apparent discrepancy is that the following two
      statements are not at all equivalent:

      1) 1 crossover occurs every x bits.
      2) Uniform crossover is applied with probability 1/x.

      When one crossover occurs every x bits, this defines a whole
      distribution of n-point crossovers, which is not at all like the
      distribution of n-point crossovers when (1/x) uniform crossover
      is applied.

      For example, if a bit string is 20 bits long, and a crossover
      point occurs every 10 bits, roughly 1/2 of the bits are exchanged.
      For .1 uniform, only a few bits will be exchanged. From a disruption,
      recombination, and explorative standpoint, the two are quite
      different.
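      The difference can be checked with a quick simulation (a sketch,
      which assumes the crossover points are placed uniformly at random
      between loci; with fixed, evenly spaced points the n-point figures
      would be somewhat higher):

```python
import random

def frac_exchanged_npoint(n_points, length, trials=20000):
    # Fraction of bits exchanged when n_points crossover points are
    # placed uniformly at random between loci; bits lying after an
    # odd number of points come from the other parent.
    total = 0
    for _ in range(trials):
        points = sorted(random.sample(range(1, length), n_points))
        swapped, seg_start, from_other = 0, 0, False
        for p in points + [length]:
            if from_other:
                swapped += p - seg_start
            seg_start, from_other = p, not from_other
        total += swapped
    return total / (trials * length)

def frac_exchanged_uniform(p_swap, length, trials=20000):
    # Fraction of bits exchanged when each bit is independently
    # swapped with probability p_swap (parameterized uniform crossover).
    total = sum(random.random() < p_swap
                for _ in range(trials * length))
    return total / (trials * length)

# "One crossover every 10 bits" on a 20-bit string (2 points) exchanges
# several times more material than 0.1-uniform crossover does:
print(frac_exchanged_npoint(2, 20))
print(frac_exchanged_uniform(0.1, 20))
```

      The means alone differ by roughly a factor of three here, and the
      n-point version exchanges contiguous segments rather than scattered
      bits, so the two disruption profiles differ even more than the
      means suggest.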

      It also is not strange that 8-point crossover yielded better results
      than .25 or .5 uniform. For any arbitrary problem and population
      size, some level of disruption will be the "best". If the disruption
      is too small, exploration suffers. If the disruption is too large,
      exploitation suffers. .25 uniform is less disruptive than 8-pt
      crossover, which is less disruptive than .5 uniform. It is quite
      possible that .5 uniform is TOO disruptive, while .25 uniform is
      not disruptive enough.
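      For the uniform family, this ordering can be made precise with a
      simple survival computation (a sketch, assuming each bit is swapped
      independently with probability P0 and both offspring are kept): an
      order-k schema survives crossover exactly when all k of its
      defining bits land together in one child.

```python
def uniform_survival(p0, k):
    # An order-k schema survives p0-uniform crossover when all k
    # defining bits are inherited together by one of the two children.
    return (1 - p0) ** k + p0 ** k

# Disruption peaks at p0 = 0.5 and falls off symmetrically around it:
for p0 in (0.1, 0.25, 0.5):
    print(p0, uniform_survival(p0, 3))
```

      By this measure .25 uniform indeed sits between .1 uniform and .5
      uniform in disruptiveness; where an n-point operator falls depends
      on defining length as well as order, which is how 8-point crossover
      can land between .25 and .5 uniform.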

   >  From: jrv@sdimax2.mitre.org

   >  This suggests that when use of this crossover operator is documented,
   >  a parameter should be stated: the fraction (say, P_p) of elements
   >  inheriting absolute position.

   >  This parameter corresponds to the parameter P_0 in the parameterized
   >  uniform crossover operator described by Bill Spears and Ken De Jong
   >  [2].  Note however that only the range 0 < P_0 <= .5 is interesting due
   >  to symmetry, whereas the full range 0 < P_p < 1 is of interest --
   >  P_p < .5 emphasizing order and P_p > .5 emphasizing position.

      Yes, it would be interesting to see a disruption theory for such
      operators (perhaps parameterized by P_p). The 1989 paper by
      Whitley, Starkweather, and Fuquay introduces a low disruption
      TSP operator called "edge recombination". It is interesting to note
      that GENITOR is run with relatively large populations when this
      operator is used, confirming our ideas that large populations are
      required with low disruption crossover (while small populations are
      necessary for high disruption crossover). In fact, I would expect
      Mark Lidd (see earlier issues of this digest) to have similar
      results with standard n-point (and uniform) crossover on TSP
      problems (indicating perhaps that all these special TSP operators
      are really not necessary???).
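      For readers without the 1989 paper at hand, edge recombination can
      be sketched roughly as follows (an illustrative implementation of
      the published idea; tie-breaking and dead-end details vary across
      descriptions):

```python
import random

def edge_recombination(p1, p2):
    # Build the edge table: each city's neighbors in either parent,
    # treating both parent tours as cycles.
    def edges(tour):
        n = len(tour)
        table = {}
        for i, city in enumerate(tour):
            table.setdefault(city, set()).update(
                {tour[i - 1], tour[(i + 1) % n]})
        return table

    table = edges(p1)
    for city, nbrs in edges(p2).items():
        table[city].update(nbrs)

    current = p1[0]
    child = [current]
    while len(child) < len(p1):
        for nbrs in table.values():  # the current city is now used up
            nbrs.discard(current)
        if table[current]:
            # Step to the neighbor with the fewest remaining edges
            # (ties broken at random), to avoid stranding cities.
            current = min(table[current],
                          key=lambda c: (len(table[c]), random.random()))
        else:
            # Dead end: fall back to a random unvisited city.
            current = random.choice([c for c in p1 if c not in child])
        child.append(current)
    return child
```

      Because the child takes nearly every edge from one of its parents,
      the operator is far less disruptive of adjacency information than
      n-point or uniform crossover on a permutation encoding.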

      Bill Spears@aic.nrl.navy.mil

------------------------------

From: Gilbert Syswerda <syswerda@BBN.COM>
Date: Wed, 25 Sep 91 22:05:31 EDT
Subject: Re:  order operators

   >From: starkwea@CS.ColoState.EDU (tim starkweather)
   >
   >In reply to comments made in:
   > 
   >> From: jrv@sdimax2.mitre.org
   > 
   >> I was not able to attend ICGA91, but I'm now working my way through the
   >> proceedings.  Starkweather et al.  describe two sequencing operators in
   >> [1], both credited to Syswerda [2], which appear to me to be
   >> equivalent.
   > 
   >> In each case, one group of elements inherit their absolute positions
   >> from one parent and the rest inherit their relative positions from the
   >> other parent.  In Order Crossover #2, the cross points indicate the
   >> elements inheriting relative positions.  In Position Based Crossover,
   >> the cross points indicate the elements inheriting absolute positions.
   > 
   >  After carefully reviewing the operators (along with D. Whitley
   >and K. Mathias), we agree that what are
   >presented as Order Crossover #2 and Position-based Crossover in [1]
   >are basically equivalent, if the number of crossover points averages
   >half the length of the string.  Furthermore, we found there is a
   >simple transformation which shows both of these operators will
   >yield identical results (assuming complementary crossover points are used).
   >The following example transforms the "order crossover #2"
   >to the "position-based" operator.
   >Consider the following strings:
   > 
   >      P1:   a b c d e f g h i j
   >      P2:   e i b d f a j g c h
   >X-points:     * *   *     *
   > 
   >Assume that P1 is selected.  The order "b c e h" is imposed on P2.
   > 
   >Thus,       e i b d f a j g c h   (Phase 1)
   >            -   -           - -
   >becomes
   >            b _ c _ _ _ _ _ e h
   >
   >and the remaining elements are copied directly from P2:
   >            b i c d f a j g e h
   >
   >To effect the transformation with the "position-based" operator,
   >complement the positions in (Phase 1) above for the crossover sites:
   >      P1:   a b c d e f g h i j
   >      P2:   e i b d f a j g c h
   >X-points:     *   * * * * *
   > 
   >Crossover sites are directly copied from P2:
   >            _ i _ d f a j g _ _
   >The relative order of the remaining elements is taken from P1:
   >            b i c d f a j g e h
   >
   >This yields the same offspring as above since the elements are directly
   >copied from these positions in both cases and the remaining elements
   >are ordered according to the relative order of the other parent in
   >both cases.   The operators are identical--not because they produce the
   >same results given the same crossover points, but because given
   >sets of crossover points that are complementary in (Phase 1), they
   >are identical.   Therefore, if 50% of the points are selected for
   >recombination, the operators are the same in expectation.

   This is a nice analysis, but it has a problem: it does not work for both
   children. Taking the view that C1 is P1 modified by P2, and vice versa for
   C2, we get:


   Order-based:

	 P1:   a b c d e f g h i j
	 P2:   e i b d f a j g c h
     X-mask:     * *   *     *
       C1_o:   a i c d e b f h g j
       C2_o:   b i c d f a j g e h


   Place-based:

	 P1:   a b c d e f g h i j
	 P2:   e i b d f a j g c h
     X-mask:     *   * * * * *      (from transformation)
       C1_p:   b i c d f a j g e h
       C2_p:   i b a d e f g h j c

   Note that C2_o is the same as C1_p, but C1_o is very different from C2_p.
   Using the transformation, if C1_o is to be generated by the place-based
   operator, the required mask is * _ * * * _ _ * _ *. This has the same
   probability of being generated as the others, but place-based crossover
   cannot generate both C1_o and C2_o with the same mask. If one keeps both
   children, as I do in [2], the resulting behavior of the two operators will
   be different.
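   Both claims are easy to verify mechanically. The following sketch
   implements the two operators as described above (the function names are
   mine, and positions index the mask columns zero-based) and reproduces
   all four children:

```python
def order_based(parent, other, positions):
    # Order Crossover #2: the elements of `other` at the masked
    # positions impose their relative order on `parent`.
    selected = [other[i] for i in positions]
    child = list(parent)
    slots = [i for i, x in enumerate(parent) if x in selected]
    for slot, elem in zip(slots, selected):
        child[slot] = elem
    return child

def position_based(parent, other, positions):
    # Position-based Crossover: elements of `other` at the masked
    # positions are copied directly; the remaining positions are
    # filled in the relative order of `parent`.
    copied = {other[i] for i in positions}
    child = [None] * len(parent)
    for i in positions:
        child[i] = other[i]
    fill = (x for x in parent if x not in copied)
    return [next(fill) if c is None else c for c in child]

P1 = list("abcdefghij")
P2 = list("eibdfajgch")
mask_o = [1, 2, 4, 7]        # the order-based X-mask columns above
mask_p = [1, 3, 4, 5, 6, 7]  # complement of the (Phase 1) positions

print("".join(order_based(P1, P2, mask_o)))     # aicdebfhgj  (C1_o)
print("".join(order_based(P2, P1, mask_o)))     # bicdfajgeh  (C2_o)
print("".join(position_based(P1, P2, mask_p)))  # bicdfajgeh  (C1_p)
print("".join(position_based(P2, P1, mask_p)))  # ibadefghjc  (C2_p)
```

   C2_o and C1_p coincide, while no single mask gives the place-based
   operator both C1_o and C2_o, just as argued above.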

   Whether only one child or both children should be kept is an interesting
   question. Note that if masks are generated with a probability other than
   .5, and only one child is kept, then P1 x P2 is not equivalent to P2 x P1.
   This is because parents generally have a higher than average fitness, and
   after P1 is chosen, the population P2 is chosen from has a lowered average
   fitness, resulting in P1 having a higher average fitness than P2. Since the
   mask is biased, the children will also have different average fitnesses.
   This can be fixed by randomly selecting which child to keep, or by allowing
   duplicates in the population and allowing P1 to equal P2 (or keeping both
   children).

   A minor point: the term "crossover point" already has a clearly
   defined meaning in the case of 1, 2, and n-point crossover, and depicts a
   place between loci. It may be better to use another term, such as "mask,"
   to indicate crossover operators that apply to loci instead of the breaks
   between them.

   >
   >> In the experiments reported in [1], the two operators gave different
   >> results.  The only explanation I can think of is that the fraction of
   >> elements inheriting absolute positions was different.  The descriptions
   >> quoted above don't give any guidance on this point ("positions are
   >> chosen randomly" vs.  "several random locations")
   > 
   >This is indeed the reason why the 2 operators produced different results
   >on the problems in [1].  The number of crossover points used for each
   >operator was on average less than half the length of the string.
   >Thus, the "position-based" operator was really emphasizing relative
   >order, since the number of positions inheriting relative order information
   >from one parent was greater than the number of points which were directly
   >copied from the other parent.  The reverse is true for the operator
   >described as "order crossover #2".  In the scheduling application
   >described in our paper, where order information was deemed the crucial
   >information, this explains why the "position-based" operator actually
   >performed a little better.

   > 
   >> [1] T. Starkweather, S. McDaniel, K. Mathias, D. Whitley, and C. Whitley,
   >> "A Comparison of Genetic Sequencing Operators,", ICGA91, pp. 69-76
   > 
   >> [2] G.  Syswerda, "Schedule Optimization Using Genetic Algorithms," in
   >> Handbook of Genetic Algorithms, L.  Davis, ed., 1990.

------------------------------
End of Genetic Algorithms Digest
******************************
