Newsgroups: comp.ai.genetic
Path: cantaloupe.srv.cs.cmu.edu!europa.chnt.gtegsc.com!howland.reston.ans.net!math.ohio-state.edu!scipio.cyberstore.ca!skypoint.com!umn.edu!news
From: "James Albert Larson" <larso171@maroon.tc.umn.edu>
Subject: Re: Genetic Algorithms waste of time
To: Jeff_Mandel@Harvard.edu
Message-ID: <33491.larso171@maroon.tc.umn.edu>
X-Minuet-Version: Minuet1.0_Beta_16
Sender: news@news.cis.umn.edu (Usenet News Administration)
Nntp-Posting-Host: dialup-4-65.gw.umn.edu
X-Popmail-Charset: English
Organization: University of Minnesota, Twin Cities
Date: Fri, 12 May 1995 11:07:44 GMT
Lines: 65

On Tue, 09 May 1995 21:46:54 -0400, 
Jeff E Mandel  <Jeff_Mandel@Harvard.edu> wrote:

>When searching a
>continuous parameter space for several variables, any two candidate
>solutions can be seen to be different by a distance along a vector in this
>space. Thus, if the second improves on the first, it might be reasonable
>to ask if a further extension along that vector is better still; if not,
>perhaps a better vector can be found. I've not seen anyone explore this
>direction. Comments?
>
Zbigniew Michalewicz investigated this concept in his book, Genetic
Algorithms + Data Structures = Evolution Programs (2nd edition, 1994,
p. 156), under the name heuristic crossover.  It was a crossover (CO)
operator used in his Genocop program to minimize f(x) subject to linear
constraints, where x is a vector <x1,x2,...,xn> and f is a function
(linear, nonlinear, continuous, stairstep, just about anything).  The
xi are reals.

Given parents xa = <xa1,xa2,...,xan> and xb = <xb1,xb2,...,xbn>
then the heuristic CO produces the child

    xc = r * (xb - xa) + xb

Where r is a random number between 0 and 1, and the parent xb is not
worse than xa, i.e. f(xb) >= f(xa) for maximization problems (or
f(xb) <= f(xa) when minimizing).

So for the simple case where n is 1, where xa = <3> and xb = <7>, and
r is as in the table below, xc will be as shown.  (I've also shown the
reverse case, where xa = <7> and xb = <3>, and where xb is still the
superior solution.)
     xa  xb  r    xc
     3   7   0    7
            0.5   9
            1.0  11
   -------------------
     7   3   0    3 
            0.5   1
            1.0   -1

So the table shows that when r is 0, xc = xb, the superior vector, and
that for any other value 0 < r <= 1, xb lies between xa and xc; i.e.,
xc is an extrapolation beyond xb:

         xa   xb    xc
         -+----+----+--    In both of these,
                           xa is the inferior parent, xb is the superior
or       xc   xb   xa      parent, and xc is the child.
        -+----+-----+- 
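In case it helps, here's a minimal sketch of the operator in Python.
This is my own illustration, not Genocop's actual code, and I assume a
minimization setting, so the "not worse" parent is the one with the
smaller f:

```python
import random

def heuristic_crossover(xa, xb, f, r=None):
    """Heuristic crossover: extrapolate from the inferior parent xa
    through the superior parent xb.  Returns xc = r*(xb - xa) + xb,
    with 0 <= r <= 1.  Assumes minimization: smaller f is better."""
    if f(xb) > f(xa):            # make sure xb is the not-worse parent
        xa, xb = xb, xa
    if r is None:
        r = random.random()
    return [r * (b - a) + b for a, b in zip(xa, xb)]
```

With r = 0 this reproduces xb exactly, and larger r pushes the child
further out along the xa-to-xb direction, matching the table above.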
    
He found that Genocop gave considerably better solutions with this
operator than without it, and he gives as an example a very
busy-looking nonlinear function of 4 variables x1,x2,x3,x4.  Without
it, after 10000 generations, a typical output is f = 0.000684; with
it, f = 0.000001.  (The optimum of the function is 0.)  Its major
responsibilities are (1) fine local tuning, and (2) search in a
promising direction.

I haven't seen this concept (which I call extrapolating crossover)
anywhere else either.  It seems like it's always worth trying out,
along with several other genetic operators.  (In Genocop there are,
off the top of my head, 3 crossover operators and 3 mutation
operators.  By default, each time a genetic operator is needed, a
selector randomly picks one of the 6 operators with equal
probability.)
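That default selector is trivial to sketch.  The operator names below
are just placeholders for illustration; I'm not claiming they're
Genocop's exact six:

```python
import random

# Hypothetical operator pool; names are illustrative placeholders,
# not necessarily Genocop's actual operators.
OPERATORS = ["crossover_1", "crossover_2", "heuristic_crossover",
             "mutation_1", "mutation_2", "mutation_3"]

def pick_operator(rng=random):
    """Each time a genetic operator is needed, choose one of the
    pool uniformly at random (equal probability for each)."""
    return rng.choice(OPERATORS)
```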

Jim Larson

