
Genetic Algorithms Digest    Tuesday, 14 August 1990    Volume 4 : Issue 14

 - Send submissions to GA-List@AIC.NRL.NAVY.MIL
 - Send administrative requests to GA-List-Request@AIC.NRL.NAVY.MIL

Today's Topics:
	- Miscellaneous... (From moderator)
	- Re: GA's and Concept Learning (2 Messages)
	- Re: Reflective GAs
	- Re: more warnings on random number generators (2 Messages)
	- Applying GAs to Process Control

******************************************************************************

CALENDAR OF GA-RELATED ACTIVITIES: (with GA-List issue reference)

Conference on Simulation of Adaptive Behavior, Paris (v3n21)  Sep 24-28, 1990
Workshop Parallel Prob Solving from Nature, W Germany (v4n5)  Oct 1-3,   1990
2nd Intl Conf on Tools for AI, Washington, DC (v4n6)          Nov 6-9,   1990
4th Intl. Conference on Genetic Algorithms (v4n9)             Jul 14-17, 1991

(Send announcements of other activities to GA-List@aic.nrl.navy.mil)

******************************************************************************

Date: Tue, 14 Aug 90
From: Alan C. Schultz (GA-List Moderator)
Subject: Miscellaneous...

>    Reminder: If you participated at the workshop, please send a brief
>    abstract of your presentation.  I will put these together and send
>    out several special issues of GA-List with the abstracts.
     
     The response has been fair so far.  I hate to grovel ;-)

    --Alan

-----------------------------------

Date: Tue, 7 Aug 90 11:53:13 -0400
From: havener@Kodak.COM
Subject: Re: GA's and Concept Learning

  Mr. Spears wrote:

 > 	I disagree only in the conclusion that classifier
 > 	systems (and GAs) are not good for concept learning.
 > 	Certainly, our work indicates otherwise, and Riyaz
 > 	Sikora has demonstrated feasibility as well. GAs
 > 	rarely perform well on simple problems, in comparison
 > 	with traditional techniques. However, in complex
 > 	multimodal spaces the global search characteristics
 > 	of GAs often win.

   I think there may be a role for GA/CS systems in "concept learning".
   I am not a theorist (I don't have the training for it), but we are
   very pragmatic and have real-world problems with which to test
   technologies.

   We have had some good success with a commercial GA/Stat package called
   BEAGLE by Richard Forsyth of Warm Boot Ltd. This package takes in a
   target expression like (V1 > 45) and will analyze a historical database
   to develop a set of rules to explain when the target expression is
   true or false. The package uses Chi-square statistics to evaluate the
   fitness of the current generation of hypotheses, and uses GA concepts
   to breed-up the next generation. 
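   To make the mechanism concrete, here is a toy sketch in Python of the
   breed-by-chi-square idea (an illustration of the general scheme only,
   not BEAGLE's actual code; the data, rule form, and all parameters are
   invented):

```python
import random

# Toy "historical database": 200 records of (V1, V2); the target
# expression, as in the example above, is V1 > 45.
random.seed(1)
DATA = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(200)]
TARGET = [v1 > 45 for v1, v2 in DATA]

def chi_square(rule):
    """Chi-square of the 2x2 table (rule fires?) x (target true?)."""
    a = b = c = d = 0
    for rec, t in zip(DATA, TARGET):
        fires = rule(rec)
        if fires and t:
            a += 1
        elif fires:
            b += 1
        elif t:
            c += 1
        else:
            d += 1
    n = a + b + c + d
    denom = (a + b) * (c + d) * (a + c) * (b + d)
    return 0.0 if denom == 0 else n * (a * d - b * c) ** 2 / denom

def make_rule(theta):          # hypothesis of the form "V1 > theta"
    return lambda rec: rec[0] > theta

# Population of candidate thresholds, bred by Gaussian mutation.
pop = [random.uniform(0, 100) for _ in range(20)]
for generation in range(30):
    pop.sort(key=lambda th: chi_square(make_rule(th)), reverse=True)
    survivors = pop[:10]                        # fittest half survives
    children = [th + random.gauss(0, 5) for th in survivors]
    pop = survivors + children                  # breed up the next generation

best = pop[0]                                   # best evaluated threshold
```

   Selection keeps the thresholds whose firing pattern associates most
   strongly with the target, and mutation breeds variants; over a few
   generations the best rule tends toward V1 > 45.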

   In an attempt to learn more about Hall's YACL, and the work by Sikora,
   we have provided data to them for a problem that has been addressed via
   Beagle and via non-GA methods such as neural nets. 

   Given some time, we should be able to come up with a pragmatic view of
   the relative usefulness of these approaches (as they stand today)
   to address real-world problems. Though again, I do not have the
   depth of training to offer anything useful in the way of a
   theoretical comparison.

   I hope that these discussions are proper for this forum.
   My focus is on making use of the technology today, whereas the
   discussions in this forum tend to come from the top researchers whose
   focus is advancing the theory. Their discussions are very much of
   interest to me, but my pragmatic efforts may not be of much interest
   to the majority of the distribution list. If this is not the right
   forum, I will not be offended if you do not include this note in the
   GA-list.
   In either case, I would like to remain on the list so that I may
   learn from those leaders.

   Thanks,
   John Havener

--------------------------------------

Date: Sun, 5 Aug 90 15:13:10 PDT
From: Tom Dietterich <tgd@turing.CS.ORST.EDU>
Subject: RE: GA's and concept learning

    It seems to me that the discussion of GA's and concept learning is
    slightly confused.

    First of all, any concept learning program must have a bias, and any
    bias will make some concepts learnable and others non-learnable.
    There is no "universal" bias that can learn everything (see
    Dietterich, 1989).  Hence, bias must be chosen on a problem-specific
    basis.  Most biases impose some kind of preference order over a space
    of possible hypotheses, and most learning algorithms then attempt to
    find the hypothesis that is consistent with the training data and
    most-preferred according to the bias.

    GA's constitute a family of optimization algorithms that can provide a
    way of IMPLEMENTING a bias (indeed, of implementing ANY bias).  They
    do not have any particular bias themselves.  Hence, it is meaningless
    to ask whether there are concepts that GA's can learn and other
    algorithms cannot.  Before such a question can be answered, we must
    first specify the bias (or fitness function) to be employed by the GA.
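    The point that a bias can be implemented as a fitness function can be
    made concrete with a small sketch (a hypothetical encoding; the DNF
    representation and the 0.01-per-literal penalty are invented for
    illustration):

```python
# A bias made explicit as a GA fitness function: prefer hypotheses
# consistent with the training data and, among those, the shortest
# (an MDL-style "shortest DNF" preference).

def fitness(hypothesis, examples):
    """hypothesis: a DNF given as a list of conjunctive terms, each a
    frozenset of required feature values; examples: (features, label)."""
    def predict(x):
        return any(term <= x for term in hypothesis)   # any term matches
    accuracy = sum(predict(x) == y for x, y in examples) / len(examples)
    penalty = 0.01 * sum(len(term) for term in hypothesis)  # length bias
    return accuracy - penalty

examples = [(frozenset({"a=1", "b=0"}), True),
            (frozenset({"a=0", "b=0"}), False),
            (frozenset({"a=1", "b=1"}), True)]

h_short = [frozenset({"a=1"})]                              # 1 literal
h_long = [frozenset({"a=1", "b=0"}),                        # 4 literals
          frozenset({"a=1", "b=1"})]
```

    Both hypotheses are consistent with the data, so the length penalty
    alone decides which is most-preferred.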

    There are some biases (e.g., maximally specific boolean conjunction)
    for which extremely efficient algorithms are known, so it is unlikely
    that GA's can beat them.  There are other biases (e.g., shortest DNF,
    3-layer neural networks), for which most algorithms are quite slow,
    and here, GA's need to be explored to see if they can provide a faster
    implementation. 

    [I myself doubt whether GA's can beat other algorithms in any specific
    case, because GA's must maintain a population of hypotheses, whereas
    most other algorithms incrementally adjust a single hypothesis.  GA's
    must repeatedly evaluate each member of the population against the
    training data, while other algorithms typically need only evaluate a
    single hypothesis---and they may be able to perform that evaluation
    incrementally.  Other algorithms often subdivide the training data and
    test hypothesis modifications on subsets, which is also quite
    efficient.  Maybe some of these ideas can be transferred to GA's?]

    Dietterich, T. G. (1989). Limits of inductive learning.  In
    Proceedings of the Sixth International Conference on Machine
    Learning (pp. 124-128). Ithaca, NY.  San Mateo, CA: Morgan Kaufmann.

    Thomas G. Dietterich
    Department of Computer Science
    Computer Science Bldg, Room 100
    Oregon State University
    Corvallis, OR 97331-3902

------------------------------------

Date:     Tue, 31 Jul 90 11:33:43 EDT
From: Gilbert Syswerda <syswerda@BBN.COM>
Subject:  Re: Reflective GAs

  In Vol 4, Issue 13, John Grefenstette writes:

  >  Consider the following algorithm: maintain two separate populations of
  >  size N, call them A and B.  At each generation, evaluate all structures
  >  in both populations, apply an ordinary GA to population A, then form a
  >  new population B by taking the binary complement of each element in A.
  >  Keep track of the best structure to appear in either population.
     .
     .
     .
  >  There are many possible variations on this theme.  For example, it may
  >  be desirable to allow recombination across populations A and B for
  >  "partially deceptive" problems.


  It seems that what you are introducing is a mutation operator that has a
  100% chance of flipping a bit. This operator will create the complement of
  a parent, which for a fully deceived GA working on a fully deceptive
  problem will lead to the right answer. As you point out, this will not work
  for partially deceptive problems. It will also not work for problems
  that have independently deceptive subproblems; taking the complement
  may fix one subproblem while undoing another.

  For deceptive subproblems of the kind where only one point in the
  subspace is the maximum, the second best sits at the opposite end of
  the subspace, and everything else leads toward the second best, the
  only way a GA will solve the problem is to guess the answer to that
  subproblem and hang on to it, so that it can be combined with good
  guesses to the other subproblems, hopefully arriving at the global max.

  Guessing is effectively accomplished by using a high mutation rate. Since
  we in general do not want the complement of all the bits and since in
  general we do not know which bits we want the complement of, we might want
  to flip an average of 50% of the bits, chosen randomly (uniform mutation!).

  Preserving our lucky good guesses while not clobbering the current
  population with a multitude of bad guesses is effectively done by using a
  steady state GA.
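  The two operators being contrasted are easy to state in code (a minimal
  sketch; the bit-list representation is just for illustration):

```python
import random

def complement(bits):
    """The operator under discussion: flip every bit (mutation rate 1.0),
    producing the parent's binary complement."""
    return [1 - b for b in bits]

def uniform_mutation(bits, rng):
    """Flip each bit independently with probability 0.5, so an average of
    half the bits change - effectively a random guess in the subspace."""
    return [b if rng.random() < 0.5 else 1 - b for b in bits]

rng = random.Random(42)
parent = [0] * 16
opposite = complement(parent)        # deterministic: all ones
child = uniform_mutation(parent, rng)
```

  The complement is a single deterministic jump; uniform mutation samples
  the space of possible "guesses" instead.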

--------------------------------------

Date: Wed, 8 Aug 90 18:31:02 PDT
From: schraudo%cs@ucsd.edu (Nici Schraudolph)
Subject: more warnings on random number generators

  I am afraid this posting has nothing to do with Genetic Algorithms, although
  it may be useful to those of us who write their own simulators... to all
  others, my apologies.

  In v4n13 Raymond Greenwell (matrng@hofstra.bitnet) writes:

  > Users of random number generators may also be interested to know that
  > a recent survey of random number generators (Stuart L. Anderson, "Random
  > number generators on vector supercomputers and other advanced architectures,"
  > SIAM Review (32:2), June 1990, p. 221-251) found that, for many purposes,
  > the simple generator x(n+1)=x(n)*16807 mod (2^31-1) performed exceptionally
  > well on nonvector computers.

  Genetic Algorithms typically apply random number generators to rather
  high-dimensional spaces, at least when producing the initial population.
  Congruence generators such as the one suggested above, however, are
  known to degenerate in such spaces due to the Marsaglia effect: for
  instance, all 10-tuples drawn from ANY such generator will lie in at
  most 41 hyperplanes.
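  For reference, the generator quoted above is Lehmer's multiplicative
  congruential generator with the Park-Miller "minimal standard"
  constants; a minimal sketch, with the commonly published
  10,000th-value self-check:

```python
M = 2**31 - 1          # Mersenne prime modulus
A = 16807              # the multiplier quoted from the survey

def lehmer(seed):
    """Yield the congruential sequence x(n+1) = x(n) * 16807 mod (2^31 - 1)."""
    x = seed
    while True:
        x = (x * A) % M
        yield x

g = lehmer(1)
for _ in range(9999):
    next(g)
x_10000 = next(g)      # with seed 1, this should equal 1043618065
```

  Note this is exactly the kind of single-state congruence generator
  whose tuples fall into the hyperplanes described above.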

  I suspect Anderson's positive judgement is based on low-dimensional
  test results (the qualification "on nonvector computers" might indicate
  that) and may not be applicable to high-dimensional spaces.  I know of
  only two methods for high-dimensional random vector generation:

  1) Use more state than just the previous number to generate the new value.
     This is in effect what Knuth's subtractive generator adopted by Goldberg
     (p. 334 in his book) does.  On BSD Unix systems there is an excellent
     generator called "random()" that lets you specify how much past state
     should be used.

  2) Abandon the idea of randomness altogether and use a good quasi-random
     number generator such as the Halton/van der Corput sequence.  These
     generators produce deterministic but super-uniform sequences that
     outperform pseudo-random numbers in Monte Carlo integration tasks.
     Such a sequence could be used to generate an initial population that
     covers the search space as evenly as possible.
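  A van der Corput/Halton generator of the kind mentioned in (2) takes
  only a few lines (a sketch; the prime bases and population size are
  arbitrary choices):

```python
def van_der_corput(n, base):
    """n-th element of the base-b van der Corput sequence: write n in
    base b and mirror its digits across the radix point."""
    q, scale = 0.0, 1.0 / base
    while n > 0:
        q += (n % base) * scale
        n //= base
        scale /= base
    return q

def halton(n, dims, primes=(2, 3, 5, 7, 11, 13)):
    """n-th point of the Halton sequence: one van der Corput stream per
    dimension, using pairwise coprime bases."""
    return tuple(van_der_corput(n, primes[d]) for d in range(dims))

# A super-uniform initial population of 20 points over the unit square:
population = [halton(i, 2) for i in range(1, 21)]
```

  Each new point lands in the largest gap left by its predecessors, which
  is exactly the even-coverage property wanted for an initial population.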

  Finally, I'll risk belaboring the obvious by warning never to use
  modulo division to map the numbers produced by a congruence generator
  onto a smaller integer range - the result is a highly predictable
  short-period sequence.  Using this method I once managed to write a
  gambling program that always alternated even and odd dice rolls!  The
  proper treatment is of course to linearly map the numbers into the
  desired range.
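  The even/odd effect is easy to reproduce (a sketch using the classic
  ANSI-C congruence parameters, chosen here because their power-of-two
  modulus gives short-period low-order bits):

```python
def lcg(seed):
    """Mixed congruence generator with power-of-two modulus; the parity
    of x flips on every step (odd multiplier, odd increment mod 2)."""
    x = seed
    while True:
        x = (1103515245 * x + 12345) % 2**31
        yield x

g = lcg(1)
rolls_mod = [next(g) % 6 + 1 for _ in range(12)]    # the broken mapping
# Since x alternates parity, these "dice rolls" alternate even/odd -
# exactly the predictable gambling program described above.

g = lcg(1)
rolls_lin = [next(g) * 6 // 2**31 + 1 for _ in range(12)]  # linear mapping
```

  The linear mapping draws on the high-order bits instead of the
  low-order ones, avoiding the alternation.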

  --
  Nicol N. Schraudolph, C-014                nici%cs@ucsd.edu
  University of California, San Diego        nici%cs@ucsd.bitnet
  La Jolla, CA 92093-0114                    ...!ucsd!cs!nici

--------------------------------------

Date: Fri, 27 Jul 90 14:00:21 PDT
From: stuart@ads.com (Stuart Crawford)
Subject: Re: pseudo-random number generators

    Readers of the recent posting about the dangers of some pseudo-random
    number generators may be interested in the following excellent generator. 

    Stuart Crawford
    Advanced Decision Systems

    ===================

    /*
    This Random Number Generator is based on the algorithm in a FORTRAN
    version published by George Marsaglia and Arif Zaman, Florida State
    University; ref.: see original comments below.

    At the fhw (Fachhochschule Wiesbaden, W.Germany), Dept. of Computer
    Science, we have written sources in further languages (C, Modula-2
    Turbo-Pascal(3.0, 5.0), Basic and Ada) to get exactly the same test
    results compared with the original FORTRAN version.
                                                          April 1989

                                      Karl-L. Noell <NOELL@DWIFH1.BITNET>
                                 and  Helmut  Weber <WEBER@DWIFH1.BITNET>
    */

[ED's NOTE:  The lengthy code listing has been omitted.  If you
             are interested, please contact the author. --Alan]

---------------------------------------

Date: Fri, 20 Jul 90 11:31:30 +0200
From: j_nordvik@cen.jrc.it (NORDVIK Jean-Pierre)
Subject: Applying GAs to Process Control

  I am currently working as a researcher at the Joint Research Center of
  the Commission of the European Communities at Ispra in Italy. My
  interest in GAs lies in their application to Process Control.

  You will find below a short description of the research programme for
  which I am responsible.

  We are currently investigating the potential use of Genetic Algorithms in
  the area of Process Control. This activity is conducted as part of a larger
  study on Biological Adaptive Systems for Process Control which looks at
  Genetic Algorithms, Parallel Distributed Processing, and Immune Networks.
  The research will consist of a comparative study of the above adaptive
  systems for the control of dynamic processes. Case studies of various levels
  of complexity will be explored using all three approaches. The performance
  of the resulting systems will be evaluated and compared. The result of the
  study will be: the establishment of a preliminary benchmark for the
  evaluation of these and other approaches, an advanced classification of the
  situations for which each approach (or simply the basic idea underlying the
  approach) seems to be well fitted, and some insights on what could be an
  ideal synergism between "biological proposals" and engineering requirements.

  My address in Italy is as follows:

        NORDVIK Jean-Pierre
        JRC Ispra, Ed 21        
        I-21020 Ispra (Va)
        Italy           
        Tel.:   00-39-332-789111 Ext.: 5021     
        Fax.:   00-39-332-78.94.72
        E Mail: J_NORDVIK@CEN.JRC.IT

--------------------------------
End of Genetic Algorithms Digest
********************************

