
Genetic Algorithms Digest   Tuesday, August 13 1991   Volume 5 : Issue 23

 - Send submissions to GA-List@AIC.NRL.NAVY.MIL
 - Send administrative requests to GA-List-Request@AIC.NRL.NAVY.MIL

Today's Topics:
	- Re: GA's and Hill-Climbers
	- Questions on GA Robustness
	- CFP: Annals of Mathematics and Artificial Intelligence
	- CFP - Special Series in IEEE Expert

**********************************************************************

CALENDAR OF GA-RELATED ACTIVITIES: (with GA-List issue reference)

 IJCAI 91, International Joint Conference on AI, Sydney, AU   Aug 25-30, 1991
 First European Conference on Artificial Life (v5n10)         Dec 11-13, 1991
 Canadian AI Conference, Vancouver, (CFP 1/7)                 May 11-15, 1992
 10th National Conference on AI, San Jose, (CFP 1/15)         Jul 12-17, 1992
 ECAI 92, 10th European Conference on AI (v5n13)              Aug  3-7,  1992
 Parallel Problem Solving from Nature, Brussels, (CFP 4/15)   Sep 28-30, 1992

 (Send announcements of other activities to GA-List@aic.nrl.navy.mil)

**********************************************************************
----------------------------------------------------------------------

From: ackley@chatham (David H Ackley)
Date: Tue, 13 Aug 91 05:32:27 -0400
Subject: Re: GA's and Hill-Climbers

   In GA-List v5n22, building upon papers by Stewart Wilson and Dave
   Davis, John Grefenstette defines and explores a function that, he
   states, "distinguishes GAs from every bit-wise hill-climber."
   Although I believe GA's are distinguishable from HC's, and I believe
   that constructing paradigmatic function spaces is an important way to
   gain insight into optimization algorithms, it is important to
   recognize that functions such as the one John proposes actually
   distinguish GA's only from a subset of the change-one-bit-at-a-time
   hillclimbing algorithms.

   In terms of the functions I studied in [1], John's function is a
   "trap" variant of the "plateaus" function.  Bits are grouped, groups
   are independent of each other, and there is a per-group bonus for
   achieving a special state.  In Plateaus, the non-special states in
   each group all have value zero, producing a set of "telephone pole on
   a plain" problems, whereas in John's function they slope away from the
   bonus state, producing a set of nasty "telephone pole in a ditch" Trap
   problems.
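For concreteness, the two group-scoring schemes can be sketched as follows. This is a hypothetical reconstruction for illustration only: the group size, the bonus value, and the exact slope of the trap are assumptions, not John's published definition.

```python
def plateaus_group(group):
    """Plateaus: non-special states are all worth zero, so each group
    is a 'telephone pole on a plain' -- flat except at all-ones."""
    bonus = 10  # assumed per-group bonus for the special all-ones state
    return bonus if all(group) else 0

def trap_group(group):
    """Trap variant: non-special states slope away from the bonus state
    (here, score = number of zeros), so each group is a 'telephone pole
    in a ditch' -- the gradient leads greedy climbers toward all-zeros."""
    bonus = 10  # assumed bonus, as above
    return bonus if all(group) else group.count(0)

def total_fitness(bits, group_fn, group_size=5):
    # Groups are independent; the total is just the sum of group scores.
    return sum(group_fn(bits[i:i + group_size])
               for i in range(0, len(bits), group_size))
```

With ten 5-bit groups this gives a 50-bit function whose global optimum is 100, matching the scale of the results reported below.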

   He tests two hillclimbers on this function, trying both steepest
   ascent and next ascent [1], and finds --- as expected --- abysmal
   performance for both of them.  Their absolute determination to go
   uphill makes it impossible for them to find the poles in the ditches
   unless they are started very close to the ditch bottoms.

   In [1], I observed similar results on the Plateaus function: In terms
   of the number of evaluations required to find the optimum, both
   steepest- and next-ascent hillclimbing were terrible.  The clear
   winner on Plateaus was stochastic hillclimbing (SHC).  After seeing
   John's note today, SHC seemed to be the obvious thing to try.

   Stochastic hillclimbing is just like regular hillclimbing except that
   instead of accepting a bit-flip move if and only if it leads uphill,
   it accepts a bit-flip move (uphill or down) with a probability that
   depends on the change in height.  SHC has only one parameter: T ---
   the "temperature" --- which governs how random the algorithm is.  I
   found that T=1.6 worked pretty well for John's 50-bit function (all
   data measured over 100 runs):

	      SHC (T=1.6)
	      Best (count)	Worst (count)	Mean	Std	Evals per run
	      100.0 (10)	67.0 (1)	88.8	8.95	2100

	      GENESIS (pop=100, xo=0.6, mut=0.001)     (Grefenstette's data)
	      Best (count)	Worst (count)	Mean	Std	Evals per run
	      100.0 (16)	78.8 (1)	91.8	5.57	2100

   This comparison shows that GENESIS, off-the-shelf, edges out SHC, even
   with problem-dependent choice of T.  However, it also shows that SHC
   can mount a very strong challenge to GENESIS on this function.  John's
   function, though an improvement on Plateaus for foiling deterministic
   hillclimbers, is NOT immune to hillclimbers in general.
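The SHC procedure described above can be sketched as follows. The fitness function (a 5-bit trap with a bonus of 10 for all-ones) and the logistic form of the acceptance probability are assumptions made for illustration; [1] gives the exact rule.

```python
import math
import random

def trap_fitness(bits, group_size=5, bonus=10):
    # Independent groups: all-ones earns the bonus; other states slope
    # away from the bonus state (here, score = number of zeros).
    return sum(bonus if all(g) else g.count(0)
               for g in (bits[i:i + group_size]
                         for i in range(0, len(bits), group_size)))

def shc(n_bits=50, T=1.6, max_evals=2100, seed=None):
    """Stochastic hillclimbing: propose a single bit flip, then accept
    it (uphill OR downhill) with a probability that depends on the
    change in height, governed by the temperature T."""
    rng = random.Random(seed)
    bits = [rng.randint(0, 1) for _ in range(n_bits)]
    f = trap_fitness(bits)
    best = f
    for _ in range(max_evals):
        i = rng.randrange(n_bits)
        bits[i] ^= 1
        f_new = trap_fitness(bits)
        # Assumed logistic acceptance rule: uphill moves are likely,
        # downhill moves remain possible, and T sets how random it is.
        if rng.random() < 1.0 / (1.0 + math.exp((f - f_new) / T)):
            f = f_new
            best = max(best, f)
        else:
            bits[i] ^= 1  # reject: undo the flip
    return best
```

The taller-pole variant discussed next corresponds to the same sketch run with bonus=50 and T=4.2.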

   A small variation on John's function sharpens this point.  Suppose
   that the bonus for all-ones groups was not 10, but 50.  In the
   abstract, one might imagine such a change would not make much
   difference: So the poles in the ditches are taller --- they're just as
   rare as before, and they're still all in ditches.  But, with this
   change, SHC can be run hotter than in John's function, and the
   performance difference is dramatic:

	      SHC (T=4.2)		[group bonus 50, data normalized to 100]
	      Best (count)	Worst (count)	Mean	Std	Evals per run
	      100.0 (97)	91.0 (1)	99.7	10.02	2100

   SHC found the optimum in 97 of 100 runs!  This function is EASY for
   stochastic hillclimbing.  Why?  Because on the one hand, the slope
   toward the misleading optimum is gradual enough compared to the
   temperature that SHC moves nearly randomly up and down the ditch
   walls, allowing it to stumble across the poles.  And on the other
   hand, the poles are tall enough compared to the temperature that each
   pole, once found, is very unlikely to be lost again.  These results
   indicate that 2100 function evaluations are almost always plenty for
   SHC to "lock onto" all ten poles.

   I haven't gotten GENESIS going on this variant yet.  I worry that if
   anything it will do worse than it did on John's function, as
   individuals with a few poles correct get a large fitness advantage and
   drive out diversity in the other groups.  Perhaps someone more adept
   than I with GENESIS can try it out and report back.  But in any case,
   the basic point is clear: Simple Plateaus-type functions, even with
   traps, fail to provide a qualitative distinction between GA's and HC's.

   Researchers interested in pursuing that distinction should be familiar
   with the "multi-level functions" introduced in Chapter 5 of [1].
   Multilevel functions have bit groups and multiple optima within each
   group like the functions considered here, but in multilevel functions
   the groups are not independent of each other.  What makes the
   functions especially nasty is that the intra-group constraints are
   stronger than the inter-group constraints, so the slope of the land
   always favors cleaning up within groups, even if the cleaned-up group
   is wrong in terms of the inter-group constraints.  The hope was that
   only GA's, with recombination making possible large moves across the
   landscape, could bring all the correct group optima together.  What
   were the results?  Was a GA better or an HC?  The winner was a
   combination algorithm --- a genetic hillclimber.

	   -Dave

   [1] Ackley, D.H.  A connectionist machine for genetic hillclimbing.
   Kluwer Academic Publishers, Boston, 1987.  (Happily, still available;
   sadly, still too expensive --- ask your librarian for it)

------------------------------

From: Andrew J. Dymek <adymek@galaxy.afit.af.mil>
Date: Fri, 9 Aug 91 11:49:31 -0400
Subject: Questions on GA Robustness

   In his book, Adaptation in Natural and Artificial Systems,
   Prof. Holland proved the robustness of an adaptive plan called 
   script R1 (Holland, pages 124-132).  In this plan, only one
   individual is selected and has genetic operators applied
   to it per "time-step" (Holland, p.92).  The operators in 
   this plan are crossover, inversion, and mutation
   (Holland, p.92).  These three operators are necessary 
   (Holland, pages 124-132) and sufficient (Holland, p. 111)
   for robustness.

   I have looked through the conferences and have a copy of
   J. Grefenstette's GENESIS code and noticed that this adaptive
   plan is not used at all.  In most implementations, each "time-step"
   is a generation consisting of evaluation, selection, crossover,
   (sometimes) mutation, and (rarely) inversion.  Unless there is a
   "generation gap," the entire population changes in a time-step
   (DeJong's Dissertation, pages 77-78).  In his dissertation,
   DeJong does give some analytical arguments and empirical evidence
   that such an adaptive plan is effective on optimization problems,
   yet I did not see a similar proof of robustness.  In particular,
   I noticed that DeJong's plan R1 did not have an inversion operator,
   which Holland said was necessary for robustness. 

   Is there any proof that such an adaptive plan is robust?

   Is inversion not generally used because it is difficult to implement?

   Are most users of genetic algorithms not interested in robustness,
   but rather in obtaining the answer to one particular instantiation
   of a problem?  Is this why they give no justification for diverging
   from Holland's theory?

   Is the reason most implementations do not use Holland's canonical
   genetic algorithm that experimental evidence shows that adaptive
   plans similar to DeJong's give better/faster answers?  I could find
   no head-to-head comparison.

   If anyone could help clear up my confusion, I would greatly appreciate
   it.

   Andrew Dymek

------------------------------

From: liepins <liepins@utkux1.utk.edu>
Date: Wed, 7 Aug 91 21:53:18 -0400
Subject: CFP: Annals of Mathematics and Artificial Intelligence

			 Call for Papers

	  Annals of Mathematics and Artificial Intelligence

   We're almost through with the refereeing process for the first special
   issue of the Annals of Mathematics and Artificial Intelligence devoted
   to Genetic Algorithms.  That means that it's time to start thinking
   about the second issue.  First, I'd like to thank all those who refereed
   papers for this issue!  Without you, this issue would not have been 
   possible!

   Papers are being solicited for a second special issue of the Annals of
   Mathematics and Artificial Intelligence devoted to the mathematical
   foundations of genetic algorithms. Topics of interest include but are
   not limited to: 

   . constrained optimization results
   . finite sampling properties
   . dynamic properties
   . stability and fixed points
   . convergence results
   . characterization of easy and difficult problems
   . stopping rules
   . complexity analysis

   All submissions will be fully refereed.  Submission deadline is February
   7, 1992.  We hope to have the issue out by early 1993.  Please send four
   copies of all manuscripts to

			    Dr. Gunar Liepins
			    MS 6360 Bldg 6025
		      Oak Ridge National Laboratory
			      PO Box 2008
			Oak Ridge, TN 37830-6360

------------------------------

From: gref@AIC.NRL.Navy.Mil
Date: Thu, 8 Aug 91 07:23:54 EDT
Subject: CFP - Special Series in IEEE Expert

				 IEEE Expert
			      Special Series on
		  Genetic Algorithms and Their Applications


   IEEE Expert announces a Special Series on Genetic Algorithms and Their
   Applications, edited by John Grefenstette.  A Special Series is a
   collection of articles united by a theme that will run over several
   issues.  Current Special Series in IEEE Expert include object-oriented
   programming in AI, AI applications in process systems, functional
   modeling of devices, machine learning applications, and connectionist
   applications.

   For the Special Series on Genetic Algorithms and Their Applications,
   articles are sought on the following topics: characterizations of
   appropriate problems for genetic algorithms, comparisons with other
   algorithms for search and learning, implementation issues,
   representation issues, parallel processing with genetic algorithms, and
   applications.

   IEEE Expert is a magazine of applied AI, not a transactions or a
   journal.  The Magazine is a bridge between the research community and
   the user community.  It aims to publish original articles that transfer
   to the user community ideas and tools that come out of research and
   development.  Clear, not overly formal, writing is essential.  Its
   readers are users, developers, managers, researchers, and purchasers who
   are interested in databases, expert systems, and artificial intelligence,
   with particular emphasis on applications.  They want to learn about the
   tools, techniques, concepts, aids, and systems that have potential for
   real-world applications.  Conceptual or theoretical articles are
   welcome, provided they serve the above function of clarifying and
   presenting ideas of potential importance to applications.

   Submissions should be written according to the IEEE Expert style.  The
   final articles should be about 8-9 printed pages, with about 10-12
   references.  All articles submitted will be carefully reviewed.
   Articles accepted on technical grounds are subject to copy editing by
   the Managing Editor's staff for clarity and expressiveness.  Authors are
   asked to submit six copies (hard-copy only) of their article by October
   1, 1991 to the Guest Editor:

	   John J. Grefenstette
	   Navy Center for Applied Research in AI
	   Code 5514
	   Naval Research Laboratory
	   Washington, DC 20375-5000
	   USA
	   email: gref@aic.nrl.navy.mil

------------------------------
End of Genetic Algorithms Digest
******************************
