
Genetic Algorithms Digest   Friday, August 16 1991   Volume 5 : Issue 25

 - Send submissions to GA-List@AIC.NRL.NAVY.MIL
 - Send administrative requests to GA-List-Request@AIC.NRL.NAVY.MIL

Today's Topics:
	- Re: GAs vs. Bitwise Hillclimbers
	- Publication Opportunities in Machine Learning
	- ESPRIT III Proposal - Parallel GAs Environment & Applications

**********************************************************************

CALENDAR OF GA-RELATED ACTIVITIES: (with GA-List issue reference)

 IJCAI 91, International Joint Conference on AI, Sydney, AU   Aug 25-30, 1991
 First European Conference on Artificial Life (v5n10)         Dec 11-13, 1991
 Canadian AI Conference, Vancouver, (CFP 1/7)                 May 11-15, 1992
 10th National Conference on AI, San Jose, (CFP 1/15)         Jul 12-17, 1992
 ECAI 92, 10th European Conference on AI (v5n13)              Aug  3-7,  1992
 Parallel Problem Solving from Nature, Brussels, (CFP 4/15)   Sep 28-30, 1992

 (Send announcements of other activities to GA-List@aic.nrl.navy.mil)

**********************************************************************
----------------------------------------------------------------------

From: gref@AIC.NRL.Navy.Mil
Date: Thu, 15 Aug 91 10:26:14 EDT
Subject: Re: GAs vs. Bitwise Hillclimbers

   When is a hillclimber not a hillclimber?  In GA-List v5n22, I claimed
   that my example distinguished GAs from all hillclimbers.  As Dave Ackley
   points out in GA-List v5n23, if you take a broader view of hillclimbing,
   including algorithms that may or may not take a move up the hill, my
   claim is overreaching.  For example, my example may not always fool simulated
   annealing (SA) or Dave's Stochastic Hillclimber (SHC).  Dave also
   discusses some variations:

   > A small variation on John's function sharpens this point.  Suppose
   > that the bonus for all-ones groups was not 10, but 50.  In the
   > abstract, one might imagine such a change would not make much
   > difference: So the poles in the ditches are taller --- they're just as
   > rare as before, and they're still all in ditches.  But, with this
   > change, SHC can be run hotter than in John's function, and the
   > performance difference is dramatic:
   >
   > SHC (T=4.2)		[group bonus 50, data normalized to 100]
   >    Best (count)	Worst (count)	Mean	Std	Evals per run
   >   100.0 (97)		91.0 (1)        99.7    10.02   2100
   >
   > SHC found the optimum in 97 of 100 runs!  This function is EASY for
   > stochastic hillclimbing.  Why?  Because on the one hand, the slope
   > toward the misleading optimum is gradual enough compared to the
   > temperature that SHC moves nearly randomly up and down the ditch
   > walls, allowing it to stumble across the poles.  And on the other
   > hand, the poles are tall enough compared to the temperature that each
   > pole, once found, is very unlikely to be lost again.  These results
   > indicate that 2100 function evaluations are almost always plenty for
   > SHC to "lock onto" all ten poles.
   >
   > I haven't gotten GENESIS going on this variant yet.  I worry that if
   > anything it will do worse than it did on John's function, as
   > individuals with a few poles correct get a large fitness advantage and
   > drive out diversity in the other groups.  Perhaps someone more adept
   > than I with GENESIS can try it out and report back.  But in any case,
   > the basic point is clear: Simple Plateaus-type functions, even with
   > traps, fail to provide a qualitative distinction between GA's and HC's.
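
   The acceptance rule Dave describes can be sketched in a few lines.
   The following is a minimal stand-in, assuming an Ackley-style logistic
   acceptance probability 1/(1+exp(-delta/T)) and a 10-group, 5-bit
   version of the bonus function; the true function is defined in GA-List
   v5n22, so the structure and constants here are illustrative
   assumptions only:

```python
import math
import random

# Illustrative stand-in for John's function (the real definition is in
# GA-List v5n22 and is NOT reproduced here): 10 groups of 5 bits, a
# gradual slope rewarding zeros, plus a tall bonus ("pole") of 50 for
# an all-ones group.  Treat all constants as assumptions.
GROUPS, BITS = 10, 5

def f(x):
    total = 0.0
    for g in range(GROUPS):
        group = x[g * BITS:(g + 1) * BITS]
        if all(group):
            total += 50              # pole at the bottom of the ditch
        else:
            total += group.count(0)  # misleading slope toward all-zeros
    return total

def shc(T=4.2, evals=2100, seed=None):
    """Stochastic hillclimber: flip one random bit, accept the move with
    logistic probability 1/(1+exp(-delta/T))."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(GROUPS * BITS)]
    fx = f(x)
    best = fx
    for _ in range(evals):
        i = rng.randrange(len(x))
        x[i] ^= 1
        fy = f(x)
        if rng.random() < 1.0 / (1.0 + math.exp(-(fy - fx) / T)):
            fx = fy          # accept the move
        else:
            x[i] ^= 1        # reject: undo the flip
        best = max(best, fx)
    return best
```

   With the bonus at 50, the logistic rule makes a found pole very
   unlikely to be lost at T=4.2, which is the effect Dave describes.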

   I ran GENESIS (c_rate 0.6, m_rate 0.001) on this variation, with the
   following results over 100 runs:


     Best (count)  Worst (count)   Mean    Std     Evals till best
     100.0 (52)    73.0 (1)        94.7    6.2       1691

   Since Dave tried problem-specific parameter settings for SHC, I tried a
   similar approach, raising the mutation rate from 0.001 to 0.025, and
   allowing up to 10000 trials, with the following results:

     Best (count)  Worst (count)   Mean    Std     Evals till best
     100.0 (95)    91.0 (5)        99.6    1.97     4518

   While this is competitive with SHC, there is probably some other
   variation (changing the bonus to 1000?) on which the GA would do
   worse.  But this gets into practical issues of avoiding premature
   convergence, such as limiting the maximum number of offspring, using
   rank-based selection, etc.
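
   For readers who want to reproduce this kind of experiment, the
   following is a minimal generational GA showing where the c_rate and
   m_rate parameters enter.  This is not the GENESIS implementation;
   the fitness function, population size, and selection details are
   illustrative assumptions:

```python
import random

def ga(fitness, nbits, pop_size=50, c_rate=0.6, m_rate=0.001,
       max_trials=10000, seed=None):
    """Minimal generational GA: fitness-proportionate selection,
    one-point crossover at rate c_rate, per-bit mutation at rate m_rate."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(nbits)]
           for _ in range(pop_size)]
    trials, best = 0, None
    while trials < max_trials:
        scored = [(fitness(ind), ind) for ind in pop]
        trials += pop_size
        scored.sort(key=lambda s: s[0], reverse=True)
        if best is None or scored[0][0] > best:
            best = scored[0][0]
        total = sum(s for s, _ in scored) or 1

        def select():
            # roulette-wheel (fitness-proportionate) selection
            r = rng.uniform(0, total)
            acc = 0.0
            for s, ind in scored:
                acc += s
                if acc >= r:
                    return ind
            return scored[-1][1]

        nxt = []
        while len(nxt) < pop_size:
            a, b = select()[:], select()[:]
            if rng.random() < c_rate:           # one-point crossover
                cut = rng.randrange(1, nbits)
                a, b = a[:cut] + b[cut:], b[:cut] + a[cut:]
            for child in (a, b):
                for i in range(nbits):
                    if rng.random() < m_rate:   # per-bit mutation
                        child[i] ^= 1
                nxt.append(child)
        pop = nxt[:pop_size]
    return best
```

   Raising m_rate, as in the second run above, trades convergence speed
   for sustained diversity in the groups not yet set to all-ones.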

   This discussion highlights some of the difficulties in comparing GAs to
   other parameterized classes of algorithms, such as SA, SHC or the hybrid
   genetic hillclimbers that Dave described in his thesis.  I'm not sure
   what to make of a claim that a problem-specific set of parameters for
   algorithm class A beats a particular setting for class B.  In general,
   we have no way of finding the optimal parameters for these algorithms,
   so it is hard to get qualitative distinctions.  Any ideas out there?

   A final challenge: does anyone know of a (simple?) example that
   distinguishes GAs from simulated annealing?
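
   For anyone who wants to take up the challenge, a standard Metropolis
   simulated-annealing loop over bit strings is sketched below; the
   geometric cooling schedule and its constants are illustrative
   assumptions, not tuned settings:

```python
import math
import random

def simulated_annealing(f, nbits, T0=5.0, alpha=0.999, evals=5000,
                        seed=None):
    """Metropolis SA (maximizing): always accept uphill moves, accept
    downhill moves with probability exp(delta/T); geometric cooling."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(nbits)]
    fx = f(x)
    best, T = fx, T0
    for _ in range(evals):
        i = rng.randrange(nbits)
        x[i] ^= 1                 # propose a single-bit flip
        fy = f(x)
        delta = fy - fx
        if delta >= 0 or rng.random() < math.exp(delta / T):
            fx = fy               # accept
        else:
            x[i] ^= 1             # reject: undo the flip
        best = max(best, fx)
        T *= alpha                # cool the temperature
    return best
```

   Note that, unlike SHC's fixed temperature, the cooling schedule adds
   yet another parameter class to tune, which only sharpens the
   comparison problem raised above.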

   - John Grefenstette

------------------------------

From: dejong@AIC.NRL.Navy.Mil
Date: Fri, 16 Aug 91 09:31:23 EDT
Subject: Publication Opportunities in Machine Learning

   Several things have happened recently relating to publication 
   opportunities in the Machine Learning journal (Kluwer).

   First, the MLJ board has solicited another special issue on GAs 
   similar in spirit to the previous one, namely, to provide for the
   timely publication of extended versions of the high quality machine
   learning related research presented at ICGA-91.  Although the issue 
   is not restricted to work presented at ICGA-91, the timetable is 
   quite tight even for submission of extended versions of conference 
   papers.  John Grefenstette has agreed to serve as the editor of this 
   special issue.  See his announcement below for specific details.

   Second, I have been asked by the MLJ board and have agreed to serve 
   as an "area" editor for GA papers.  This is intended to improve the
   reviewing and handling of GA-related machine learning papers, and
   make them part of the normal flow of MLJ articles (not just relegated
   to special issues).  I encourage you to submit articles for review at
   any time in the usual manner as indicated on the back page of MLJ issues.
   Kluwer will route GA-related papers to me.

   Articles submitted to John Grefenstette for the upcoming special
   issue will be automatically considered for normal MLJ publication
   if time and/or space constraints prevent them from appearing in the
   special issue.

   +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

			    MACHINE LEARNING
			    Special Issue on
		Genetic Algorithms for Machine Learning

   The MACHINE LEARNING journal (Kluwer) has scheduled a third Special
   Issue on Genetic Algorithms.  This one will be edited by John
   Grefenstette.  The special issue will focus on the application of GAs to
   problems that are of particular interest to the machine learning
   community, including concept formation from examples, conceptual
   clustering, explanation-based learning, and learning policies for
   sequential decision problems.  Other topics of interest include the
   applications of PAC analysis methods to GAs, and the relationships
   between GAs and classifier systems and other methods of reinforcement
   learning (e.g., Q-learning).  MACHINE LEARNING does not usually publish
   papers that concentrate on optimization per se, so papers that focus
   primarily on the application of GAs to numerical or combinatorial
   optimization problems might be better suited to another forum.  Papers
   dealing with "hard core" GA theory must be presented in a way that is
   accessible to the machine learning community.

   The ideal paper will make a theoretical contribution supported by a
   computer implementation.  In addition to carefully describing the
   learning component, it should also discuss knowledge representation and
   performance assumptions.  The article should carefully evaluate the
   approach through empirical studies, theoretical analysis, or comparison
   to psychological phenomena, and should discuss its relation to other
   work in machine learning.

   A primary motivation for this special issue is to provide for the timely
   publication of expanded versions of some of the high quality research
   reported at this summer's ICGA-91.  In order to meet this accelerated
   publication schedule, the following timetable has been adopted:

   1 Oct 91:		Manuscripts submitted to Guest Editor for review
   15 Nov 91:		Reviewed manuscripts returned to authors
   1 Jan 92:		Revised papers returned to Editor

   All papers submitted will be carefully reviewed.  Authors are asked to
   submit four copies (hard-copy only) of their paper by October 1, 1991 to
   the Guest Editor:

   John J. Grefenstette
   Navy Center for Applied Research in AI
   Code 5514
   Naval Research Laboratory
   Washington, DC 20375-5000
   USA
   email: gref@aic.nrl.navy.mil

------------------------------

From: RISTAU@math.fu-berlin.de
Date: Fri, 16 Aug 91 09:30:53 EDT
Subject: ESPRIT III Proposal - Parallel GAs Environment & Applications

   UCL (University College London), GMD (Dr. Heinz Muhlenbein) and Brainware
   GmbH are trying to form a consortium to submit a B-type proposal to
   ESPRIT III to demonstrate applications and develop a programming
   environment for Parallel Genetic Algorithms.

   Since we are restricted to a B-type project and ESPRIT considers four
   partners (excluding associated partners) to be the optimum size, we are
   now seeking 1 - 2 Spanish or Portuguese expert companies to join our
   consortium.  If your organisation is interested in this project, please
   tell us:

	  1.  your area of interest
	  2.  your experience in this area
	  3.  whether you are interested in becoming a Prime Contractor
	  4.  suggestions for other partners

   Please reply to the above telephone/fax numbers, or the following e-mail
   addresses:

   RISTAU@math.fu-berlin.de                  (until the 24th August, 1991)
   steffen@kristall.chemie.fu-berlin.dbp.de  (from the 24th August, 1991)

------------------------------
End of Genetic Algorithms Digest
******************************
