
Genetic Algorithms Digest   Monday, August 26 1991   Volume 5 : Issue 26

 - Send submissions to GA-List@AIC.NRL.NAVY.MIL
 - Send administrative requests to GA-List-Request@AIC.NRL.NAVY.MIL

Today's Topics:

	- GA vs Simulated Annealing
	- On x-uniform crossover where x<<0.5
	- Applications of AI to science
	- Request for Grefenstette citations
	- Rick Riolo's source code
	- Need information on GA-related conferences

**********************************************************************

CALENDAR OF GA-RELATED ACTIVITIES: (with GA-List issue reference)

 IJCAI 91, International Joint Conference on AI, Sydney, AU   Aug 25-30, 1991
 First European Conference on Artificial Life (v5n10)         Dec 11-13, 1991
 Canadian AI Conference, Vancouver, (CFP 1/7)                 May 11-15, 1992
 10th National Conference on AI, San Jose, (CFP 1/15)         Jul 12-17, 1992
 ECAI 92, 10th European Conference on AI (v5n13)              Aug  3-7,  1992
 Parallel Problem Solving from Nature, Brussels, (CFP 4/15)   Sep 28-30, 1992

 (Send announcements of other activities to GA-List@aic.nrl.navy.mil)

**********************************************************************
----------------------------------------------------------------------

From: khushro@zip.eecs.umich.edu
Date:  Sat, 17 Aug 91 19:12:53 EDT
Subject: GA vs Simulated Annealing

   > A final challenge: does anyone know of any (simple?) example that
   > distinguishes GAs from simulated annealing?

   That might be a meaningless question, since, given an algorithm "my_alg",
   we can always find a problem that is my_alg_hard.  GA and SA use different
   search methods.  The GA works by hierarchically finding and concatenating
   small subsolutions (schemata) embedded in all the full solutions (members
   of the population) tried out (at least according to the SBBH, and sorry to
   bore the GA community with this explanation).  SA, on the other hand, works
   by hierarchically descending valleys and subvalleys in the cost function
   as the temperature is lowered.  There was a recent paper arguing that SA
   will work well if the search space is fractal, so SA-hard problems can be
   concocted on that basis.

   One major difference between the performance of GA and SA on engineering
   optimization problems is evaluation time.  I will illustrate with the TSP.
   SA can work by moving only one city in the tour at a time, so it needs
   only incremental evaluations, i.e., it evaluates only 6 changed edges to
   score a new (say, 1000-city) tour.  This is a constant-time inner loop.
   For the GA, each new evaluation takes linear time.  We can say that SA
   correspondingly gets incremental improvements, whereas the GA can
   potentially get major improvements from each evaluation; in practice,
   though, this usually goes against the GA, at least in the second half of
   the run, when the population is moderately good and most of the offspring
   are exhaustively evaluated and thrown away.  A "fix" for the problem is to
   reduce the number of bits taken from one parent as the run proceeds and
   try incremental evaluation, but then the GA will become more and more
   SA-like, only with no temperature.

   Therefore, any comparison between GA and SA in evaluation-intensive 
   problems is unfair without a comparison of the CPU time. 
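   To make the incremental-evaluation point concrete, here is a minimal
   sketch in Python (the function names are mine, not from any published
   code; the move shown is the simple "relocate one city" move described
   above, and it assumes the reinsertion point is not adjacent to the
   removed city):

   ```python
   def tour_length(tour, dist):
       """Full O(n) evaluation -- what every new GA offspring costs."""
       n = len(tour)
       return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

   def move_city_delta(tour, dist, i, j):
       """O(1) change in tour length from removing the city at position i
       and reinserting it between positions j and j+1.  Only the six
       affected edges are touched, never the whole tour.
       Assumes j is not adjacent to i."""
       n = len(tour)
       a, b, c = tour[(i - 1) % n], tour[i], tour[(i + 1) % n]
       u, v = tour[j % n], tour[(j + 1) % n]
       removed = dist[a][b] + dist[b][c] + dist[u][v]
       added = dist[a][c] + dist[u][b] + dist[b][v]
       return added - removed
   ```

   An SA inner loop calls only move_city_delta to accept or reject a move,
   while a GA must call tour_length on every offspring, which is where the
   linear-versus-constant cost gap comes from.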

   Of course, one can always think of other problems in which SA can't make
   incremental evaluations, and in which the GA is therefore not at a
   disadvantage, e.g., optimization of turbine parameters, where the entire
   turbine has to be simulated for each evaluation.

   Does anyone have a comparison of GA vs SA on the TSP (with incremental
   evaluation for SA, and run-time stats)?

   >In general,
   > we have no way of finding the optimal parameters for these algorithms,
   > so it is hard to get qualitative distinctions.  Any ideas out there?

   Some work has been done on deriving the optimum temperature schedules for
   SA by sampling the search space.

   -KHUSHRO

------------------------------

From: christer@cs.umu.se
Date:  Sat, 24 Aug 91 16:52:03 +0200
Subject: On x-uniform crossover where x<<0.5

   I have yet to receive my copy of the ICGA91 proceedings, and have
   therefore not read Spears and De Jong's paper "On the virtues of
   parameterized uniform crossover", so bear with me if this is covered by
   their paper (which, judging from the title, it probably is).

   Mutation is normally implemented as an operator that sweeps along the
   individual and, with a predefined probability, flips the bit at the
   current position.  This makes the expected number of mutated bits a
   function of the chromosome length.  Traditional crossover, on the other
   hand, is implemented as an operator which, with a predefined probability,
   cuts the two parent individuals at one, two or some other predefined
   number of positions and then exchanges the resulting substrings.  The
   number of cuts is independent of the chromosome length.  What if we made
   the crossover operator a function of the chromosome length, just as the
   mutation operator is?  Well, the uniform crossover operator is a function
   of the chromosome length, but it seems that everyone uses only the
   0.5-uniform crossover (this is not really true; see below).  What
   performance could we expect from uniform crossover using a different
   'string change' probability?
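   For concreteness, a minimal sketch of such a parameterized (x-uniform)
   crossover in Python (names are my own, not taken from any of the papers
   cited):

   ```python
   import random

   def x_uniform_crossover(p1, p2, x=0.5, rng=random):
       """x-uniform crossover: swap each bit position between the two
       parents independently with probability x.  x=0.5 is the usual
       uniform crossover; a small x (say 0.04) exchanges only a few
       bits per mating, with the expected number of exchanged bits
       scaling with chromosome length, much as per-bit mutation does."""
       c1, c2 = list(p1), list(p2)
       for i in range(len(c1)):
           if rng.random() < x:
               c1[i], c2[i] = c2[i], c1[i]
       return c1, c2
   ```

   The expected number of exchanged bits is x times the chromosome length,
   which is exactly the mutation-style length dependence asked about above.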

   In "An adaptive crossover distribution mechanism for genetic algorithms"
   (ICGA87), Schaffer and Morishima presented an experiment in which they
   added information to the chromosome, used by the crossover operator to
   determine where, and how many, cuts would be made.  The results presented
   in table 4 imply that one crossover is made approximately every 24 bits.
   This would be equal to a 0.04-uniform crossover.

   The new parameter settings suggested by Schaffer (again) et al. in "A
   study of control parameters affecting online performance of genetic
   algorithms for function optimization" (ICGA89) would, based on the
   functions f1-f10 used, suggest that one crossover is made approximately
   every 35 bits, which equals a 0.03-uniform crossover.

   However, the only study I know of that involves a uniform crossover other
   than the 0.5-uniform crossover, "Biases in the crossover landscape"
   (ICGA89) by Schaffer (yet again!) et al., implies that 0.25-uniform
   crossover is _worse_ than 0.5-uniform crossover, and that 8-point
   crossover is the "best" crossover.  With the functions and parameter
   settings used for that study, 8-point crossover would approximately equal
   a 0.16-uniform crossover.
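   The conversions above all amount to dividing the number of crossover
   points by the chromosome length; a quick check of the quoted figures
   (using the chromosome lengths those figures imply, which are my reading,
   not numbers stated in the papers):

   ```python
   def equivalent_x(cuts, chromosome_length):
       """Approximate x-uniform rate equivalent to making `cuts`
       crossover points on a chromosome of the given length."""
       return cuts / chromosome_length

   print(round(equivalent_x(1, 24), 2))  # Schaffer & Morishima: 0.04
   print(round(equivalent_x(1, 35), 2))  # ICGA89 parameter study: 0.03
   print(round(equivalent_x(8, 50), 2))  # 8-point on a 50-bit string: 0.16
   ```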

   Without having done any empirical (or theoretical) studies, I would be
   tempted to say that a uniform crossover with a low 'string cross'
   probability could turn out to perform very well, were it not for the
   somewhat contradictory result just mentioned.

   Has anyone studied this?  (Is this what De Jong and Spears write about in
   their paper?)  Comments, anyone?

   | Christer Ericson                            Internet: christer@cs.umu.se |
   | Department of Computer Science, University of Umea, S-90187 UMEA, Sweden |

------------------------------

From: HCART%VAX.OXFORD.AC.UK@VTVM2.CC.VT.EDU
Date: Mon, 12 AUG 91 09:12:41 BST
Subject: Applications of AI to science

       As background information for a proposed book, I am collecting details
   on current, and projected, applications of Artificial Intelligence in
   science.  I am keen to hear from any worker (academic, commercial or
   other) who is working on, or plans to work on what they would regard as an
   application of AI to such fields as chemistry, physics, the life sciences,
   image analysis, etc.

       I do not need extensive information on how particular problems are
   tackled - just an outline of the problem, and how you see it being
   resolved using AI.  Nor am I interested only in problems in which AI can
   be shown to be of definite value; I would also be interested to hear of
   problems which you think might be amenable to AI methods, even if no one
   has tried using AI on them yet.

       Any material to be published - which will be described in the most
   general terms only - will be cleared first with the originator.

       I would be grateful for any information the net can provide.

       Thanks.

       Dr. Hugh Cartwright,  Physical Chemistry Laboratory, Oxford University,
                             Oxford, UK OX1 3QZ.  Tel (0865)-275483/275400
                                                  FAX (0865)-275410
                             e-mail  HCART@vax.oxford.ac.uk

------------------------------

From: wiley@aic.nrl.navy.mil (Cathy Wiley)
Date: Thu, 15 Aug 91 8:18:45 EDT
Subject: Request for Grefenstette citations
  
  I am Cathy Wiley, Librarian for the Navy Center for Applied Research in
  AI, and I have been asked to perform a citation count for John
  Grefenstette.  If you have published a paper that included a reference
  to one or more of Dr. Grefenstette's papers, please send me:
  
  (1) a copy of your paper by FAX to (202) 767-3172,
  
  OR
  
  (2) the complete citation of your paper, and the list of the references
  to Dr. Grefenstette's papers in your paper's bibliography, by email to:
  WILEY@AIC.NRL.NAVY.MIL,
  
  OR
  
  (3) a copy of your paper by surface mail to:
  
  	Cathy Wiley
  	Librarian, NCARAI
  	Code 5510
  	NRL
  	Washington, DC 20375-5000
  
  Thanks for your help.
  
  - Cathy Wiley

------------------------------

From: sterritt@mrj.com (Chris Sterritt)
Date: Tue, 13 Aug 91 22:17:05 EDT
Subject: Rick Riolo's source code

   If you have a moment, I'd appreciate a possible answer to a question I
   have about some source code.  I remember a couple of years ago, there was
   much talk about Rick Riolo's source code for his classifier system; I
   believe that was in support of a book?  At the time, he was swamped with
   requests for the code, and was in the process of finishing up some work,
   and so I didn't ask him for it then.  Is it still available?  Has he
   written the book?  Is the code available on the net, or is he still making
   floppy disks?  Is there some better classifier-type code available (I have
   GENESIS, of course!)?

	thanks very much in advance,
	chris sterritt
	sterritt@mrj.com

------------------------------

From: Mauro Manela <M.Manela@cs.ucl.ac.uk>
Date: Wed, 14 Aug 91 12:31:11 +0100
Subject: Need information on GA-related conferences

   Could someone kindly forward me details on forthcoming GA-related
   conferences and workshops, in particular the following:

   Canadian AI Conference, Vancouver,
   ECAI 92, 10th European Conference on AI
   SAB92
   ML92

   I am interested in submission details such as submission deadlines and
   contact (email) addresses.  Please mail this information directly to me
   at the following address: M.Manela@uk.ac.ucl.cs

   Thanks in advance,

	Mauro Manela
	Department of Computer Science
	University College London
	Gower Street	WC1E 6BT
	London

------------------------------
End of Genetic Algorithms Digest
******************************
