Genetic Algorithms Digest   Friday, 9 December 1988    Volume 2 : Issue 24

 - Send submissions to GA-List@AIC.NRL.NAVY.MIL
 - Send administrative requests to GA-List-Request@AIC.NRL.NAVY.MIL

Today's Topics:
	- Administrivia & Reminder about Conference Deadline
	- Re: GAs for control systems
	- Alternative knowledge representations for GA learning (2)
	- GAs and neural nets (2)

--------------------------------

Date: Thu, 8 Dec 88 13:34:21 EDT
From: GA-List Moderator <GA-List-Request@AIC.NRL.NAVY.MIL>
Subject: Administrivia & Reminder about Conference Deadline

Sorry for the long delay between postings.
The host at AIC.NRL.NAVY.MIL has had serious problems
with its network connection since mid-November.
Please re-submit any messages that got bounced back to
you since then.

Reminder:  There are 17 shopping days till Christmas and
63 writing days till the submission deadline for ICGA 89!!
Please submit your work!

-- John

--------------------------------

From: nvuxh!hall@bellcore.bellcore.com
Date: 14 Nov 1988  16:23 EST
Subject: Re: GAs for control systems

POWELL DAVID J <POWELL@crd.ge.com> writes:
> [Stuff about writing a general program to optimize parameters using
> domain-specific rules of thumb, and now interested in adding GA
> as general heuristic. Has two questions:]
>	1. How to speed up GA if I have some rules of thumb without
>	forcing the GA to a local optimum?

Good question.  GA's have a problem with prematurely converging to
local optima even without rules of thumb.  Domain-specific heuristic
genetic operators can rapidly create a "superindividual" whose
genetic material would swamp the population and cause the GA
to converge to this local optimum.  I see three approaches to
circumvent this problem: 

   A) TWEAKING: If you must use domain-specific operators, make them
      "tweaking" operators, which result in only small changes
      in the representation.  A GA can then efficiently search, combining
      mutations which occurred on different individuals.  One must be
      careful with the probability of applying these tweaking operators;
      if the probability is too large, you will make large changes quickly
      and drive the GA to a local optimum.  One must also be careful
      to have a coherent solution space.  In other words,
      small changes in your representation (relative to your genetic
      operators) should not cause large changes in evaluations.
      For supporting empirical evidence and additional insights,
      see the soon-to-be-published paper "Knowledge Based Assistance
      to Genetic Search in Large Design Spaces" by Peter Clitherow
      at my company, Bellcore.  One piece of advice directly from Peter:
      "Seemingly 'good' heuristic changes can in fact be specious,
      and result in useless search!"

   B) SPECIATION: If you must use "strong" domain-specific operators,
      which can create "superindividuals", encourage the formation
      of niches as a means of preserving population diversity. 
      Niches will result if you make the fitness of an individual a
      function of its "worth" AND the similarity of this individual
      to the rest of the population.  A unique, good individual will
      have a very high fitness; however, as its genetic heritage begins
      to spread, it will become similar to the rest of the population,
      and its fitness will tend towards the average.  The number of
      individuals in a niche will be roughly proportional to the value
      of a solution in that niche.  Similarity can be genotypic or
      phenotypic.  See "Genetic Algorithms with Sharing for
      Multimodal Function Optimization" by Goldberg and Richardson
      in the Proceedings of the Second International Conference on
      GA's. They define "sharing functions" as a means of reducing
      the fitness of an individual according to its similarity to other
      individuals.  Unfortunately, they offer little insight as to
      how to choose a sharing function, so you would probably have
      to experiment a bit to make it work for you.

   C) ARGOT: Use the ARGOT algorithm, which adapts its
      representation and never converges.  See "The ARGOT Strategy:
      Adaptive Representation Genetic Optimizer Technique" by
      Shaefer in the Second Proceedings.
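
   The sharing idea in (B) can be sketched quite compactly.  Below is a
   minimal illustration of a Goldberg-and-Richardson-style triangular
   sharing function over bitstring genotypes, using Hamming distance as
   the similarity metric; the niche radius sigma_share and the distance
   metric are illustrative choices you would have to tune, not
   prescriptions from their paper.

   ```python
   # Minimal sketch of fitness sharing (Goldberg & Richardson style).
   # Assumptions: bitstring genotypes, genotypic Hamming distance, and
   # an illustrative niche radius sigma_share -- all tunable choices.

   def hamming(a, b):
       """Genotypic distance between two equal-length bitstrings."""
       return sum(x != y for x, y in zip(a, b))

   def sharing(d, sigma_share, alpha=1.0):
       """Triangular sharing function: 1 at d=0, falling to 0 at sigma_share."""
       if d < sigma_share:
           return 1.0 - (d / sigma_share) ** alpha
       return 0.0

   def shared_fitness(population, raw_fitness, sigma_share=3):
       """Divide each raw fitness by the individual's niche count."""
       shared = []
       for i, indiv in enumerate(population):
           niche_count = sum(sharing(hamming(indiv, other), sigma_share)
                             for other in population)  # includes self (d=0)
           shared.append(raw_fitness[i] / niche_count)
       return shared

   # A lone individual keeps its raw fitness; crowded ones are penalized.
   pop = ["11110000", "11110001", "00001111"]
   print(shared_fitness(pop, [10.0, 10.0, 10.0]))
   ```

   The effect is exactly the niche behavior described above: as a good
   individual's genetic heritage spreads, its niche count grows and its
   shared fitness falls toward the population average.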

"Tweaking" is probably more limiting than you would hope, since it
sounds like you want a general mechanism that can use powerful
domain-specific heuristics.  "Speciation" would probably work well;
however, the extra overhead of the added robustness should be
considered.  Also, since you desire a general mechanism, you will
have to define a general sharing function based on genotypic
similarity, which may or may not work well for most optimization
problems.  This approach has the added advantage that the GA will
produce multiple solutions when optimizing multimodal functions. 
This is probably of value for your system; it could present several
different solutions in order of their values for the human user to
examine.  Finally, "ARGOT" is probably a lot more than you want to
get into.

>
>	2. Proper selection of population size and parameter settings.
>

You've got a program that optimizes parameters... use it!  Parameter
settings of GA's have been set using GA's, but hill-climbing and
random search would probably suffice.  You mentioned you have
GENESIS - it gives typical values of parameters as defaults, so you
can start there.  Within a reasonable range of values, the
parameters have limited effect on performance, so don't worry about
finding the optimal setting.  Furthermore, the optimal settings
change over the course of a GA run.  Crossover is most valuable
early to produce good solutions; mutation is most valuable later
to optimize already good solutions.
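
The random-search suggestion above is only a few lines of code.  In
the sketch below, evaluate() is a hypothetical stand-in: in practice
it would run your GA (e.g. GENESIS) with the given settings and
return average best-of-run performance over several seeds.  The
surrogate function and the parameter ranges are purely illustrative.

```python
# Minimal random-search sketch for tuning GA parameter settings.
# evaluate() is a hypothetical stand-in for "run the GA and score it".
import random

def evaluate(pop_size, crossover_rate, mutation_rate):
    # Illustrative surrogate with a known optimum near the commonly
    # used defaults (pop 50, crossover 0.6, mutation 0.001).
    return -((pop_size - 50) ** 2 / 2500.0
             + (crossover_rate - 0.6) ** 2
             + (mutation_rate - 0.001) ** 2)

def random_search(trials=200, seed=1):
    rng = random.Random(seed)
    best_score, best_params = float("-inf"), None
    for _ in range(trials):
        params = (rng.randint(10, 200),     # population size
                  rng.uniform(0.0, 1.0),    # crossover rate
                  rng.uniform(0.0, 0.1))    # mutation rate
        score = evaluate(*params)
        if score > best_score:
            best_score, best_params = score, params
    return best_params, best_score

params, score = random_search()
print(params, score)
```

Since the parameters have limited effect within a reasonable range,
even this crude search should land you close enough to the defaults
that further tuning buys little.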

>I have also recently read about Classifier systems as described by
>Holland in Machine Learning and his book on Induction. He seems to
>address the solution to my problems with his classifier system. 

Ack. Barf. Pooey. No. You have a classic function optimization problem.
Classic GA's (e.g. GENESIS) are designed for function optimization. 
Absolutely do not use a classifier system.  Classifier systems are
designed to induce programs, not to induce parameters. Theoretically,
classifier systems could induce parameters via inducing programs that
output parameters, but practically speaking, forget it!  [I would
generally advise against Michigan approach classifier systems for
program induction as well, but that's another story.]

> My mail address is powell@crd.ge.com.

Mine is hall%nvuxh.UUCP@bellcore.COM

Michael R. Hall

--------------------------------

Date: 2 Nov 88 23:48:00 GMT
From: brian@caen.engin.umich.edu (Brian Holtz)
Subject: Alternative knowledge representations for GA learning

[From AIList: ]

Does anyone know of any references that describe classifier systems whose
messages are composed of digits that may take more than two values?
For instance, I want to use a genetic algorithm to train a classifier
system to induce lexical gender rules in Latin.  Has any work been done
on managing the complexity of going beyond binary-coded messages, or
(better yet) encoding characters in messages in a useful, non-ASCIIish way?
I will summarize and post any responses.

--------------------------------

Date: Thu, 17 Nov 88 12:45:04 +0100
From: mcvax!dit.upm.es!lfs@uunet.UU.NET (Luis Fernando Solorzano Corral)
Subject: Alternative knowledge representations for GA learning

Does anyone know of any references that describe Darwinian machine
learning (genetic learning algorithms) applied to knowledge
representations other than classifier systems?

Please send me e-mail if you can give me any kind of help.

----------------------------------------------------------------------------
Luis F. Solorzano        			E-Mail:  lfs@dit.upm.es
PhD Candidate                  

Dpto. Ingenieria Telematica                 
ETSI Telecomunicacion				tel: +34 1 4495700 
Ciudad Universitaria				     +34 1 4495762  
E-28040  MADRID            		  	fax: +34 1 2432077
SPAIN                      		        tlx: 47430 ETSIT E
----------------------------------------------------------------------------

--------------------------------

Date: Wed, 7 Dec 88 11:41:49 GMT
From: Nick Radcliffe <njr%itspna.edinburgh.ac.uk@NSS.Cs.Ucl.AC.UK>
Subject: GAs and neural nets

I am about to begin an investigation of Genetic Learning Algorithms
for layered, feed-forward Neural Networks and would appreciate any
information/comments/references anyone has about similar work.
Specifically, the process of training a Neural Network amounts to the
selection of an optimal set of weights (or connection strengths)
between the neurons it comprises.   A major subtlety that I see in
this problem is that in fully connected nets (and to a lesser degree
in partially connected nets) with n hidden nodes there are n!
equivalent optimal solutions which may be generated by permuting
the labels of the hidden units.   It seems to me very likely that
if no action is taken to overcome this, the crossover operation
will be very destructive unless cleverly implemented.
This is because even crossing over two equivalent networks which
use different labellings will not, in general, generate another
equivalent network.   I have various ideas about how to overcome
this, but it would clearly be silly to reinvent the wheel.
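
The symmetry itself is easy to demonstrate.  The toy network below
(pure Python, one hidden layer, weights chosen arbitrarily for
illustration) shows that permuting the hidden units, consistently in
both weight layers, leaves the network function unchanged up to
floating-point rounding, which is why n hidden units give n!
equivalent weight settings.

```python
# Permutation symmetry of hidden units in a feed-forward net.
import math

def forward(x, w_in, w_out):
    """Tiny 2-input / n-hidden / 1-output net with sigmoid hidden units."""
    hidden = [1.0 / (1.0 + math.exp(-sum(w * xi for w, xi in zip(row, x))))
              for row in w_in]
    return sum(wo * h for wo, h in zip(w_out, hidden))

w_in = [[0.5, -1.2], [2.0, 0.3], [-0.7, 0.9]]   # one row per hidden unit
w_out = [1.5, -0.4, 0.8]

# Relabel hidden units 0 and 2, permuting BOTH weight layers together.
perm = [2, 1, 0]
w_in_p = [w_in[i] for i in perm]
w_out_p = [w_out[i] for i in perm]

x = [0.6, -0.25]
print(forward(x, w_in, w_out), forward(x, w_in_p, w_out_p))
```

Crossing over two such relabelled but equivalent parents, however,
mixes rows that play different functional roles, which is exactly the
destructiveness described above.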

The only papers I am aware of in this area are one by Darrell Whitley
(Applying Genetic Algorithms to Neural Network Problems:
A Preliminary Report), and one that takes a hierarchical approach
by Eric Mjolsness, David Sharp and Bradley Alpert
(Scaling, Machine Learning and Genetic Neural Nets).
Neither of these addresses the permutation problem directly.

All help will be appreciated.

Nick

------------------------------------------------------------------------

Nick Radcliffe
Theoretical Physics
The King's Buildings
Edinburgh University
Edinburgh
Scotland

(031) 667 1081 x 2850

JANET: NickRadcliffe@uk.ac.ed

--------------------------------

Date: Tue, 6 Dec 88 13:55:13 PST
From: mcvax!enidbo!daniele@uunet.UU.NET (Daniele Montanari)
Subject: GAs and neural nets

I work with an Artificial Intelligence research group at Enidata, an Italian
company.  We are interested in neural networks (backprop, Hopfield) and
classifier systems (and more generally genetic algorithms).  I would like to
have my name on the GA mailing list.

Ciao

Daniele Montanari
Enidata
Bologna - Italy

--------------------------------

End of Genetic Algorithms Digest
********************************

