
Genetic Algorithms Digest    Friday, 5 January 1990    Volume 4 : Issue 1

 - Send submissions to GA-List@AIC.NRL.NAVY.MIL
 - Send administrative requests to GA-List-Request@AIC.NRL.NAVY.MIL

Today's Topics:
	- Variable string length
	- GA/NN/AI review articles
	- Re: GA for ANN

******************************************************************************

CALENDAR OF GA-RELATED ACTIVITIES: (with GA-List issue reference)

IJCNN Session on Evolutionary Processes (v3n10)               Jan 15-19, 1990
Double Auction Tournament - Santa Fe Institute  (v3n12)       Mar 1990
Workshop on GAs, Sim. Anneal., Neural Nets - Glasgow (v3n15)  May 9, 1990
7th Intl. Conference on Machine Learning (submissions 2/1/90) Jun 21-23, 1990
Workshop Foundations of GAs (v3n19)                           Jul 15-18, 1990
Conference on Simulation of Adaptive Behavior, Paris (v3n21)  Sep 24-28, 1990

(Send announcements of other activities to GA-List@aic.nrl.navy.mil)

******************************************************************************

--------------------------------

Date: Fri, 29 Dec 89 15:28:51 -0500
From: androula@lips.ecn.purdue.edu (Ioannis Androulakis)
Subject: Variable string length

   I am interested in any work related to the use of strings
   that can vary their length during the course of the
   simulation. So far I am aware only of Smith's work
   (PhD, 1980).
   In your opinion, how effective can the search be, given that
   we are searching in two different directions at once: not only
   for the optimum string size, but also for the optimum
   performance at each size? The complication I have in mind is
   that there is an "infinite" number of hyperplanes we need to
   examine, as far as their dimensionality is concerned. What I
   mean by infinite is that for a string of fixed length L, we
   know the hyperplane dimensions will be 1, 2, ..., L-1. But
   when L is itself a variable to be optimized, it is not clear
   how the upper bound on the dimensionalities is defined.
   I would appreciate any help,
   Thank you,
   Ioannis P. Androulakis

   e-mail : androula@lips1.ecn.purdue.edu
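[Ed. note: the variable-length representation asked about above can be illustrated with a minimal sketch. The function below is illustrative only, not from Smith's work: a "cut and splice" style crossover that picks an independent cut point in each parent, so offspring lengths generally differ from both parents' lengths, which is what makes length itself evolvable.]

```python
import random

def splice_crossover(p1, p2, rng=random):
    # Independent cut points in each parent: because the cuts need
    # not match, the two children usually differ in length from
    # both parents, so string length is itself under selection.
    c1 = rng.randint(0, len(p1))
    c2 = rng.randint(0, len(p2))
    return p1[:c1] + p2[c2:], p2[:c2] + p1[c1:]

rng = random.Random(0)
a = [0, 1, 0, 1, 1, 0]
b = [1, 1, 1]
child1, child2 = splice_crossover(a, b, rng)
# Genetic material is conserved across the pair, but redistributed:
assert len(child1) + len(child2) == len(a) + len(b)
```

Note that with this operator the population's length distribution drifts, which is exactly why the hyperplane (schema) dimensionality has no fixed upper bound.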

--------------------------------

Date: Fri 29 Dec 89 13:23:03-PST
From: COLOMBANO@PLUTO.ARC.NASA.GOV
Subject: GA/NN/AI review articles

I am looking for good review articles relating GA with Neural Nets
and/or GA with symbolic AI. Any suggestions?
- Silvano Colombano
colombano@pluto.arc.nasa.gov 

--------------------------------

Date: Sat, 30 Dec 89 13:33:12 PST
From: David Rogers <drogers@riacs.edu>
Subject: Re: GA for ANN

A few comments on a (rather old) message from Rik:

    Date: Thu, 9 Nov 89 12:53:09 PST
    From: rik%cs@ucsd.edu (Rik Belew)
    Subject: Re: GA for ANN (2)
 
    >Let me begin with Hammerstrom's analysis, related in [rudnick@cse.ogc.edu
    >26 Oct 89].  His basic point seems to be that connectionist algorithms (he
    >uses NetTalk as his example) take a long time, and putting an evolutionary
    >outer loop around them can only make matters worse.  

The key problem here is one of the evaluation function, which (by putting
GAs on as the "outer loop") is the backprop algorithm.  As long as the 
GA is relying on some sort of backprop evaluation, Hammerstrom's point
has some merit.
 
Rik argues that the "outer loop" problem is not a new one, that it already
existed before GAs, and thus you may as well use GAs to do it right:

    >Consequently most investigators do use an 'outer-loop' iteration, 
    >using multiple restarts to improve their confidence in the solution 
    >found.  Genetic Algorithms (GAs) can help with these connectionist 
    >problems.
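[Ed. note: the outer-loop arrangement Rik describes can be sketched as follows. A toy hill-climbing routine stands in for the expensive backprop inner loop; all names and parameters here are illustrative, not from any poster's actual system.]

```python
import random

def inner_train(w, steps=20, lr=0.1):
    # Toy stand-in for the expensive inner loop (e.g. backprop):
    # gradient descent on f(w) = sum((w_i - 1)^2), minimum at w_i = 1.
    for _ in range(steps):
        w = [wi - lr * 2.0 * (wi - 1.0) for wi in w]
    return w

def fitness(w):
    # Lower final error -> higher fitness.
    return -sum((wi - 1.0) ** 2 for wi in w)

def ga_outer_loop(pop_size=10, dim=3, gens=15, rng=random):
    # Each individual is an initial weight vector: the GA searches
    # over restart points, and every evaluation runs the inner loop.
    pop = [[rng.uniform(-5, 5) for _ in range(dim)]
           for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=lambda w: fitness(inner_train(w)),
                        reverse=True)
        parents = scored[: pop_size // 2]
        # Refill the population with mutated copies of the parents.
        pop = parents + [
            [pi + rng.gauss(0, 0.5) for pi in rng.choice(parents)]
            for _ in range(pop_size - len(parents))
        ]
    return max(pop, key=lambda w: fitness(inner_train(w)))

best = ga_outer_loop(rng=random.Random(1))
```

The cost structure is visible in the sketch: every fitness call pays for a full inner-loop run, which is Hammerstrom's objection in miniature.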

Which is true, as long as we accept the structure of BP inner-loop
and GA outer-loop as the proper one.  That's why my work focuses on 
including GAs *inside* a NN structure, to operate *concurrently* with 
other algorithms inside of a single network.  Much of Hammerstrom's
argument is not applicable once you review different ways to combine
GAs with NNs.  To again quote Rik:

    >There are a tremendous number of ways the techniques can be combined,
    >with the GA as simply an outer loop around a conventional BP simulation
    >being one of the least imaginative. 

Certainly!  Let a thousand flowers bloom.  Another comment:
 
    >Bridges complains that we are compounding ignorance
    >when we try to consider hybrids of connectionist and GA algorithms
    >[clay@cs.cmu.edu 7 Nov 89].  But we are beginning to understand the basic
    >features of connectionist search (as function approximators, via 
    >analysis of internal structure, etc.), and there are substantial 
    >things known about the GA, too (e.g., Holland's Schema Theorem and 
    >its progeny).  These foundations do suggest deliberate research 
    >strategies and immediately eliminate others.

Again strong agreement.  I don't agree with the "compounding ignorance"
statement;  as an experimental scientist, I feel one shouldn't be afraid
to tread beyond the current limits of theory.  But that's not an argument
for random search:  there are good reasons why certain architectures
may hybridize well (in my work, Kanerva's sparse distributed memory and
GAs), and it seems a poor idea to wait for the theory before the 
experiments begin.  In any case, I doubt that anyone would DO the 
theory, unless propelled by interesting experimental results.

David Christopher Rogers
drogers@riacs.edu

--------------------------------

End of Genetic Algorithms Digest
********************************

