
Genetic Algorithms Digest    Wednesday, 1 November 1989    Volume 3 : Issue 17

 - Send submissions to GA-List@AIC.NRL.NAVY.MIL
 - Send administrative requests to GA-List-Request@AIC.NRL.NAVY.MIL

Today's Topics:
	- New feature: calendar of GA activities
	- TR available: Evolution, Learning and Culture too!
	- GA for ANN design:  questions


***************************************************************************

CALENDAR OF GA-RELATED ACTIVITIES: (with GA-List issue reference)

1-2 Dec 89 - NIPS Workshop: NNs and GAs (v3n16)
15-19 Jan 90 - IJCNN Session on Evolutionary Processes (v3n10)
Mar 90 - Double Auction Tournament - Santa Fe Institute  (v3n12)
9 May 90 - Workshop on GAs, Sim. Anneal., Neural Nets - Glasgow (v3n15)
21-23 Jun 90 - 7th Intl. Conference on Machine Learning (submissions 1 Feb 90)

(Send other activities to GA-List@aic.nrl.navy.mil)

***************************************************************************
--------------------------------

Date: Tue, 24 Oct 89 19:33:27 PDT
From: rik%cs@ucsd.edu (Rik Belew)
Subject: TR available: Evolution, Learning and Culture too!

				   
		   EVOLUTION, LEARNING AND CULTURE:
	   Computational metaphors for adaptive algorithms
				   
			   Richard K. Belew
	      Cognitive Computer Science Research Group
		Computer Science & Engr. Dept. (C-014)
		    Univ. California at San Diego
			  La Jolla, CA 92093
			  rik%cs@ucsd.edu
				   
		    CSE Technical Report #CS89-156

Potential interactions between connectionist learning systems and
algorithms modeled after evolutionary adaptation are becoming of
increasing interest.  In a recent, short and elegant paper Hinton and
Nowlan extend a version of Holland's Genetic Algorithm (GA) to
consider ways in which the evolution of species and the learning of
individuals might interact.  Their model is valuable both because it
provides insight into potential interactions between the {\em natural}
processes of evolution and learning, and as a potential bridge between
the {\em artificial} questions of efficient and effective machine
learning using the GA and connectionist networks.  This paper begins
by describing the GA and Hinton and Nowlan's simulation.  We then
analyze their model, use this analysis to explain its non-trivial
dynamical behaviors, and consider the sensitivity of the simulation to
several key parameters.
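
For readers who don't know the Hinton and Nowlan model, here is a minimal
Python sketch of their simulation.  The constants (20 genes, 1000 learning
trials, a 1:1:2 mix of correct/wrong/plastic alleles, fitness rewarding
unspent trials) follow their 1987 paper; the selection and crossover details
are one plausible reading, not the code analyzed in this report:

```python
import random

GENOME_LEN = 20      # needle-in-a-haystack target: all 20 positions correct
TRIALS = 1000        # random learning trials per lifetime

def random_genome():
    # Initial allele mix: 25% correct ('1'), 25% wrong ('0'), 50% plastic ('?')
    return [random.choice(['1', '0', '?', '?']) for _ in range(GENOME_LEN)]

def fitness(genome):
    """Lifetime learning: plastic positions are guessed afresh each trial.
    Finding the all-correct setting early leaves more trials unspent, which
    is rewarded; a hard-wired wrong allele makes success impossible."""
    if '0' in genome:
        return 1.0
    n_plastic = genome.count('?')
    for trial in range(TRIALS):
        if all(random.random() < 0.5 for _ in range(n_plastic)):
            return 1.0 + 19.0 * (TRIALS - trial) / TRIALS
    return 1.0

def next_generation(pop):
    # Fitness-proportional mate selection with single-point crossover.
    fits = [fitness(g) for g in pop]
    children = []
    for _ in range(len(pop)):
        a, b = random.choices(pop, weights=fits, k=2)
        cut = random.randrange(1, GENOME_LEN)
        children.append(a[:cut] + b[cut:])
    return children
```

Plastic alleles let selection "see" genotypes that are merely close to the
target, which is the learning-guides-evolution effect the abstract analyzes.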

Our next step is to interpose a third adaptive system --- culture ---
between the learning of individuals and the evolution of populations.
Culture accumulates the ``wisdom'' of individuals' learning beyond the
lifetime of any one individual but adapts more responsively than the
pace of evolution allows.  We describe a series of experiments in
which the most minimal notion of culture has been added to the Hinton
and Nowlan model, and use this experience to comment on the functional
value of culture and similarities between and interactions among these
three classes of adaptive systems.

	-------------------------------------------------------

Copies of this technical report are available by sending $3 (and
asking for Technical Report #CS89-156) to:
	Ms. Kathleen Hutcheson
	CSE Dept. (C-014)
	Univ. Calif. -- San Diego
	La Jolla, CA 92093

--------------------------------

Date: Thu, 26 Oct 89 12:11:53 PDT
From: rudnick@cse.ogc.edu (Mike Rudnick)
Subject: GA for ANN design:  questions

My advisor, Dan Hammerstrom, recently read the Harp, Samad, and Guha
Honeywell tech report.  Dan is not a GA person and is skeptical of the
practicality of using GAs for ANN (artificial neural network) design.
Of course, GA for ANN design may still be of interest even if it is
computationally impractical, but THE central question for me is
computational feasibility.

Dan posed a hypothetical (but not unrealistic) GA simulation which
raises some interesting issues and questions regarding the realistic,
practical use of GAs for the design of ANNs.  I'm not well enough read
in the GA literature to have a good feeling for what the answers are,
or even if answers are known.  Hence, I'm posting this query to get
feedback and generate discussion about the prospects for using GA for
ANN design.  Below is my paraphrase of Dan's hypothetical simulation.
It is based on the Harp et al. approach to ANN design, but the
issues raised are generic.

> Assume we are trying to design an ANN of roughly the size of NetTalk,
> which has approximately 20,000 connections.  Assume a population of
> 1000 networks which is about 30x larger than the population Harp & co.
> assumed for their character recognition network (Harp had a population
> size of 30).  Although NetTalk is only 10x the size of that network, I
> felt that the population size had to grow somewhat faster than network
> size.  I also assumed about 10,000 epochs would be required to achieve
> the right network via the GA search.  This is about 200x more epochs
> than the OCR example, but NetTalk is 10 times larger and I have
> trouble believing that these simulations scale linearly, so 10,000
> generations did not seem unreasonable.
> 
> NetTalk required 60,000 training vectors, so if each member of each
> population must be trained that long and if it takes say, 10
> operations per connection in training mode (that is typical in
> back-propagation), you get (after multiplying 1000 x 20000 x 60000 x
> 10000 x 10) about 10^17 operations.  If you have a Cray-2, which does
> about a billion operations per second, you can figure it will take on
> the order of a year.  Even the most massively parallel machines
> envisioned on the horizon will not come close to doing something like
> this in a few days of processing.  And this is with a moderately sized
> network!  Even with good algorithms you might knock off a couple
> orders of magnitude, but then as networks get larger you'll lose that
> gain rapidly.
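
As an arithmetic check using only the figures quoted above (no new data),
the product comes to about 1.2 x 10^17 operations, which at 10^9
operations/second is a few years of Cray-2 time -- consistent with the
post's order-of-magnitude conclusion:

```python
# Figures quoted in the post above.
pop_size     = 1_000     # networks per generation
connections  = 20_000    # NetTalk-scale network
vectors      = 60_000    # training vectors per network
generations  = 10_000    # GA generations
ops_per_conn = 10        # back-propagation operations per connection

total_ops = pop_size * connections * vectors * generations * ops_per_conn
print(f"total operations: {total_ops:.2e}")            # 1.20e+17

seconds = total_ops / 1e9                              # Cray-2 at ~1e9 op/s
print(f"Cray-2 time: {seconds / (365 * 24 * 3600):.1f} years")   # ~3.8 years
```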

Now for the questions.  Please keep in mind the hypothetical
simulation described above, as that is the motivation for these
questions.

1) In most of the GA literature I've read, population size has been
kept in the range of 30 to 50 individuals, although some simulations
have used far larger populations (e.g., Hillis' co-evolution simulation
presented at the Emergent Computation conference).  Should population
size be scaled with problem difficulty, and if so, how should (or does)
it scale?

2)  How does the number of epochs (GA population iterations) scale
with both population size and problem difficulty?

I've been assuming that the population size would stay in the 30 to
50 range even as ANN complexity (i.e., problem complexity) increases.
Is this naive?

Both 1) and 2) can be viewed as part of the more general question: How
does computational complexity scale as the ANN problem difficulty
increases, or more specifically, as ANN size increases?
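
One way to make that general question concrete is a parametric cost
model in which the unknown scaling exponents are explicit inputs.
Everything here is hypothetical -- the exponents pop_exp and gen_exp
stand for the very answers questions 1) and 2) are asking for, and the
base figures are arbitrary reference points, not results:

```python
def ga_ann_cost(n_connections, pop_exp, gen_exp,
                base_pop=30, base_gens=500,
                base_size=2_000, vectors=60_000, ops_per_conn=10):
    """Hypothetical total-operation count for a GA search over ANNs,
    assuming population and generation counts grow as powers of network
    size relative to a reference problem of base_size connections."""
    scale = n_connections / base_size
    pop = base_pop * scale ** pop_exp
    gens = base_gens * scale ** gen_exp
    # Per-individual training cost: vectors * connections * ops.
    return pop * gens * n_connections * vectors * ops_per_conn

# Fixed 30-ish population (pop_exp = 0) versus linear growth (pop_exp = 1)
# for a 10x larger network:
fixed  = ga_ann_cost(20_000, pop_exp=0.0, gen_exp=1.0)
linear = ga_ann_cost(20_000, pop_exp=1.0, gen_exp=1.0)
print(f"linear/fixed cost ratio: {linear / fixed:.0f}x")   # 10x
```

Whether the true exponents are near 0 or near 1 is exactly what decides
whether the hypothetical simulation above costs a machine-year or a
machine-decade.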

Answers?  Observations?  Comments?

Mike Rudnick			CSnet:	rudnick@cse.ogc.edu
Computer Science & Eng. Dept.	UUCP:	{tektronix,verdix}!ogccse!rudnick
Oregon Graduate Center		(503) 690-1121 X7390 (or X7309)
19600 N.W. von Neumann Dr.	
Beaverton, OR. 97006-1999	

--------------------------------

End of Genetic Algorithms Digest
********************************

