Genetic Algorithms Digest   Friday, 23 December 1988    Volume 2 : Issue 26

 - Send submissions to GA-List@AIC.NRL.NAVY.MIL
 - Send administrative requests to GA-List-Request@AIC.NRL.NAVY.MIL

Today's Topics:
	- Re: Classifier systems problems
	- A Preprint Abstract
	- Recursion
	- Neural Net/GA Bibliography

--------------------------------

From: nvuxh!hall@bellcore.bellcore.com
Date: 18 Dec 1988  19:51 EST
Subject: Re: Classifier systems problems

"Leonard" <XT.A08@Forsythe.Stanford.EDU> writes:

> Mr. Michael R. Hall  <hall%nvuxh.UUCP@bellcore.com> writes:
>
>      "[I would generally advise against Michigan approach
>       classifier systems for program induction as well, but
>       that's another story.]"
>
> Inasmuch as I am working with a Michigan style classifier system
> -- altered to be sure, but still a system that applies genetic
> operators to single production rules rather than to sets of them
> in the manner of either Smith or Grefenstette -- this is a story
> that I would honestly like to hear.  If I'm on the wrong track,
> I want to know sooner rather than later.

Okay.  This has probably been discussed many times before in this
digest, but I was asked, so I will commence classifier bashing...

First, let's make sure we all speak the same language:

Michigan Approach    
- Genetic systems in which the population is a set of rules.
  Individual rules are evaluated via the bucket brigade algorithm,
  receiving feedback from the environment in the form of "payoff."
  The whole population functions essentially as one production system. 
  This approach is synonymous with "classifier systems."  I use
  "rules" interchangeably with "classifiers."

Pittsburgh Approach  
- Genetic systems in which the population is a set of rule sets.
  Each individual, then, functions as a production system, and is
  evaluated by applying it for a short length of time to some
  problem and rating its performance.
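To make the two definitions concrete, here is a minimal Python sketch of how the two population structures differ.  All the names and the dummy fitness function are illustrative, not taken from any published system:

```python
import random

random.seed(0)

def random_rule(length=8):
    """A classifier: a condition string over {0, 1, #} plus an action bit."""
    condition = "".join(random.choice("01#") for _ in range(length))
    action = random.choice("01")
    return (condition, action)

# Michigan approach: the population IS one production system --
# a single set of rules, each carrying its own strength.
michigan_population = [{"rule": random_rule(), "strength": 1.0}
                       for _ in range(20)]

# Pittsburgh approach: each individual is a complete rule SET,
# evaluated on the problem as a stand-alone production system.
pittsburgh_population = [[random_rule() for _ in range(20)]
                         for _ in range(10)]

def fitness(rule_set):
    """Placeholder: rate one whole rule set by running it on the task."""
    return sum(cond.count("#") for cond, _ in rule_set)  # dummy score

scores = [fitness(ind) for ind in pittsburgh_population]
print(len(michigan_population), len(pittsburgh_population), max(scores))
```

Note that in the Michigan sketch fitness must somehow be attached to individual rules (hence the per-rule strength and the bucket brigade), while in the Pittsburgh sketch fitness attaches naturally to whole individuals.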

I should emphasize that classifier systems can learn from continuous
streams of payoff, while Pittsburgh approach systems are designed to
solve problems to a completion point and then receive feedback. 
Thus, the Michigan approach is more general than the Pittsburgh
approach.  For problems solvable by both, I advocate the Pittsburgh
approach.  For problems solvable by the Michigan approach but not
the Pittsburgh approach, I would generally advocate neural nets. 

The Michigan approach was developed in the late 70's at the
University of Michigan, when Holland diverged a bit from his
philosophy of "Adaptation in Natural and Artificial Systems."  This
divergence was motivated by a desire to model cognitive processes
and the result was classifier systems with the bucket-brigade
algorithm.  Meanwhile at the University of Pittsburgh, DeJong and
Smith developed the Pittsburgh approach, staying on firm
evolutionary ground.  

There are several differences between these two approaches, but I
will focus on only one to keep this as short as possible.
The primary difference is that the Michigan approach involves heavy
interactions among individuals, while the Pittsburgh approach
evaluates individuals separately. Hence, a credit assignment problem
exists in the Michigan approach, but not in the Pittsburgh approach.
The Michigan approach is akin to the adaptation seen in neural nets
and in capitalist economies.  The Pittsburgh approach fits the
evolutionary paradigm more closely, although I do not deny that
*some* interactions occur among individuals of a species during
natural evolution. 

One could imagine a spectrum of program evolution systems, running
from systems with totally interdependent individuals to systems with
totally independent individuals (from Michigan to Pittsburgh, if you
will).  There have been some forays into the middle of this
spectrum (e.g. Grefenstette's paper "Multilevel Credit Assignment in
a Genetic Learning System" from the second proceedings - he
used the bucket-brigade solely to cluster dependent rules in an
otherwise Pittsburgh approach GA), but for the most part, it has
been black and white, Michigan and Pittsburgh.

The bucket-brigade of the Michigan approach is the source of my poor
opinion of classifier systems.  I have little faith in it, because
the method has produced no successful applications (though many
"successful" applications have been referenced).  Furthermore, the bucket
brigade is painfully slow, since feedback can trickle back only one
classifier per iteration in a whole chain of classifiers. The
bucket-brigade has, however, produced many papers proposing ad hoc
solutions to the many internal problems caused by this method (e.g.
Holland's paper in the second proceedings, which is filled with
speculations about ad hoc modifications despite the Michigan
approach being a decade old). 
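To see the one-chain-link-per-iteration bottleneck concretely, here is a toy model of bucket-brigade payoff propagation.  The chain length, bid fraction, and payoff values are invented for illustration; this is not any published implementation:

```python
# Five classifiers fire in sequence: rule 0 -> rule 1 -> ... -> rule 4.
# Each episode, a firing classifier pays a fraction of its strength to
# its predecessor, so external payoff moves back one link per episode.
CHAIN = 5
BID_FRACTION = 0.1
PAYOFF = 100.0     # the environment pays only the last rule in the chain

strength = [10.0] * CHAIN

def one_episode(strength):
    """Run the chain once; each rule pays its bid to the rule before it."""
    s = strength[:]
    for i in range(CHAIN):
        bid = BID_FRACTION * s[i]
        s[i] -= bid
        if i > 0:
            s[i - 1] += bid        # bucket brigade: pay the supplier
    s[-1] += PAYOFF                # external payoff arrives at chain's end
    return s

episodes = 0
# Count episodes until the FIRST rule's strength rises above its start.
while strength[0] <= 10.0:
    strength = one_episode(strength)
    episodes += 1
print(episodes)   # prints 5: one episode to reach the tail, then one per link
```

The head of an N-rule chain does not feel the payoff at all until roughly N episodes have passed, which is the slowness complained about above.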

Perhaps the most commonly referenced "success" story of classifier
systems is David Goldberg's 1983 thesis on a gas pipeline
controller.  I have great respect for Goldberg's later work; however,
I would not label this gas pipeline controller a success story. 
As far as I can tell by reading his thesis, the rules induced by his
system were one-step independent rules.  In other words, the bucket
brigade failed to produce useful cooperation among rules.  Other
"success" stories are similar. If someone can show me a useful
classifier system which has induced long chains of rules, I will
be surprised (and yes, I know of papers implying that it is possible
in theory - I want empirical evidence).

Since this post is already long, I will save the rest for next time,
and leave you the following, which will be the subject of the next
post:

Occasionally through random mutations and crossover, classifiers are
produced capable of outputting a message that triggers another
classifier.  This is a necessary feature of classifier systems, but
it has an unfortunate consequence:

  MURPHY'S LAW OF CLASSIFIER SYSTEMS:  Harmful classifiers are produced 
  and "linked" to high strength classifiers more often than beneficial
  classifiers are produced and "linked" to high strength classifiers.

Comments and criticisms are welcome, especially if you have actually
developed a useful classifier system which can induce long chains of rules.

Michael R. Hall (hall%nvuxh.UUCP@bellcore.COM  OR  bellcore!nvuxh!hall)

--------------------------------

From: M Norman <mgn%ITSPNA.ED.AC.UK@CUNYVM.CUNY.EDU>
Date:       Wed, 21 Dec 88 02:20:19 GMT
Subject:    A preprint wot I wrote

The following preprint is available from both the Department of
Physics, and the Edinburgh Concurrent Supercomputer Project at the
address below.  The paper discusses the problem of connecting a number
of processors such as the Inmos Transputer, shows that GAs are good at
it, and describes a multiprocessor GA implementation based upon
protected subpopulations.  The paper has actually been hanging around
for a while because I've been away from Edinburgh, but subsequent
results have been even more encouraging!!!

A Genetic Approach to Topology Optimisation for Multiprocessor
Architectures

(Edinburgh Preprint 88/451)

Michael Norman
Department of Physics, University of Edinburgh,
Mayfield Road, Edinburgh.  EH9 3JZ. Scotland.

    mgn@uk.ac.ed.itspna            (Janet)
    mgn%itspna.ed@nss.cs.ucl.ac.uk        (Arpa)
    mgn%itspna.ed@UKACRL.BITNET        (bitnet)


Abstract

With the development of multiprocessor architectures which allow a
variable topology, in addition to the problem of optimising the
mapping of the program onto the hardware, there is also the problem of
finding the best connection topology for the processor hardware onto
which the program is being mapped.

This problem may be specified as an optimisation, where the
quality of a topology is determined by the speed at which it runs a
program, or alternatively modelled by some objective function.  The optimisation
must incorporate problem specific knowledge so that solutions
generated are valid in terms of the connectivity of the processor
hardware.

This paper describes a genetic approach to optimising the
topology of a multiprocessor architecture, and shows the results of
applying the technique to optimising the topology for two classes of
program.
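[As a concrete, heavily simplified illustration of the kind of search the abstract describes, the sketch below encodes a degree-constrained processor topology and scores it by graph diameter.  The representation, the four-link limit (suggested by the Transputer), and the objective are guesses made for illustration, not the method of the preprint:

```python
import random
from itertools import combinations

random.seed(1)
N_PROC, MAX_LINKS = 8, 4        # e.g. Transputers with four links each

def random_topology():
    """An individual: a set of edges respecting the per-node link limit."""
    edges = set()
    degree = [0] * N_PROC
    candidates = list(combinations(range(N_PROC), 2))
    random.shuffle(candidates)
    for a, b in candidates:
        if degree[a] < MAX_LINKS and degree[b] < MAX_LINKS:
            edges.add((a, b))
            degree[a] += 1
            degree[b] += 1
    return edges

def diameter(edges):
    """Longest shortest path; a crude stand-in for communication cost."""
    adj = {i: set() for i in range(N_PROC)}
    for a, b in edges:
        adj[a].add(b); adj[b].add(a)
    worst = 0
    for start in range(N_PROC):
        dist = {start: 0}
        frontier = [start]
        while frontier:                       # breadth-first search
            nxt = []
            for u in frontier:
                for v in adj[u]:
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        nxt.append(v)
            frontier = nxt
        if len(dist) < N_PROC:
            return N_PROC                     # disconnected: penalised
        worst = max(worst, max(dist.values()))
    return worst

population = [random_topology() for _ in range(20)]
best = min(population, key=diameter)
print(diameter(best))
```

A real GA would of course add crossover and mutation over these edge sets, with a repair step to keep the link constraint satisfied; only the encoding and objective halves are sketched here. -- ed.]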

Mike

--------------------------------

From: Mark Hughes <mcvax!camcon.co.uk!mrh@uunet.UU.NET>
Date: Tue, 20 Dec 88 15:41:57 GMT
Subject: Recursion

Is anyone applying genetic techniques to the development of GAs themselves?
(If so, please tell us about it).

Boggle!
-------------------  <mrh@camcon.co.uk>  or  <...!mcvax!ukc!idec!camcon!mrh>
|   Mark Hughes   |  Telex:   265871 (MONREF G) quoting: MAG70076
|(Compware . CCL) |  BT Gold: 72:MAG70076
-------------------  Teleph:  Cambridge (UK) (0)223-358855


[ I once applied a GA to the problem of finding high-performance
control parameters (population size, crossover rate, etc.) for
GAs. See "Optimization of control parameters for genetic algorithms",
J. Grefenstette, IEEE Trans. Sys. Man & Cybernetics, Jan. 1986.
More recently, Dave Schaffer incorporated crossover maps into
structures undergoing adaptation, thus allowing the GA to influence
its own crossover rate.  And of course Holland's broadcast language in
Ch. 8 of ANAS was meant to provide various kinds of punctuation
marks that would allow the adaptive operators to adapt themselves.
-- JJG ]
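[A toy in the spirit of the note above: an outer search over GA control parameters, scored by how well an inner GA does with them.  The OneMax task, the parameter ranges, and the use of a plain sweep at the outer level (rather than a full second GA, for brevity) are all illustrative choices, not Grefenstette's setup:

```python
import random

random.seed(2)
BITS = 20

def inner_ga(pop_size, mutation_rate, generations=30):
    """Run a minimal GA on OneMax; return the best fitness found."""
    pop = [[random.randint(0, 1) for _ in range(BITS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=sum, reverse=True)
        parents = pop[: max(2, pop_size // 2)]   # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, BITS)      # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (random.random() < mutation_rate)
                     for bit in child]           # bit-flip mutation
            children.append(child)
        pop = children
    return max(sum(ind) for ind in pop)

# Outer level: evaluate a handful of parameter settings, keep the best.
settings = [(ps, mr) for ps in (10, 30) for mr in (0.001, 0.05, 0.4)]
scored = [(inner_ga(ps, mr), ps, mr) for ps, mr in settings]
best_score, best_ps, best_mr = max(scored)
print(best_score, best_ps, best_mr)
```

Replacing the sweep with a GA whose individuals encode (population size, mutation rate, crossover rate, ...) gives the recursive arrangement Mark asks about. -- ed.]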

--------------------------------

Date: Mon, 19 Dec 88 14:35:28 PST
From: Mike Rudnick <rudnick%cse.ogc.edu@RELAY.CS.NET>
Subject: Neural Net/GA Bibliography

This is a bibliography of work relating artificial neural networks
(ANNs) and genetic search.  The Unix bib format is used throughout.
It is organized/oriented for someone familiar with the ANN literature
but unfamiliar with the genetic search literature.

The first section is a bibliography of work applying or using genetic
search for artificial neural networks, including some artificial life
references.

The second section is a rather idiosyncratic bibliography of the
genetic search literature.  It reflects both my personal bias/interest
and my relative ignorance of the field.

I am a Ph.D. candidate in computer science at the Oregon Graduate Center.
My research interest is in using genetic search to tackle ANN scaling
issues.  My particular orientation is to view minimizing
interconnections as a central issue, partly motivated by VLSI
implementation issues.

I am starting a mailing list for those interested in applying genetic
search to/with/for ANNs.  Mail a request to
Neuro-evolution@cse.ogc.edu to have your name added to the list.  Mail
to Neuro-evolution-request@cse.ogc.edu for administrivia and
questions.

Many thanks to the people who contributed references to this
bibliography.  Corrections and additional references are welcome.

--------------------------------------------------------------------------
Mike Rudnick			CSnet:	rudnick@cse.ogc.edu
Computer Science & Eng. Dept.	ARPAnet:  rudnick%cse.ogc.edu@relay.cs.net
Oregon Graduate Center		BITNET:  rudnick%cse.ogc.edu@relay.cs.net
19600 N.W. von Neumann Dr.	UUCP:	{tektronix,verdix}!ogccse!rudnick
Beaverton, OR. 97006-1999	(503) 690-1121 X7390
These opinions are my own.
--------------------------------------------------------------------------


**************************** neuro-evolution references *********************

%A David Ackley
%D 1987
%I Kluwer Academic Publishers
%T A Connectionist Machine for Genetic Hillclimbing

%A Aviv Bergman
%A Michel Kerszberg
%B Proceedings of the IEEE First Annual Conference on Neural Networks
%C San Diego
%D 1987
%T Breeding Intelligent Automata

%A A. K. Dewdney
%D November 1985
%J Scientific American
%P 21-32
%T Computer Recreations:  Exploring the field of genetic algorithms in a primordial computer sea full of flibs

%A W. B. Dress
%A J. R. Knisley
%B 1987 IEEE Conference on Systems, Man, and Cybernetics (preprint)
%D October 1987
%T A Darwinian Approach to Artificial Neural Systems

%A W. B. Dress
%B Proceedings of the IEEE First Annual International Conference on Neural Networks
%D June 1987
%T Darwinian Optimization of Synthetic Neural Systems

%A A. Gierer
%D 1988
%J Biological Cybernetics
%P 13-21
%T Spatial organization and genetic information in brain development
%V 59

%A H. M. Hastings
%A S. Waner
%D 1984
%T Principles of evolutionary learning:  design for a stochastic neural network

%A Harold M. Hastings
%A Stefan Waner
%D January 1986
%J SIGART Newsletter
%N 95
%P 29-31
%T Biologically Motivated Machine Intelligence

%A Geoffrey E. Hinton
%A Steven J. Nowlan
%R CMU-CS-86-128
%T How Learning Can Guide Evolution

%J Electronic Engineering Times
%T Artificial Life:  Electronics Frontier

%A R. Colin Johnson
%D January 4, 1988
%J Electronic Engineering Times
%T Avoiding the AI Trap:  Synthetic Intelligence

%A Michel Kerszberg
%B Snowbird 1988
%D 1988
%T Genetics and epigenetics of neural wiring

%A Michel Kerszberg
%I Institut fur Festkorperforschung der
%T Genetic and Epigenetic Factors in Neural Circuit Wiring (preliminary)

%A Michel Kerszberg
%A Aviv Bergman
%B Computer Simulation in Brain Science, Copenhagen, Denmark
%D August 1986
%T The Evolution of Data Processing Abilities in Competing Automata

%A E. Mjolsness
%A D. H. Sharp
%D 1986
%J Proc. American Institute of Physics (Special Issue on Neural Nets)
%T A preliminary analysis of recursively generated networks

%A Eric Mjolsness
%A David H. Sharp
%A Bradley K. Alpert
%D March 1988
%R YALEU/DCS/TR-613, Yale
%R LA-UR-88-142, Los Alamos
%T Scaling, Machine Learning, and Genetic Neural Nets

%A Eric Mjolsness
%A Gene Gindi
%A Tony Zador
%A P. Anandan
%B Snowbird 1988
%T Objective Functions for Visual Recognition:  A Neural Network that Incorporates Inheritance and Abstraction

%A Eric Mjolsness
%A David H. Sharp
%A Bradley K. Alpert
%B Snowbird 1988
%T Genetic Parsimony in Neural Nets

%A Gerard Rinkus
%I Dept. of Math & Computer Science, Adelphi University
%T Learning as Natural Selection in a Sensori-Motor Being

%A Rod Rinkus
%D 1986
%I unpublished masters thesis, Hofstra University
%T Learning and Pattern Recognition in Sensori-Motor Beings

%A John Maynard Smith
%D 29 October 1987
%J Nature
%P 761-762
%T When learning guides evolution
%V 329

%A Peter Todd
%D 1988
%I Psychology Department, Stanford University
%T Evolutionary methods for connectionist architectures

%A S. Waner
%A H. M. Hastings
%D 1985
%T Evolutionary learning of complex modes of information processing

%A Darrell Whitley
%D 1988
%I Computer Science Dept., Colorado State University
%T Applying Genetic Algorithms to Neural Network Problems:  A Preliminary Report

%A Darrell Whitley
%A Joan Kauth
%B Proceedings of the 1988 Rocky Mountain Conference on Artificial Intelligence
%I Dept. of Computer Science, Colorado State University
%R CS-88-101
%T Genitor:  A Different Genetic Algorithm

%A S. W. Wilson
%C Pittsburgh, PA
%D 1985
%J Proc. of an Intl. Conf. on Genetic Algorithms and Their Applications
%T Knowledge growth in an artificial animal

%A S. W. Wilson
%C Cambridge, MA
%D 1987
%J Proc. Second Intl. Conf. on Genetic Algorithms and Their Applications
%T Genetic algorithms and biological development


******************* general genetic algorithm references ********************

%A T. J. A. Bennett
%D 1988
%J Cybernetics and Systems:  An International Journal
%P 61-81
%T Self-Organizing Systems and Transformational-Generative (TG) Grammar
%V 19

%A R. M. Brady
%D October 1985
%J Nature
%T Optimization strategies gleaned from biological evolution
%V 317

%A P. A. Cariani
%D no date
%I Department of System Science, Watson School of Engineering, Applied Sciences, and Technology, State University of New York at Binghamton, NY, 13901
%T Structural Preconditions for Open-Ended Learning through Machine Evolution

%D 1986
%E John L. Casti
%E Anders Karlqvist
%I Springer-Verlag
%T Complexity, Language, and Life:  Mathematical Approaches

%A J. P. Cohoon
%A S. U. Hegde
%A W. N. Martin
%A D. Richards
%B Proceedings of the 1987 International Conference on Genetic Algorithms and Their Applications
%D 1987
%P 148-154
%T Punctuated Equilibria:  A Parallel Genetic Algorithm

%A J. P. Cohoon
%A S. U. Hegde
%A W. N. Martin
%A D. Richards
%B to appear in 1988 IEEE International Conference on Computer-Aided Design
%D 1988
%T Floorplan Design using Distributed Genetic Algorithms

%D 1987
%E L. Davis
%I Pitman:  London
%T Genetic Algorithms and Simulated Annealing

%D 1985
%E John Grefenstette
%I Lawrence Erlbaum Assoc.
%T Proceedings of the First International Conference on Genetic Algorithms and Their Applications

%D 1987
%E John Grefenstette
%I Lawrence Erlbaum Assoc.
%T Genetic Algorithms and Their Applications:  Proceedings of the 2nd Intl. Conf. Genetic Algorithms

%A Kenneth De Jong
%B Genetic Algorithms and Their Applications:  Proceedings of the Second International Conference on Genetic Algorithms
%D 1987
%E John J. Grefenstette
%I Lawrence Erlbaum Associates
%T On Using Genetic Algorithms to Search Program Spaces

%A Eldredge
%A Gould
%T Time Frames
%X Theory of punctuated equilibria

%A A. J. Fenanzo, Jr.
%D July 1986
%J SIGART Newsletter
%N 97
%P 22
%T Darwinian Evolution as a Paradigm for AI Research

%A Jean-Charles Gille
%A Stefan Wegrzyn
%A Pierre Vidal
%D 1988
%J Int. J. Systems Sci
%N 6
%P 845-855
%T On some models for developmental systems, Part IX:  Generalized generating word and genetic code
%V 19

%A John J. Grefenstette
%D January/February 1986
%J IEEE Transactions on Systems, Man, and Cybernetics
%N 1
%P 122-128
%T Optimization of Control Parameters for Genetic Algorithms
%V SMC-16

%D 1985
%E John J. Grefenstette
%I Lawrence Erlbaum Associates
%T Genetic Algorithms and Their Applications:  Proceedings of the First International Conference on Genetic Algorithms

%C Cambridge, MA
%D July 28-31, 1987
%E John J. Grefenstette
%I Lawrence Erlbaum Associates
%T Genetic Algorithms and Their Applications:  Proceedings of the Second International Conference on Genetic Algorithms

%A David Goldberg
%I Addison-Wesley
%T Genetic Algorithms in Search, Optimization, and Machine Learning

%A Stephen Jay Gould
%D April 1982
%J Science
%N 23
%P 380
%T Darwinism and the Expansion of Evolutionary Theory
%V 216
%X Talks about issues in biological evolution.

%A John H. Holland
%B Adaptive Control of Ill Defined Systems
%D 1984
%E Oliver G. Selfridge and others
%I Plenum Press
%T Genetic Algorithms and Adaptation

%A J. H. Holland
%D 1975
%I U. Michigan Press
%T Adaptation in Natural and Artificial Systems

%A J. H. Holland
%A Holyoak
%A Nisbett
%A Thagard
%D 1986
%I MIT Press:  Cambridge, MA
%T Induction:  Processes of Inference, Learning, and Discovery

%A H. Muhlenbein
%A M. Gorges-Schleuter
%A O. Kramer
%D April 1988
%J Parallel Computing
%N 1
%P 65-85
%T Evolution algorithms in combinatorial optimization
%V 7

%A George G. Robertson
%B Genetic Algorithms and Their Applications:  Proceedings of the Second International Conference on Genetic Algorithms
%D July 28-31, 1987
%E John J. Grefenstette
%T Parallel Implementations of Genetic Algorithms in a Classifier System

--------------------------------

End of Genetic Algorithms Digest
********************************

