
Genetic Algorithms Digest   Tuesday, February 19 1991   Volume 5 : Issue 3

 - Send submissions to GA-List@AIC.NRL.NAVY.MIL
 - Send administrative requests to GA-List-Request@AIC.NRL.NAVY.MIL

Today's Topics:
	- ALECSYS - a parallel learning classifier system
	- Report on Function optimization
	- critical comment on the schema theorem
	- Question on Variable length individuals
	- Need info on GAs and Neural Nets
	- A reference on multivariate function?
	- SAB90 Proceedings - Now available!

******************************************************************************

CALENDAR OF GA-RELATED ACTIVITIES: (with GA-List issue reference)

4th Intl. Conference on Genetic Algorithms (v4n17)          Jul 14-17, 1991
AAAI91 - Deadline for submissions is Jan 30, 1991
Machine Learning Workshop - Deadline for submissions is March 1, 1991
Special Issue of MLJ on Reinforcement Learning - (v5n2)
	   Deadline for submissions is  March 1, 1991

(Send announcements of other activities to GA-List@aic.nrl.navy.mil)

******************************************************************************
------------------------------------------------------------------------------

Date: Mon, 28 Jan 91 16:57 MET
From: DORIGO%IPMEL1.POLIMI.IT@ICNUCEVM.CNUCE.CNR.IT
Subject: ALECSYS - a parallel learning classifier system

     As previously announced, at Politecnico di Milano, Italy, we have
     developed a parallel learning classifier system, called
     ALECSYS, running on a transputer system under
     Express 3.0 and 3L Parallel C.
     Short documentation is available together with the program.
     At the moment the program is available only on
     diskettes (3.5-inch, high-density, DOS-formatted).
     I distribute this program without charge, on your own
     diskettes. We believe the software works properly, but we would
     appreciate any comments and hints on how to improve it.
     Unfortunately we cannot support it, and I must admit that the
     documentation may not be completely satisfactory.
     To receive a copy of ALECSYS, please send me two
     diskettes and a letter repeating your request for the
     program and acknowledging your understanding that
     distribution is for research purposes only.  I will put
     ALECSYS on your diskettes and return them, along with the
     documentation. Please also send me a self-addressed
     envelope (of the kind used to mail magnetic media).

     Marco Dorigo
     Dipartimento di Elettronica
     Politecnico di Milano
     Via Ponzio 34/5
     20133 Milano
     Italia
     Tel.  +39-2-2399-3622
     e-mail:  dorigo@ipmel1.polimi.it

     Here are the title and an abstract of a forthcoming
     tech.report about Alecsys:

     Title: Alecsys: A Parallel Laboratory for Learning Classifier Systems
     Authors: Marco Dorigo & Enrico Sirtori

     Abstract
     A major problem with learning classifier systems is
     how to tackle real-world problems. A distinctive
     characteristic of many real-world problems is
     that they present a complexity that cannot
     be "user-defined", and which is generally orders of
     magnitude higher than in toy systems. The
     use of more powerful, parallel machines is a way to
     attack this problem from two sides:
     through an increase in the performance of standard
     algorithms, and through the design of a new structural
     organization of the learning system, one
     that should allow better control of the
     environmental complexity. In order to explore these
     possibilities we have built a tool, Alecsys,
     that can be used to implement parallel learning systems
     in a modular fashion. In Alecsys,
     parallelism is used both to increase system
     performance, by what we call low-level
     parallelization, and to allow the use of many different
     learning classifier systems simultaneously, by what we
     call high-level parallelization. In the paper we first present the
     system organization and the algorithms used, then we
     report some simulation results, and
     finally we give some hints for further work.
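
     A hypothetical sketch (not ALECSYS itself, whose code is on the
     diskettes) of the two levels of parallelism the abstract
     distinguishes, with Python threads standing in for transputers:

```python
from concurrent.futures import ThreadPoolExecutor

def match_rule(rule, message):
    # One classifier rule tested against a message; '#' is the usual
    # classifier-system wildcard that matches either bit.
    return all(c in ('#', m) for c, m in zip(rule, message))

def classifier_system_step(rules, message):
    # Low-level parallelization would split THIS loop (the match phase of
    # a single classifier system) across processors; here it runs serially.
    return [r for r in rules if match_rule(r, message)]

def run_modular_system(modules, message):
    # High-level parallelization: SEVERAL classifier systems (behavioral
    # modules) run simultaneously, each seeing the same input message.
    with ThreadPoolExecutor(max_workers=len(modules)) as pool:
        return list(pool.map(lambda m: classifier_system_step(m, message),
                             modules))

modules = [["1##0", "0000"], ["##11", "1111"]]   # two toy rule sets
matched = run_modular_system(modules, "1010")    # -> [['1##0'], []]
```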

----------------------------------

Date: Tue, 29 Jan 91 11:16:05 -0100
From: Heinz Muehlenbein <gmdzi!muehlen>
Subject: Report on Function optimization

    Anyone interested in the report should e-mail me or write to:

    H. Muehlenbein
    GMD
    P.O. Box 1240
    D-5205 Sankt Augustin 1
    Germany

    \title{The Parallel Genetic Algorithm as Function Optimizer}

    \author{H. M\"uhlenbein, M. Schomisch,\thanks{Gesellschaft f\"ur
    Mathematik und Datenverarbeitung mbH, Postfach 1240, D-5205 Sankt
    Augustin 1, e-mail muehlen@zi.gmd.dbp.de }
    J. Born \thanks{Institut f\"ur Informatik und Rechentechnik, Rudower
    Chaussee 5, D-1199 Berlin-Adlershof, e-mail born@iir-berlin.adw.dbp.de} }

    \begin{document}
    \sloppy 
    \maketitle

    \begin{abstract}
    In this paper, the parallel genetic algorithm PGA is applied to the
    optimization of continuous functions. The PGA uses a mixed strategy.
    Subpopulations try to locate good local minima. If a subpopulation does
    not progress after a number of generations, hill-climbing is done. Good
    local minima of a subpopulation are diffused to neighboring
    subpopulations. Many simulation results are given with popular test
    functions. The PGA is at least as good as other genetic algorithms on
    simple problems. A comparison with mathematical optimization methods is
    done for very large problems. Here a breakthrough can be reported. The
    PGA is able to find the global minimum of Rastrigin's function of
    dimension 400 on a 64 processor system! Furthermore, we give an
    example of a superlinear speedup. 

    \end{abstract}
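
    As a rough illustration of the strategy the abstract describes
    (independent subpopulations, with good individuals periodically
    "diffused" to neighbors; the hill-climbing phase is omitted for
    brevity), here is a toy island-model sketch on Rastrigin's function.
    The parameter values are illustrative, not those of the report.

```python
import math
import random

def rastrigin(x):
    # Rastrigin's function: highly multimodal, global minimum 0 at the origin
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi)
                             for xi in x)

def pga_sketch(dim=2, islands=4, pop=20, gens=60, migrate_every=10, seed=0):
    rng = random.Random(seed)
    # each island is an independent subpopulation
    subpops = [[[rng.uniform(-5.12, 5.12) for _ in range(dim)]
                for _ in range(pop)] for _ in range(islands)]
    for g in range(gens):
        for sp in subpops:
            sp.sort(key=rastrigin)
            parents = sp[:pop // 2]          # truncation selection (elitist)
            children = [[xi + rng.gauss(0, 0.3) for xi in rng.choice(parents)]
                        for _ in range(pop - len(parents))]
            sp[:] = parents + children
        if (g + 1) % migrate_every == 0:     # diffusion of good local minima
            bests = [min(sp, key=rastrigin) for sp in subpops]
            for i, sp in enumerate(subpops):
                sp[-1] = list(bests[(i - 1) % islands])  # neighbor's best
    return min(rastrigin(min(sp, key=rastrigin)) for sp in subpops)

best = pga_sketch()
```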

---------------------------------- 

Date: Tue, 29 Jan 91 13:31:11 -0100
From: Heinz Muehlenbein <gmdzi!muehlen>
Subject: critical comment on the schema theorem

   For a long time, I have observed the importance that many GA
   researchers attribute to the SCHEMA THEOREM. It seems to be something
   like a holy cow. The time is now ripe for a critical comment.
   There are two aspects of the theorem. The first aspect is the trivial
   estimate of the theorem itself; the second is the claim that, because
   of this estimate, the PGA allocates the search effort to many regions
   in parallel according to sound (or even optimal) principles.
   This second claim is not based on sound principles. This has also been
   observed by Grefenstette and Baker (ICGA '89, "How genetic algorithms
   work"). They write:
   "While Holland's analysis of the k-armed bandit problem seems
   correct, its application to the way genetic algorithms allocate trials
   to hyperplanes is unclear."
   What is the problem? The theorem uses the "observed" performance of
   schemata. No statistical error estimates are provided between the
   "observed" performance and the real performance of the hyperplanes. The
   conclusion "If the observed performance of Hi is consistently higher
   than the observed performance of Hj, Hi grows at an exponentially
   greater rate than Hj" is correct, but it is an A POSTERIORI ESTIMATE
   only; it cannot be applied a priori! Grefenstette correctly observes
   that the theorem should only be applied to hyperplanes that exhibit
   very little variance in their fitness functions!!
   If we agree on this statement, then we should ask ourselves: "What are
   the best genetic representations to have hyperplanes with little
   variance?"
   The second problem I have with the mainstream interpretation of the
   schema theorem is that it only estimates the disruption of good
   schemata. It gives no hint of why the GA is able to find new and better
   schemata that were not in the population before.
   A final remark: there is no single best strategy for global
   optimization. The GA uses a particular strategy, driven mainly by
   crossing-over and recombination. These operators define the problem
   space where the GA can be applied with some hope of success.
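
   For reference, the estimate under discussion, in the form usually
   quoted (my transcription; notation follows the standard textbook
   treatment):

```latex
E\left[ m(H, t+1) \right] \;\geq\;
m(H, t) \, \frac{f(H, t)}{\bar{f}(t)}
\left[ 1 - p_c \, \frac{\delta(H)}{\ell - 1} - o(H) \, p_m \right]
```

   where m(H,t) is the number of instances of schema H at generation t,
   f(H,t) is the "observed" average fitness of those instances (the
   quantity at issue above), \bar{f}(t) is the population average
   fitness, \delta(H) the defining length, o(H) the order, \ell the
   string length, and p_c, p_m the crossover and mutation probabilities.
   Note that f(H,t) is a sample average over the current population,
   which is exactly why no a priori statistical guarantee follows.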

--------------------------------------

Date: Wed, 13 Feb 91 10:11:30 pst
From: Kingsley Morse <kingsley@hpwrce.hp.com>
Subject: Question on Variable length individuals

    My genetic algorithm works best when all the chromosomes are the same 
    length.  Has anyone else tried crossing over chromosomes of different
    lengths?  What results did they (you) get?
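
    One commonly discussed operator for this situation, sketched here as
    a hypothetical illustration rather than a recommendation, is "cut and
    splice": each parent gets its own cut point, so offspring lengths
    differ from the parents' while the total genetic material is
    conserved.

```python
import random

def cut_and_splice(p1, p2, rng):
    # independent cut points, one per parent
    i = rng.randint(0, len(p1))
    j = rng.randint(0, len(p2))
    # children: head of one parent spliced to the tail of the other
    return p1[:i] + p2[j:], p2[:j] + p1[i:]

rng = random.Random(1)
c1, c2 = cut_and_splice("AAAAAA", "bbb", rng)
# individual lengths change, but len(c1) + len(c2) == len(p1) + len(p2)
```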

--------------------------------------

Date: Mon, 18 Feb 91 23:17:07 EST
From: xin@phys4.anu.edu.au (Xin Yao)
Subject: Need info on GAs and Neural Nets

    I am currently collecting papers/reports on the combination of
    evolutionary search procedures (e.g., GAs) with neural networks. I
    would be very grateful for any information about this research,
    especially the latest progress. I already have Weiss' and Rudnick's
    reports.

    Thank you all.

    Xin Yao
    Computer Sci. Lab.
    Research School of Phys. Sci. & Eng.
    Australian National Univ.
    GPO Box 4, Canberra, ACT 2601
    Australia
    Email: xin@cslab.anu.edu.au

--------------------------------------

Date: Tue, 19 Feb 91 09:24:20 CST
From: AARON KONSTAM <AKONSTAM%TRINITY.BITNET@ricevm1.rice.edu>
Subject: A reference on multivariate function?

    Does anyone out there have a reference (or more) to work on the
    optimization of multivariate functions where the authors studied
    the following question?
    If one represents all the independent variables in a multivariate
    function by a single binary chromosome string, there seem to be
    two ways to do the crossover: either over the whole chromosome
    string, or separately on each variable's representation (that is,
    the substring of the chromosome that represents one independent
    variable). I am interested in any work comparing the effects or
    efficiency of these two approaches.
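
    The two operators in question can be sketched as follows (a
    hypothetical illustration, not taken from any published comparison):
    a single cut anywhere in the concatenated string, versus an
    independent one-point crossover inside each variable's substring.

```python
import random

def whole_string_crossover(a, b, rng):
    # one cut point anywhere; it may fall inside a variable's substring,
    # mixing that variable's high- and low-order bits between parents
    k = rng.randint(1, len(a) - 1)
    return a[:k] + b[k:], b[:k] + a[k:]

def per_variable_crossover(a, b, nbits, rng):
    # an independent cut inside EACH variable's nbits-wide substring,
    # so no cut ever crosses a variable boundary
    c1, c2 = "", ""
    for s in range(0, len(a), nbits):
        xa, xb = a[s:s + nbits], b[s:s + nbits]
        k = rng.randint(1, nbits - 1)
        c1 += xa[:k] + xb[k:]
        c2 += xb[:k] + xa[k:]
    return c1, c2

rng = random.Random(0)
a, b = "11110000", "00001111"          # two variables, 4 bits each
w1, w2 = whole_string_crossover(a, b, rng)
v1, v2 = per_variable_crossover(a, b, 4, rng)
```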

    AARON KONSTAM
    Trinity University
    715 Stadium Dr.
    SAN ANTONIO, TX 78212
    (512)-736-7484
    AKONSTAM@TRINITY.BITNET

--------------------------------------

Date: Wed, 06 Feb 91 13:24:25 EST
From: Stewart Wilson <wilson@Think.COM>
Subject: SAB90 Proceedings

	*   *   *   *  SAB90 Proceedings Published  *   *   *   *  


	The Proceedings of the conference "Simulation of Adaptive
	Behavior: From Animals to Animats", which took place in September 
	in Paris, have been published by MIT Press/Bradford Books.
	This was the first major conference to bring together researchers
	ranging from ethology to robotics in order to further understanding 
	of the behaviors and underlying mechanisms that allow animals and, 
	potentially, robots to adapt and survive in uncertain environments.

	The Proceedings contain 62 papers, divided into sections 
	corresponding to the conference sessions.  The first section,
	The Animat Approach, contains papers on artificial animal
	research as a tool for understanding adaptive behavior and as a 
	new approach to artificial intelligence.  The next sections--
	Perception and Motor Control, Cognitive Maps and Internal World 
	Models, Motivation and Emotion, Action Selection and Behavioral 
	Sequences, Ontogeny and Learning, Collective Behaviors, and 
	Evolution of Behavior--address these themes from the perspective 
	of adaptive behavior in both animals and animats.  There follows a 
	large section on Architectures, Organizational Principles, and 
	Functional Approaches, containing several strong--and differing--
	theses on how to understand or achieve natural or artificial systems 
	with adaptive behavior.  A final section, Animats in Education, 
	describes novel and uncomplicated robot and simulation technologies 
	designed for teaching and research.

	There are several GA or CS related papers, including work by
	Lashon Booker, Federico Cecconi and Domenico Parisi, Robert Collins 
	and David Jefferson, Yuval Davidor, Renaud Dumeur, Inman Harvey,
	John Koza, Fabio De Luigi and Vittorio Maniezzo, Jan Paredis,
	Alexandre Parodi and Pierre Bonelli, Rick Riolo, Peter Todd and 
	Geoffrey Miller, and Dan Wood.

	In addition, there are papers by Michael Arbib, Randall Beer, 
	Rodney Brooks, Patrick Colgan, Jean-Louis Deneubourg, Alisdair 
	Houston, Long-Ji Lin, Pattie Maes, Gerhard Manteuffel, Maja Mataric, 
	David McFarland, Herbert Roitblat, Uwe Schnepf, Tim Smithers, 
	Luc Steels, Richard Sutton, Frederick Toates, David Waltz, and 
	many others. 

	An outstanding review paper by Jean-Arcady Meyer and Agnes Guillot
	leads off the Proceedings.  In all, the volume gives a comprehensive 
	and up-to-date view of adaptive behavior research world-wide.  
	There is even a paper by me!

	The following gives information about ordering:

	    "From Animals to Animats: Proceedings of The First International
	    Conference on Simulation of Adaptive Behavior", Jean-Arcady Meyer 
	    and Stewart W. Wilson, eds., Cambridge, MA: The MIT Press/Bradford 
	    Books (1991).

	    The book contains 62 papers, is 550 pages long, and costs $55.00.

	    Orders can be placed directly with the publisher:

	    The MIT Press/Bradford Books
	    55 Hayward Street
	    Cambridge, MA  02142

	    phone: (800) 356-0343
	    fax:   (617) 625-6660

	    Payment can be by Mastercard or VISA or signed purchase order.

	    From Europe, please contact the London office:
	    The MIT Press, Ltd
	    126 Buckingham Palace Road
	    London SW1W 9SA
	    phone:  44 717 309208
	    fax:    44 717 308728.

	    From elsewhere in the world, please contact Cambridge.
	    
--------------------------------
End of Genetic Algorithms Digest
********************************
