
Genetic Algorithms Digest   Thursday, November 5 1992   Volume 6 : Issue 37

 - Send submissions to GA-List@AIC.NRL.NAVY.MIL
 - Send administrative requests to GA-List-Request@AIC.NRL.NAVY.MIL
 - anonymous ftp archive: FTP.AIC.NRL.NAVY.MIL (Info in /pub/galist/FTP)

Today's Topics:
	- Re (v6n35): Antonisse's extension to Schema Notation (3 messages)
	- LCS workshop
	- Hype alert!! (Danger Will Robinson....)
	- Self referential problems
	- Sannier and Goodman
	- METAGEN project
	- p4: portable programming of parallel machines

----------------------------------------------------------------------
****************************************************************************

CALENDAR OF GA-RELATED ACTIVITIES: (with GA-List issue reference)

SAB92, From Animals to Animats, Honolulu (v6n6)                 Dec 07-11, 92
ICNN93, IEEE Intl. Conf. on Neural Networks, Calif (v6n24)      Mar 28-01, 93
ECML-93, European Conf. on Machine Learning, Vienna (v6n26)	Apr 05-07, 93
Foundations of Evolutionary Computation WS, Vienna (v6n34)      Apr     8, 93
Intl. Conf. on Neural Networks and GAs, Innsbruck (v6n22)       Apr 13-16, 93
ECAL-93, 2nd European Conference on A-Life, Brussels (v6n31)    May 24-26, 93
ICGA-93, Fifth Intl. Conf. on GAs, Urbana-Champaign (v6n29)     Jul 17-22, 93
COLT93, ACM Conf on Computational Learning Theory, UCSC (v6n34) Jul 26-28, 93

(Send announcements of other activities to GA-List@aic.nrl.navy.mil)

****************************************************************************
------------------------------

From: Nick Radcliffe <njr@epcc.edinburgh.ac.uk>
Date: Fri, 23 Oct 92 12:16:20 BST
Subject: Generalising Schema Analysis

  In v6n35 Pete Angeline summarises a paper by Jim Antonisse presented
  at ICGA-89 and asks:

  >  Does anyone know of any follow-up work on this interpretation of schema
  >  notation?  Has Antonisse followed up this work?  Is there any counter to
  >  this argument?  Thanks a bunch.

  This is an area I have worked in.  I am also (now) aware of work by Michael
  Vose and Gunar Liepins, Alden Wright, Larry Eshelman and Dave Schaffer and
  have heard rumours that James Baker has done something too.  I include
  references below to all the relevant papers in the area that I know of.

  Here is the essence of this line of generalisation of schema analysis.
  A general form of the schema theorem is:

	<m(H,t+1)>  >=  m(H,t) . [ f^(H,t) / f-(B,t) ] . [ 1 - SUM_{i=1..N} D_iH ]

  where
	<m(H,t+1)>	is the expected number of instances of
			(any) schema H at time t+1

	m(H,t)		is the number of members of the population
			which are instances of schema H at time t

	f^(H,t)		is the OBSERVED fitness of schema H at time t
			(i.e. the mean fitness of every member of the
			 population at time t which is in H)

	f-(B,t)		is the mean fitness of the entire population B
			at time t

	i		is used to index the N genetic operators in use

	D_iH		measures the probability of application and
			disruptive effect on schema H of genetic
			operator i.

  When written in this form, the "schema" theorem actually applies to
  *any* subset H of the search space provided only that the disruption
  coefficients are computed with respect to this subset H.   This suggests
  that simply counting the number of schemata of which a solution is an
  instance is of limited value.   Vose and Liepins have used this observation
  to seek to determine, given a particular recombination operator, which are
  the (generalised) schemata the operator can manipulate effectively with
  regard to disruption rates.   I have done the converse, arguing that
  selecting a set of generalised schemata which group together solutions
  with related performance is the key, and devising operators and rules
  for generating operators which manipulate *given* sets of generalised
  schemata effectively.   Vose and Liepins term these generalised schemata
  predicates, whereas I call them formae.
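  To make the bound concrete, here is a small numeric sketch in Python (my
  own illustration, not taken from any of the papers below; the disruption
  coefficients D_iH are treated as given constants, though in reality they
  depend on the operators, their application rates and H):

```python
# Toy evaluation of the generalised schema theorem's right-hand side.
# H is written in the usual 1/0/# notation here, but any membership
# test (i.e. any subset of the search space) would do, which is the
# point of the generalisation.

def matches(schema, string):
    # A string is an instance of H if it agrees on every fixed position.
    return all(c == "#" or c == s for c, s in zip(schema, string))

def schema_bound(pop, fitness, schema, disruptions):
    members = [s for s in pop if matches(schema, s)]
    m = len(members)                                   # m(H,t)
    f_hat = sum(map(fitness, members)) / m             # f^(H,t), observed
    f_bar = sum(map(fitness, pop)) / len(pop)          # f-(B,t), mean
    return m * (f_hat / f_bar) * (1 - sum(disruptions))

pop = ["111", "110", "011", "000"]
fitness = lambda s: s.count("1")        # toy fitness: number of 1s
bound = schema_bound(pop, fitness, "1##", [0.10, 0.05])
# members of "1##" are "111" and "110": m = 2, f^ = 2.5, f- = 1.75,
# so the bound is 2 * (2.5/1.75) * 0.85 = 17/7, roughly 2.43
```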

  Bibliography

  Jim Antonisse.  A new interpretation of schema notation that overturns the
  binary coding constraint.
  In Proceedings of the Third International Conference on Genetic Algorithms.
  Morgan Kaufmann (San Mateo), 1989.

  Larry J. Eshelman and J. David Schaffer.  Real-coded genetic algorithms
  and interval schemata.
  In Darrell Whitley, editor, Foundations of Genetic Algorithms II,
  Morgan Kaufmann (San Mateo, CA), 1992.

  Nicholas J. Radcliffe. Genetic Neural Networks on MIMD Computers.
  PhD thesis, University of Edinburgh, 1990.

  Nicholas J. Radcliffe.  Equivalence class analysis of genetic algorithms.
  Complex Systems, 5(2), pages 183-205, 1991.

  Nicholas J. Radcliffe. Forma analysis and random respectful recombination.
  In Proceedings of the Fourth International Conference on Genetic Algorithms,
  pages 222-229. Morgan Kaufmann (San Mateo), 1991.

  Nicholas J. Radcliffe.  The algebra of genetic algorithms.
  Technical Report EPCC-TR92-11, in preparation, Edinburgh Parallel Computing
  Centre, University of Edinburgh, 1992.

  Nicholas J. Radcliffe. Genetic set recombination. In Darrell Whitley, editor,
  Foundations of Genetic Algorithms II. Morgan Kaufmann, 1992.

  Nicholas J. Radcliffe. Genetic set recombination and its application to neural
  network topology optimisation. Neural Computing and Applications, 1(1), 1992.

  Michael D. Vose and Gunar E. Liepins. Schema disruption.
  In Proceedings of the Fourth International Conference on Genetic Algorithms,
  pages 237-243. Morgan Kaufmann (San Mateo), 1991.

  Michael D. Vose. Generalizing the notion of schema in genetic algorithms.
  Artificial Intelligence, 1991.

  Alden H. Wright.  Genetic algorithms for real parameter optimisation.
  In Gregory J. E. Rawlins, editor, Foundations of Genetic Algorithms,
  pages 205-220.  Morgan Kaufmann (San Mateo, CA), 1990.


  Nick Radcliffe		Edinburgh Parallel Computing Centre
				University of Edinburgh
  email:  njr@epcc.ed.ac.uk	King's Buildings
  fax:	  +44 31 650 6555	EH9 3JZ
  phone:  +44 31 650 5020	Scotland

------------------------------

From: jvr@micas.inescn.pt (Joao Vasco Ranito)
Date: Mon, 26 Oct 92 14:49:50 GMT
Subject: Re: Antonisse's extension to Schema Notation

  Peter Angeline writes:
  > From: Peter J Angeline <pja@cis.ohio-state.edu>
  > Date: Wed, 30 Sep 92 12:27:49 -0400
  > Subject: Antonisse's extension to Schema Notation
  >
  >  Jim Antonisse has a paper in the 1989 GA conference where he argues that
  >  the interpretation of the don't care symbol "#" used in schema theory is
  >  [...]

  Hey! I must stop reading this list or I will never finish my MSc thesis!
  So, the higher the number of symbols, the higher the number of schemata!
  But that's a bit counter-intuitive (at least to my Holland-biased
  mind... :-)

  Seriously, is that theory actually proven? How do I get a copy of that
  article?  I feel there is something wrong with that idea but I don't know
  exactly what it is... Why consider all possible subsets? I must confess I
  don't understand that: for each gene we have to choose one from N possible
  symbols, not one from each possible subset of possible symbols...

  Somebody help me!
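  [One way to see where the subsets come from, sketched by the editors from
  the digest's description of Antonisse's reading rather than from the paper
  itself: under Holland's interpretation a schema position is either a fixed
  symbol or "#", while under Antonisse's it may stand for any nonempty
  subset of the alphabet.  For a binary alphabet the two counts coincide,
  which is why the difference is invisible in the classical setting:]

```python
# k = alphabet size, l = string length.  All counts are exact.

def holland_total(k, l):
    # each position: one of the k symbols, or the wildcard '#'
    return (k + 1) ** l

def antonisse_total(k, l):
    # each position: any nonempty subset of the k symbols
    return (2 ** k - 1) ** l

def holland_per_string(l):
    # schemata matching a fixed string: each position is that
    # string's own symbol, or '#'
    return 2 ** l

def antonisse_per_string(k, l):
    # at each position, any subset that contains the string's symbol
    return (2 ** (k - 1)) ** l

# Binary alphabet: the two interpretations agree...
assert holland_total(2, 8) == antonisse_total(2, 8) == 6561
# ...but for k > 2 Antonisse's reading yields far more schemata,
# which is the basis of his argument against minimal alphabets.
assert antonisse_total(4, 8) > holland_total(4, 8)
```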

  jvr@micas.inescn.pt      |                                                    
  Joao Vasco Ranito        |          [This page left blank.]
  INESC                    |                                                    
  Largo Mompilher 22       |                                                    
  4000 Porto               |                                                    
  Portugal                 |                                                    
  Tel: +351-2-321006       |                                                    
  Fax: +351-2-318692       |                                                    

------------------------------

From: jima@starbase.MITRE.ORG (Jim Antonisse)
Date: Thu, 29 Oct 92 13:40:42 EST
Subject: Re: Antonisse's extension to Schema Notation

  Pete,

  Yes, I still believe what I wrote in "A New Interpretation of
  Schema Notation...", that the analytic framework used to justify
  "minimal alphabets" is misleading.

  The issue addressed there has to do with the expressivity of the
  representation language that encodes a domain of search.  There are at
  least two critical issues having to do with expressivity. One is the
  symbolic encoding issue discussed in the above paper, i.e., how does
  the choice of domain "atoms", the alphabet in the GA, affect the
  coverage power of a search.  The other is the structural encoding
  issue, i.e., what are legal combinations among those atoms, what are
  legal strings in the alphabet.

  I view the latter issue as more fundamental than symbolic encoding.
  The former addresses whether "101" or "5" should denote five.  The
  latter addresses whether encodings like "0.5E1" can be used, where 
  lots of lexically close strings, like "0,5F1", "0.5EZ", etc., can't.

  There are a number of ways to approach this.  I chose formal grammars
  because they were designed to denote subsets of a population of
  strings in the first place, and we know lots about them.  My first
  whack at a grammar-based approach to the GA appeared in the
  proceedings of the first FOGA workshop.

  I have since had my hands full working in the area of robotic stereo
  vision and have recently become ensnared in some neural net work.
  However, I hope to put some time into a paper that draws out the
  relation between standard GA representations and Regular Expressions.  If
  all goes well, that'll be ready for the next GA conference.
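  [The correspondence is easy to illustrate; the sketch below is the
  editors' own, not drawn from Jim's papers.  A classical schema is a very
  restricted regular expression in which each "#" becomes a character class
  over the alphabet:]

```python
import re

def schema_to_regex(schema, alphabet="01"):
    # '#' matches any symbol of the alphabet; fixed positions match
    # themselves.  (re.escape guards symbols that happen to be regex
    # metacharacters.)
    cls = "[" + re.escape(alphabet) + "]"
    return "".join(cls if c == "#" else re.escape(c) for c in schema)

# "10##" denotes the strings beginning "10":
# "1000", "1001", "1010", "1011"
pat = re.compile(schema_to_regex("10##") + r"\Z")
```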

  Jim

------------------------------

From: Robert Elliott Smith <rob@comec4.mh.ua.edu>
Date: Mon, 26 Oct 92 11:20:33 -0600
Subject: Re: LCS workshop

   On October 6-9, 1992, at Johnson Space Center in Houston, Texas, a
   group of researchers met at the First International Workshop on
   Learning Classifier Systems.  The meeting was an effort to have a
   small, informed group focus on the issues involved in LCSs.
   The meeting was self-organized: the core group was selected by
   Manuel Valenzuela-Rendon and myself, and that group recommended
   other invitees.  Some invitees were joined by student co-authors.
   The final participant list included:

   Lashon Booker, Mitre Corporation
   Lawrence Davis, TICA Associates
   Lisa Desjarlais, University of New Mexico
   Stephanie Forrest, University of New Mexico
   David Goldberg, University of Illinois at Urbana-Champaign
   Jeff Horn,  University of Illinois at Urbana-Champaign
   Kalyanmoy Deb, University of Illinois at Urbana-Champaign
   Ruth McDonald, University of New Mexico
   Daniele Montanari, TEMA SpA
   Rick Riolo, University of Michigan
   Robert E. Smith, University of Alabama
   Stephen F. Smith, Carnegie Mellon University
   David Greene, Carnegie Mellon University
   Manuel Valenzuela-Rendon, ITESM
   Stewart Wilson, Rowland Institute

   The meeting consisted of presentations from extended abstracts, and
   "brainstorming"-style discussion sessions. 

   The workshop was very productive. 
   We are currently organizing a journal special issue to present the
   results.

   Rob Smith

   Robert Elliott Smith
       Department of Engineering Science and Mechanics
       Room 210 Hardaway Hall
       The University of Alabama
       Box 870278
       Tuscaloosa, Alabama 35487
   <<email>> rob@comec4.mh.ua.edu
   <<phone>> (205) 348-1618
   <<fax>> (205) 348-6419    

------------------------------

From: Robert Elliott Smith <rob@comec4.mh.ua.edu>
Date: Mon, 26 Oct 92 14:15:33 -0600
Subject: Hype alert!! (Danger Will Robinson....)

  "There's no need to worry about whether Evolver can optimize your model--
      it can . . . Because of its genetic algorithm . . ."
		  --- Axcelis "Evolver" advertisement brochure

  Hi Everyone,
   I just got a brochure for "Evolver", and it has me a bit concerned. I'm
  sure we are all aware of how hype can affect a growing field. Take the "AI
  Winter" that has struck the expert systems community.

  Most people in the GA community are responsible, and aware of the
  strengths and weaknesses of GAs.  I'm sure we're all working towards
  increasing the strengths and ferreting out the weaknesses.  However, GAs
  are undoubtedly ready to solve some "real-world" problems despite the
  weaknesses that exist.  Commercial applications like "Evolver" are
  inevitable, and desirable. But claims of *universal* effectiveness are
  sure to lead to disappointment.  I worry that such disappointments could
  bring the chilly winds of a "GA Winter" before our favorite algorithm has
  time to blossom.

  This letter is an attempt to open the discussion.  The question is, can we
  as a community do anything to prevent this from happening?  I don't have
  an answer, do you?  Does anyone know the people behind "Evolver"? I'd like
  to hear their comments.  RES

  Robert Elliott Smith
      Department of Engineering Science and Mechanics
      Room 210 Hardaway Hall
      The University of Alabama
      Box 870278
      Tuscaloosa, Alabama 35487
  <<email>> rob@comec4.mh.ua.edu
  <<phone>> (205) 348-1618
  <<fax>> (205) 348-6419    

------------------------------

From: JVORSTER@dos-lan.cs.up.ac.za
Date: 27 Oct 92 11:27:49 WET-2
Subject: Self referential problems

  I recently tried to solve self-referential problems using a GA. I
  found them in Hofstadter's Gödel, Escher, Bach.

  The problems go much like this:

  In this sentence there are:
     twenty-five a's, seven b's .... and one z.

  AND

  In this sequence of numbers there are
     7 : 1's
     :
     2 : 7's
     :
     1 : 9.

  The second one isn't all that complex and can be solved very quickly
  (even by humans (2 solutions)). However, the first one is much more
  interesting since its complexity is much higher. I thought the GA
  would have serious problems solving it. I started with only one
  mutation operator, and to my surprise it solved the puzzle! I ran it
  about five times, and every time it found a solution. This I found
  very odd. The GA always found a solution in under 200 generations
  for the second puzzle, and in under 1000 for the first.

  Did I misunderstand the complexity of the problem?
  Did anybody else study some of these problems?
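  [The second puzzle can be checked, and attacked, mechanically.  A minimal
  sketch from the editors, assuming the convention that both the digits of
  the counts and the nine labels are tallied (other conventions change the
  answers); the repair-style search is an illustration, not a reconstruction
  of Johannes's mutation operator:]

```python
import random

def actual_counts(c):
    # Digits occurring in "... c1 1's, c2 2's, ..., c9 9's": the digits
    # of each count value, plus one label occurrence of each of 1..9.
    digits = "".join(str(v) for v in c) + "123456789"
    return tuple(digits.count(str(d)) for d in range(1, 10))

def is_self_descriptive(c):
    return actual_counts(c) == tuple(c)

def repair_search(seed=0, max_steps=20000):
    # Pick a random claim and make it true; repeat.  Crude, and it can
    # cycle (hence max_steps), but it often reaches a fixed point fast.
    rng = random.Random(seed)
    c = [rng.randint(1, 9) for _ in range(9)]
    for _ in range(max_steps):
        if is_self_descriptive(c):
            return tuple(c)
        d = rng.randrange(9)
        c[d] = actual_counts(c)[d]
    return None

# One solution under this convention: six 1's, three 2's, two 3's,
# one 4, one 5, two 6's, one 7, one 8, one 9.
assert is_self_descriptive((6, 3, 2, 1, 1, 2, 1, 1, 1))
```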

  Johannes
  JVORSTER@cs.up.ac.za

------------------------------

From: t clark <eeclark@pyramid.swansea.ac.uk>
Date: Wed, 28 Oct 92 11:25:02 GMT
Subject: Sannier and Goodman

  Does anyone have any information regarding the work of Adrian Sannier and
  Erik Goodman in using genetic algorithms and classifier systems in
  distributed environments.  I have the following papers;

	  Genetic Learning Procedures in Distributed Environments

	  Midgard: A Genetic Approach to Adaptive Load Balancing for
	  Distributed Systems

  but I would be very interested to hear about any further work in this area.

			  Thanks,

				  Tim Clark
				  eeclark@uk.ac.swan.pyr

------------------------------

From: Dragan Cvetkovic <dragan@mpi-sb.mpg.de>
Date: Thu, 29 Oct 92 15:15:43 MET
Subject: METAGEN project

		  Hi!
	  I recently ran across the article "A Hybrid Genetic Algorithm for a
  Logic Problem" by R. A. Young and A. Reel, presented at ECAI-90, and I would
  like to ask whether there are other applications of genetic algorithms to
  the SAT problem.  Also, what is the current status of their METAGEN
  algorithm?
		  Thanks in advance, Dragan Cvetkovic


       Dragan Cvetkovic,                  | To be or not to be 
       dragan@mpi-sb.mpg.de (or)          | is true.    
       Dragan.Cvetkovic@mpi-sb.mpg.de     |      G. Boole

------------------------------

From: levine@antares.mcs.anl.gov (David Levine)
Date: Mon, 26 Oct 92 14:37:28 CST
Subject: p4: portable programming of parallel machines

  For anyone interested in the p4 system, mentioned in the last GA list in
  the context of parallel GA research, here is a short writeup and ftp info.
  --dave levine

  p4 is a third-generation system for portable programming in C and Fortran
  of parallel machines. Developed at Argonne National Laboratory, the p4
  system provides three basic computational models: monitors for the
  shared-memory model, message-passing for the distributed-memory model, and
  integrated support for combining the two models.  Important features of p4
  include asynchronous communication of large messages, global operations,
  xdr for communication in a heterogeneous network, both master-slave and
  SPMD models for message-passing programs, an optional secure server,
  error/interrupt handling, high-resolution clock support, System V IPC
  support, user-control of message-passing/buffer-management, and optional
  automatic logging of events for instrumentation with Upshot, an
  X-Windows-based trace examination facility.

  p4 is intended to be portable, simple to install and use, and efficient.
  It can be used to program heterogeneous workstation networks, advanced
  parallel supercomputers like the Intel Touchstone DELTA and the Alliant
  Campus HiPPI-based system, and single shared-memory multiprocessors.  It
  has been installed on the following machines: Sequent Symmetry, Encore
  Multimax, Alliant FX/8, FX/800, and FX/2800, CRAY X/MP, Sun, NeXT, DEC,
  Silicon Graphics, and IBM RS6000 workstations, Stardent Titan, BBN GP-1000
  and TC-2000, Intel iPSC/860, Intel Touchstone DELTA, and Alliant Campus.
  p4 is not difficult to port and is installed on most new parallel
  processor systems as they become available.

  p4 is in the public domain.  You can obtain the complete distribution by
  anonymous ftp from info.mcs.anl.gov.  Take the file p4.tar.Z from the
  directory pub/p4.  (Another file, called pub/upshot/upshot.tar.Z, contains
  the X-Windows program Upshot for graphically displaying p4 logs.)

------------------------------
End of Genetic Algorithms Digest
******************************
