Genetic Algorithms Digest    Tuesday, 24 May 1988    Volume 2 : Issue 14

 - Send submissions to GA-List@NRL-AIC.ARPA
 - Send administrative requests to GA-List-Request@NRL-AIC.ARPA

Today's Topics:
	- Plans to meet at Machine Learning Conference
	- Request for introductory material
	- GA Research Poll
	- New address for Robertson
	- Abstracts of papers

--------------------------------

Date: Tue, 24 May 88 10:22:12 EDT
From: John Grefenstette <gref@nrl-aic.arpa>
Subject: Participants in ML Conference

Here's a list of GA-List members who have indicated that they will
attend the Machine Learning Conference in Ann Arbor next month.
I've volunteered to lead an informal discussion session on GAs
during the evening of June 13.  Times and places will be announced
at the conference.  All are welcome to participate.

John Grefenstette	NRL
Ken De Jong		NRL
Stewart Wilson		Rowland Institute for Science
Dave Davis		BBN
George Robertson	Xerox PARC
David Sirag		United Technologies Research Center
Dave Goldberg		University of Alabama
Rob Smith		University of Alabama
Manuel Valenzuela	University of Alabama
Rick Riolo		Univ. Michigan
John Holland		Univ. Michigan
Jim Antonisse		Mitre
David West		Industrial Technology Institute
Roxana B. Kamen		NOSC

--------------------------------

Date: 12 May 88 14:36 EST
From: STERRITT%SDEVAX.decnet@ge-crd.arpa
Subject: GA Books & Papers request

Hello,
	I am just starting to find out about Genetic Algorithms, and
would appreciate any references to good, fairly easy to understand books
or articles about GA that are available.  I'm doing 'regular' AI research,
so reasonable levels of difficulty are acceptable.
	thanks,
	chris sterritt
	GE Astro Space Div.
	sterritt%sdevax.decnet@ge-crd.arpa	(on arpanet)

--------------------------------

Date: Mon, 16 May 88 12:44:22 CDT
From: Rebecca Selke <selke@rice.edu>
Subject: Introductory material on GAs

I am interested in looking at some introductory material on genetic algorithms.
Could you give me a pointer in the right direction?  I have been told a
little about them, but I need more information.

Thanks for your help...

Becky Selke
selke@rice.edu

--------------------------------

Date:     Fri, 20 May 88 9:38:21 EDT
From: Philip Resnik <presnik@labs-n.bbn.com>
Subject:  GAs

Please add me to the GA-list.  Although as yet I have only limited
experience with GAs, I'm hoping to learn and do more.  I'm particularly
interested in exploring the possibility of applying GAs (or GA/neural-net
combinations) to problems in natural language processing.

Thanks,

Philip Resnik
presnik@bbn.com

--------------------------------

Date: Thu, 19 May 88 09:58 PDT
From: jan cornish <cornish@russian.spa.symbolics.com>
Subject: Statement of interest

I am new to this field and would like to get some pointers.

The main literature I have at this point is:

(1) L. Davis, "Genetic Algorithms and Simulated Annealing"
(2) J. Holland, "Induction"

The applications I am interested in are 

(1) pallet loading
(2) project scheduling
(3) econometric modeling

More generally, but in support of the above problem domains, I am
interested in neural nets, self-organization and machine learning.

Prior to my hearing about "Genetic Algorithms", I came upon two related
developments, described below. Can any of you comment on any related work? 

The first was a little microworld (discrete event simulation) hack in
which a population of creatures, internally modeled by simple neural
nets, was evolved by a genetic algorithm, in the Darwinian sense of
natural selection. I was confused at first, thinking that the creatures
learned in the normal neural net sense of having their link weights
adjusted.  Ironically, this would correspond to Lamarckian evolution, if
my high school biology serves me right (Santa Monica, 1963). Instead,
"successful" creatures were mated in a manner in which certain link
weights were passed on to their offspring.  (Apart from non-starvation
and non-drowning, "success" was defined as "exhibits behaviour which
amuses the user".)
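The distinction Jan draws can be made concrete with a toy sketch.  The
following is a minimal, hypothetical illustration (not the actual
microworld program, whose details we don't have): each creature's genome
is its vector of link weights, and offspring inherit weights from their
parents via crossover.  The weights are never adjusted during a
creature's "lifetime", which is what keeps the scheme Darwinian rather
than Lamarckian.  The fitness function here is an arbitrary stand-in for
"survives and amuses the user".

```python
import random

random.seed(0)

N_WEIGHTS = 6      # link weights per creature's toy neural net
POP_SIZE = 20
GENERATIONS = 30

def fitness(weights):
    # Hypothetical stand-in for "survives and amuses the user":
    # score a creature by how close its weights come to a fixed target.
    target = [0.5] * N_WEIGHTS
    return -sum((w - t) ** 2 for w, t in zip(weights, target))

def mate(a, b):
    # Darwinian inheritance: the offspring receives selected link
    # weights from each parent (uniform crossover).  Copying weights
    # tuned during a parent's lifetime would be Lamarckian instead.
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(w, rate=0.1):
    return [x + random.gauss(0, 0.1) if random.random() < rate else x
            for x in w]

pop = [[random.uniform(-1, 1) for _ in range(N_WEIGHTS)]
       for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[: POP_SIZE // 2]        # "successful" creatures survive
    children = [mutate(mate(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    pop = parents + children

best = max(pop, key=fitness)
print(round(-fitness(best), 3))
```

Even this crude truncation-selection loop converges toward the target
weights, with no within-lifetime learning at all.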

Secondly, I came upon a book (I don't have it at hand to give you the
exact reference) which discussed a cluster of techniques known as the
"Group Method of Data Handling", an exciting generalization of linear
regression.  It seems to use a kind of genetic algorithm applied to
polynomials.

Jan Cornish
Symbolics Consulting Group (415)969-0955

[ You might check out the Proceedings of the First/Second International
Conferences on Genetic Algorithms and Their Applications for papers
related to your interests.  Both proceedings are now available from
Lawrence Erlbaum Associates.

The book you have in mind is probably "Self-Organizing Methods
in Modeling: GMDH Type Algorithms" edited by S. J. Farlow, based
on the work of A. G. Ivakhnenko.  This looks like interesting work,
but I'm not sure how it relates to GAs.  Has anybody studied this
method in detail? -- JJG ]

--------------------------------

Date: 23 May 88 09:20:19 +1000 (Mon)
From: Bob Marks <bobm@agsm.unsw.oz>
Subject: GA Research Poll

APPLICATION AREA:  Using the GA to breed hybrid strategies for
	positive-sum, n-person, repeated games.

GENERAL APPROACH:  Following Axelrod (Davis (ed.), 1987, Ch.3),
	I use the Prisoner's Dilemma (PD) as an environment in
	which to consider cooperation v. competition.  In the
	2-person PD with dichotomous choice, translation from
	the chromosome to action is straightforward.  In more
	general environments, the genetic information necessary
	will be greater than the present 70 bits (with a 3-move
	memory in the 2-person repeated PD).

GA TOOL:  off-the-shelf GENESIS 4.5 (Grefenstette 1987)

RESULTS:  I have achieved the breeding of strategies in the
	2-person repeated PD game against some simple decision
	rules.  Next: to scale up the complexity of the environment.
	And then: 3- or 4-person repeated games with continuous
	choice variables (and much more genetic information).
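For readers unfamiliar with the 70-bit figure above: in Axelrod's
encoding for the 2-person PD with a 3-move memory, each of the
4**3 = 64 possible joint histories indexes one bit of the chromosome
(cooperate or defect), and 6 further bits supply the assumed pre-game
history, giving 70 bits in all.  A minimal sketch of the
chromosome-to-action translation (my own illustrative code, not the
GENESIS setup Marks uses):

```python
import random

random.seed(1)

# With a 3-move memory in the 2-person PD, each of the 4**3 = 64 joint
# histories indexes one bit of the chromosome (0 = cooperate,
# 1 = defect); 6 premise bits give the assumed pre-game history,
# for 70 bits in all.
MEMORY = 3
N_HISTORIES = 4 ** MEMORY              # 64
CHROM_LEN = N_HISTORIES + 2 * MEMORY   # 70

def history_index(moves):
    # moves: list of (my_move, their_move) pairs, each move 0 or 1.
    idx = 0
    for mine, theirs in moves:
        idx = idx * 4 + mine * 2 + theirs
    return idx

def play(chrom, opponent, rounds=20):
    # Score chrom against an opponent strategy (a function of the
    # recent history).  Standard PD payoffs: R=3, S=0, T=5, P=1.
    payoff = {(0, 0): 3, (0, 1): 0, (1, 0): 5, (1, 1): 1}
    premise = chrom[N_HISTORIES:]
    hist = [(premise[2 * i], premise[2 * i + 1]) for i in range(MEMORY)]
    total = 0
    for _ in range(rounds):
        mine = chrom[history_index(hist)]   # look up the move to make
        theirs = opponent(hist)
        total += payoff[(mine, theirs)]
        hist = hist[1:] + [(mine, theirs)]  # slide the 3-move window
    return total

def tit_for_tat(hist):
    # Repeat the chromosome-player's previous move.
    return hist[-1][0]

chrom = [random.randint(0, 1) for _ in range(CHROM_LEN)]
print(play(chrom, tit_for_tat))
```

The GA then treats `play` (summed over a set of opponents) as the
fitness function; scaling to n-person or continuous-choice games, as
Marks notes, blows up the amount of genetic information needed.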

--------------------------------

Date: Thu, 12 May 88 11:21 EDT
From: George Robertson <George@think.com>
Subject: New address

John,
  I will be at the ML Conference.  I'm giving a paper on
Population Size in Classifier Systems.

  You should also know that I have accepted a position at
Xerox PARC, and will be starting with them on May 23.
My new email address is "Robertson.pa@Xerox.com".

  George

--------------------------------

Date: Sun 15 May 88 09:44:01-EDT
From: DDAVIS@g.bbn.com
Subject: GAs at BBN

I'm going to the ML conference in Ann Arbor.  Will give a paper on
a new approach to noise with classifier systems.  There are many
things going on here, with Craig Shaefer, and Gil Syswerda 
(a Michigan graduate who has studied at the Source), and a cast
of others working on GA stuff.  George Robertson and Steve Smith
at Thinking Machines, and Craig and Stewart from Rowland Institute
for Science, plus some of us from BBN, are meeting once a month
to discuss work in progress, and we are backed up.  Rich Sutton
has been sitting in too.  There is a lot that seems ready to happen.
Talk to you in Ann Arbor!

David Davis.

--------------------------------

Date: Mon, 16 May 88 18:02:22 EDT
From: Stewart Wilson <wilson@think.com>
Subject: Abstract

Here are the title and abstract of Rowland Institute Research Memo No. 54r, 
which I just completed.

	Bid Competition and Specificity Reconsidered

   	  Experiments were conducted with respect to two classifier system
	mechanisms: the bid competition and the use of specificity in bidding
	and payments.  The experiments employed a simplified classifier
	system and so may not accurately reflect the behavior of the standard
	system.  Nevertheless, the results indicated that, in general,
	(1) specificity should not be factored into amounts deducted from a
	classifier's strength, (2) the bid competition does not improve
	performance and does not encourage default hierarchies, and (3)
	default hierarchies will form under a somewhat different algorithm
	than the standard one.


Stewart

--------------------------------

Date: Tue, 17 May 88 16:36:39 EDT
From: Martina Gorges-Schleuter <gorges%zix.gmd.dbp.de@relay.cs.net>
Subject: Abstract

Evolution algorithms in combinatorial optimization,
H.Muehlenbein, M.Gorges-Schleuter, O.Kraemer,
published in Parallel Computing 7 (1988) 65-85.

Abstract: 
In this paper we discuss the dynamics of three different classes of
evolution algorithms: network algorithms derived from the replicator
equation, Darwinian algorithms, and genetic algorithms inheriting genetic
information.
We present a new genetic algorithm which relies on intelligent evolution
of individuals. With this algorithm we have computed the best solution
of a famous traveling salesman problem.  (The solution of 51.11 is published
in the Research Report 1988 of GMD.)  The algorithm is inherently parallel.
Results are taken from an implementation on an Encore Supermax System.

At the moment I am working on an implementation in Occam for a
Transputer-based multiprocessor.  The main goal is to establish the roles
of the recombination, selection and mutation parameters.  This should
result in a tool with adaptively controlled parameters, usable for the
solution of different kinds of optimization problems.

--------------------------------

Date: Tue, 24 May 88 09:01:20 EDT
From: Lashon Booker <booker@nrl-aic.arpa>
Subject: Abstract of recent paper

I've submitted a paper to Machine Learning that gives
an updated look at ideas from my Ph.D. research. Here
is the abstract.

Lashon
------

    Classifier Systems that Learn Empirical World Models

                          Abstract

Most classifier systems learn a collection of stimulus-response
rules, each of which directly acts on the problem-solving environment
and accrues strength proportional to the overt reward expected from
the behavioral sequences the rule participates in.  Gofer is an
example of a classifier system that builds an internal model of its
environment, using rules to represent objects, goals, and
relationships.  The model is used to direct behavior.  Learning is
triggered whenever the model proves to be an inadequate basis for
generating behavior in a given situation.  This means that overt
external rewards are not necessarily the only or the most useful
source of feedback for inductive change.

--------------------------------

Date: Tue, 24 May 88 09:56:12 EDT
From: John Grefenstette <gref@nrl-aic.arpa>
Subject: Abstract of AAAI paper

I will present a paper on "Credit assignment in genetic learning systems"
at AAAI-88 in August.  A more complete version has been submitted
to the journal Machine Learning.  This work offers a comparison
of the bucket brigade algorithm and an alternative credit assignment
method, and describes one way to use the bucket brigade in the
"Pitt approach" to rule learning.

ABSTRACT:  In rule discovery systems, learning often proceeds
by first assessing the quality of the system's current
rules and then modifying rules based on that assessment.
This paper addresses the credit assignment problem that
arises when long sequences of rules fire between successive
external rewards.  The focus is on the kinds of rule assessment
schemes that have been proposed for rule discovery systems that
use genetic algorithms as the primary rule modification strategy.
Two distinct approaches to rule learning with genetic algorithms
have been previously reported, each approach offering a useful
solution to a different level of the credit assignment problem.
We present a system, called RUDI, that exploits both approaches.
Analytic and experimental results are presented that
support the hypothesis that multiple levels of credit assignment
can improve the performance of rule learning systems based on
genetic algorithms.

--------------------------------

End of Genetic Algorithms Digest
********************************

