Genetic Algorithms Digest    Monday, 27 February 1989    Volume 3 : Issue 6

 - Send submissions to GA-List@AIC.NRL.NAVY.MIL
 - Send administrative requests to GA-List-Request@AIC.NRL.NAVY.MIL

Today's Topics:
	- Conference on Emergent Computation
	- C* GA routines for CM
	- Re: 2-Armed Bandits
	- Selecting a rule to delete in a Classifier System
	- Re: Benchmark Learning Problems
	- Chaos and Neural Networks

--------------------------------

Date: Thu, 16 Feb 89 11:11:10 MST
From: steph%cardinal@LANL.GOV (Stephanie Forrest)
Subject: Conference on Emergent Computation


			 EMERGENT COMPUTATION:
				    
	 Self-organizing, Collective, and Cooperative Phenomena
	      in Natural and Artificial Computing Networks
				    
			   May 22 - 26, 1989
			 Los Alamos, New Mexico
	      (sponsored by the U.S. Department of Energy,
		     Applied Mathematical Sciences)


Researchers in many different fields are studying systems in which
global behavior emerges through time as a result of many local
interactions among their constituent parts.  When the local
interactions are nonlinear, the global behavior of the system is
often unpredictable and surprisingly complex even when the
constituents and their rules of interaction are quite simple.

While descriptive models of real physical systems demonstrate how
natural chaos can arise, people create computational systems with the
hope that they will achieve some goal, for example, learning to solve
a problem, building a robust model of an environment, or maintaining a
network of computers.  Other examples of emergent systems are provided
by evolution in the form of living systems.  There are a tremendous
number of massively parallel models that exhibit some form of emergent
behavior.  The conference will focus on questions of when interesting
global behavior can arise and under what architectural constraints,
how it can be recognized and studied, and how it can be exploited.
These questions are relevant to both natural and artificial systems.

Sessions on the following topics are anticipated: methods of
generalization, natural and artificial learning, self-organizing
systems, the emergence of symbolic structures, and parallel and
distributed computation.

Speakers include: James Bower, Paul Churchland, Jack Cowan, Stuart
Hameroff, William Hamilton, Stevan Harnad, Danny Hillis, John Holland,
Bernardo Huberman, Pentti Kanerva, Stuart Kauffman, James Keeler,
Chris Langton, William Levy, Ralph Linsker, Michael Merzenich, Melanie
Mitchell, Stephen Omohundro, Robert Shapley, Quentin Stout, and Leslie
Valiant.

Organizing Committee:
Chris Barrett, Stephanie Forrest, John George, Alan Lapedes, George
Papcun, and Bryan Travis

The conference program will consist of oral presentations of invited
papers and poster sessions of contributed papers.  A refereed
proceedings including both invited and a limited number of contributed
papers will be published, probably as a special issue of Physica D.
Papers for the proceedings must be submitted not later than one month
after the conference.  Participants who wish to present their research
at the poster session should submit an abstract of the work to be
presented by March 30, 1989.  Papers and abstracts should be submitted
in hard copy to:
	Dr. Stephanie Forrest 
	Center for Nonlinear Studies, MS-B258	
	Los Alamos National Laboratory
	Los Alamos, New Mexico  87545

Limited funding is available to support graduate students attending the
conference.  Interested graduate students should contact Dr. Forrest.

Conference attendance will be limited to 200 participants.
The registration fee of $90 includes the conference reception, banquet,
refreshments during the sessions, and one copy of the conference 
proceedings.  Non-U.S. residents from "sensitive" countries must 
have registered by March 1, 1989 and other non-U.S. residents by
May 1.  

For more information, contact:
	Ms. Marian Martinez 
	CNLS, MS-B258
	Los Alamos National Laboratory
	Los Alamos, N.M.  87545
	(505) 667-1444
	mvm@lanl.gov (arpanet)

Participants wishing to register by email should send the following
information to mvm@lanl.gov:

Name:
Institution/Affiliation:
Full Mailing Address:
Telephone (work/home):
Telex (for non-U.S. participants):
Electronic Mail Address:
Citizenship:
Type of Visa:
Motel Request (one of Los Alamos Inn, Hilltop House):
	Single/Double:
	Check-in Date:
	Check-out Date:
	
	Los Alamos Inn Rates (per night, tax not included):
		Single $42.00
		Double $49.00

	Hilltop House Rates (per night, tax not included,
		includes full breakfast):
		Single $46.00
		Double $54.00

Motel reservations will be held until 6:00 pm on the day of 
scheduled arrival.  Please send us a Master Charge, Visa, or
American Express card number if you wish to have guaranteed
reservations.

--------------------------------

Date:     Sat, 18 Feb 89 19:20 EST
From: <INS_ATGE@JHUVMS.BITNET>
Subject:  C* GA routines for CM

I am looking for genetic algorithm routines written in C* for the
Connection Machine-2.  If you have one which you would like to share,
send me a note (ins_atge@jhuvms BITNET or ins_atge@jhunix).
  -Thomas G. Edwards

--------------------------------

Date: Tue, 21 Feb 89 11:57:25 EST
From: white@cs.rochester.edu
Subject: Selecting a rule to delete in a Classifier System

I am soliciting opinions on methods for rule replacement in classifier systems.
That is, in a classifier system with a limited number of rules, when you
decide to add another rule to the system, you have to find a rule to remove.
What are the "standard" approaches (if any) for choosing the rule to delete?

In general, there seem to be three good heuristics for measuring a rule's
value to the system:

	1) reinforcement variance / predictability
	2) generality / applicability
	3) expected utility / strength

By reinforcement variance / predictability, I mean that it is useful
to have rules that consistently obtain the same reinforcement (say
via the bucket brigade) for executing their actions.  The reason this
is important is that the "strength" of a rule with a low variance in
its reinforcement is a true estimate of the value of the rule for a
given situation, but the strength of a rule with a high variance in
its reinforcement isn't likely to be a good estimate of the value
of executing the rule in a particular situation.  Thus rules with a high
variance in their reinforcement are less useful to the system.
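
An online estimate of each rule's reinforcement variance can be kept
cheaply.  The sketch below is my own illustration of the idea, not code
from the original system; the class name and fields are hypothetical,
and Welford's online algorithm is used for numerical stability.

```python
class RuleStats:
    """Running mean and variance of the reinforcement a rule receives."""
    def __init__(self):
        self.n = 0          # number of reinforcements observed
        self.mean = 0.0     # running mean reinforcement
        self.m2 = 0.0       # sum of squared deviations from the mean

    def update(self, reward):
        """Fold one reinforcement (e.g. a bucket-brigade payoff) in."""
        self.n += 1
        delta = reward - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (reward - self.mean)

    def variance(self):
        """Sample variance of the reinforcements seen so far."""
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0

# A rule that always receives the same reinforcement has zero variance,
# so its strength is a trustworthy estimate of its value.
stats = RuleStats()
for r in (1.0, 1.0, 1.0):
    stats.update(r)
print(stats.variance())   # consistent reinforcement -> 0.0
```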

By generality/applicability, I mean that a rule whose condition is
satisfied much of the time is (all else being equal) more useful to
the system than a rule with a more specific condition.  
Of course generality and reenforcement variance tend to oppose one another.
Statistically, general rules are more likely to have a large variance in their
reinforcement than more specific rules.

Finally, rules with large "strength" values are more useful to the system
than those with low strengths, because high-strength rules
tend to lead the system to better rewards and tend to "fire" more often.

My current algorithm for selecting a rule to delete is very simple.  I
simply delete the rule that "fires" (that is, actually executes) the least
often.  This algorithm addresses each of the heuristics above since:
	1) my bidding process favors more specific rules over
	   more general rules and therefore tends (??) to execute
	   rules with low variance more often than rules with high
	   variance

	2) more general rules can continue to stay around by being applicable
	   in novel situations (i.e., situations where specific rules have not
	   yet developed).

	3) my bidding process favors rules with higher strength values
	   and so high strength rules tend to fire more and stay around
	   too.

I'd like to hear other approaches to rule replacement / deletion selection.
Has anyone formally defined the "value" of a rule to the system and characterized
the optimal rule to replace?  (Clearly the strength of a rule is not
sufficient!)
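
For concreteness, the least-fired deletion policy described above might be
sketched like this (my own illustration, not the poster's actual code; the
rule representation and function names are hypothetical):

```python
def select_rule_to_delete(rules):
    """Return the rule that has fired (actually executed) least often."""
    return min(rules, key=lambda r: r["fires"])

def add_rule(rules, new_rule, max_rules):
    """Insert new_rule, evicting the least-fired rule if at capacity."""
    if len(rules) >= max_rules:
        rules.remove(select_rule_to_delete(rules))
    rules.append(new_rule)
    return rules

rules = [{"name": "r1", "fires": 12},
         {"name": "r2", "fires": 3},
         {"name": "r3", "fires": 7}]
add_rule(rules, {"name": "r4", "fires": 0}, max_rules=3)
print([r["name"] for r in rules])   # r2 fired least -> ['r1', 'r3', 'r4']
```

Note that the indirect arguments in the post (bidding favors specific,
high-strength rules) are what keep fire counts correlated with rule value.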

-Steve Whitehead

--------------------------------

Date: Tue, 21 Feb 89 10:47:10 EST
From: jima@starbase.mitre.org (Jim Antonisse)
Subject: Re: 2-Armed Bandits

Barry McMullin writes:
   I'm interested in [the 2-armed bandit problem] but am finding it
   difficult to grasp some aspects of Holland's analysis.

To which John Grefenstette responds:
   James Baker and I have submitted a paper on this topic to ICGA-89.
   The paper examines, among other things, how well the analogy with
   k-armed bandits describes the behavior of genetic algorithms in practice.

I look forward to your and James Baker's paper, John.  I also had trouble
connecting the k-armed bandit argument in John Holland's 1975 book to the
allocation-of-trials arguments.  However, I found the presentation in David
Goldberg's recent book very helpful ("Genetic Algorithms in Search,
Optimization, and Machine Learning", Addison-Wesley, pp. 36-41).  He makes the connection of
schemata sampling (in theory) to the k-armed bandit problem explicit through
a mapping of the sampling procedure to the simultaneous solution of SETS of
k-armed bandits.  As with the other parts of his presentation of the GA,
it's very nicely done.
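
As a small illustration of the point being discussed (my own sketch, not an
example from either book): under fitness-proportionate selection with
deterministic payoffs f1 > f2, the observed-better "arm" receives an
exponentially increasing share of trials, since the odds in its favor
multiply by f1/f2 each generation.

```python
def share_after(generations, p0=0.5, f1=1.2, f2=1.0):
    """Proportion of the population sampling arm 1 after repeated
    fitness-proportionate selection between two fixed-payoff arms."""
    p = p0
    for _ in range(generations):
        # Standard proportional-selection update: each arm's share is
        # reweighted by its payoff and renormalized.
        p = p * f1 / (p * f1 + (1 - p) * f2)
    return p

for g in (0, 5, 20):
    print(g, round(share_after(g), 3))   # share of the better arm grows toward 1
```

This is of course an idealization; the interesting question the submitted
paper addresses is how well it describes real GAs with noisy payoffs.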

Jim Antonisse
antonisse@mitre.org

--------------------------------

Date: 	Fri, 17 Feb 89 22:38:31 EST
From: alberta!shu@uunet.UU.NET
Subject: Re: Benchmark Learning Problems

I would like to join in the discussion on Benchmark Learning Problems.
I am a bit late because my name was not on the GA-List (though it is on
some other GA mailing lists).

1.   Knowledge linking, which includes building structural knowledge
     (such as default hierarchies) and establishing dynamic connections
     among pieces of knowledge (such as chaining), is an important process
     in learning.  It is also a factor that affects the performance of
     a learning system.

     Many symbolic concept-acquisition and knowledge-intensive,
     domain-specific learning systems use explicit structural
     knowledge representations, such as frames and semantic nets,
     to link knowledge together.  The genetic learning paradigm uses
     much less rich knowledge representation structures.
     It has been noticed that, in practice, classifier systems have
     some difficulty with knowledge linking, such as finding and
     maintaining useful chains.
     So we should have a task with which we can study the issues of
     establishing and maintaining (explicitly or implicitly)
     knowledge links or knowledge structures.

2.   We should have a task that can be easily extended, so that we can see
     how experience with a "ten-city salesman problem" can be used in
     a "twelve-city salesman problem".
     
3.   A task that can show how heuristic knowledge can help genetic
     learning.
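
Point 2 might be realized, for example, by a size-parameterized
travelling-salesman benchmark in which smaller instances embed in larger
ones.  The sketch below is my own illustration, not from the post; all
names are hypothetical.

```python
import math
import random

def make_instance(n_cities, seed=0):
    """Random cities on the unit square.  A fixed seed keeps the first
    n cities identical as the instance is extended, so a ten-city
    problem literally embeds in the twelve-city one."""
    rng = random.Random(seed)
    return [(rng.random(), rng.random()) for _ in range(n_cities)]

def tour_length(cities, tour):
    """Total length of the closed tour visiting cities in the given order."""
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

ten = make_instance(10)
twelve = make_instance(12)
assert twelve[:10] == ten   # the ten-city instance embeds in the twelve-city one
```

Tours evolved on the small instance could then be re-scored (or re-seeded
into the population) after the instance grows.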

I would like to see other people's opinions.

Lingyan Shu
Dept. of Computing Science
University of Alberta
Edmonton,
Canada, T6G 2H1

E-mail: shu@alberta.uucp

--------------------------------

Date: Fri, 17 Feb 89 15:03:11 PST
From: schori@MATH.ORST.EDU
Subject: Chaos and Neural Networks

Please put me on the mailing list. I am

Richard M. Schori
Department of Mathematics
Oregon State University
Kidder Hall 368
Corvallis, OR 97331-4605

(503) 754-4686 

E-Mail:  schori@math.orst.edu

Paragraph of Introduction: 
     I am a topologist who has recently been
studying chaos and neural networks.  I started looking at chaos in
biological systems and ended up organizing and chairing sessions at
the Pacific Division AAAS in June 1988, on "Stability and Chaos in
Neural Network Learning", and at the Annual AAAS in January 1989, on
"Chaos in Biological Systems: Physiology and Medicine".  (No doubt the
speakers knew more than I.)  I have been studying evolution with the idea
of finding scale- and context-independent properties that are prevalent in
nature, with the aim of applying them to electronic neural networks.  I
ran into genetic algorithms through this route, and now I hope that my
unique background will help me make some contributions to the area.

--------------------------------

End of Genetic Algorithms Digest
********************************

