
Genetic Algorithms Digest   Friday, December 27, 1991   Volume 5 : Issue 41

 - Send submissions to GA-List@AIC.NRL.NAVY.MIL
 - Send administrative requests to GA-List-Request@AIC.NRL.NAVY.MIL

Today's Topics:
	- 1990 NCARAI Technical Reports Available

**********************************************************************

CALENDAR OF GA-RELATED ACTIVITIES: (with GA-List issue reference)

 Canadian AI Conference, Vancouver, (CFP 1/7)                 May 11-15, 1992
 COGANN, Combinations of GAs and NNs, @ IJCNN-92 (v5n31)      Jun 6,     1992
 10th National Conference on AI, San Jose, (CFP 1/15)         Jul 12-17, 1992
 FOGA-92, Foundations of Genetic Algorithms, Colorado (v5n32) Jul 26-29, 1992
 COG SCI 92, Cognitive Science Conference, Indiana, (v5n39)   Jul 29-Aug 1, 1992
 ECAI 92, 10th European Conference on AI (v5n13)              Aug  3-7,  1992
 Parallel Problem Solving from Nature, Brussels, (v5n29)      Sep 28-30, 1992

 (Send announcements of other activities to GA-List@aic.nrl.navy.mil)

**********************************************************************
----------------------------------------------------------------------

From: schultz@AIC.NRL.Navy.Mil
Date: Wed, 6 Nov 91 13:04:12 EST
Subject: 1990 NCARAI Technical Reports Available

   Here is a list of available technical reports from the Machine Learning
   Group of the Navy Center for Applied Research in Artificial Intelligence.
   This list covers 1990.  Another list covering 1991 will be sent out in
   January. 
   -- Alan C. Schultz


   Title: An investigation into the use of hypermutation as an adaptive
	  operator in genetic algorithms having continuous, time-dependent
	  nonstationary environments
   Author(s): Helen G. Cobb
   E-mail Address: cobb@aic.nrl.navy.mil
   Technical Report citation: NRL Memorandum Report 6760, December 11, 1990
   AIC Report No.: AIC-90-001 

   Abstract
	   Previous studies of Genetic Algorithm (GA) optimization in
   nonstationary environments focus on discontinuous, Markovian switching
   environments. This study introduces the problem of GA optimization in
   continuous, nonstationary environments where the state of the environment
   is a function of time.  The objective of the GA in such an environment is
   to select a sequence of values over time that minimizes, or maximizes, the
   time-average of the environmental evaluations.  In this preliminary study,
   we explore the use of mutation as a control strategy for having the GA
   increase or maintain the time-averaged best-of-generation performance.
   Given this context, the paper presents a set of short experiments using a
   simple, unimodal function.  Each generation, the domain value mapping into
   the optimum changes so that the movement follows a sinusoidal path.  In
   one of the experiments, we demonstrate the use of a simple adaptive
   mutation operator.  During periods where the time-averaged best
   performance of the GA worsens, the GA enters hypermutation (a large
   increase in mutation); otherwise, the GA maintains a low level of
   mutation.  This adaptive mutation control strategy effectively permits the
   GA to accommodate changes in the environment, while also permitting the GA
   to perform global optimization during periods of environmental
   stationarity.
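
   In outline, the triggered-hypermutation loop described above can be
   sketched as follows. This is a speculative reconstruction for
   illustration, not the paper's implementation: the sinusoidally moving
   fitness function, population size, mutation rates, and the window used
   to detect worsening performance are all arbitrary placeholder choices.

   ```python
   import math
   import random

   def evaluate(x, t):
       # Unimodal fitness whose optimum follows a sinusoidal path over time
       # (a stand-in for the paper's continuously moving environment).
       return -(x - math.sin(t / 10.0)) ** 2

   def step(population, t, mutation_rate):
       # Truncation selection for brevity, then Gaussian mutation at the
       # current rate; returns the new population and best-of-generation.
       scored = sorted(((evaluate(x, t), x) for x in population), reverse=True)
       parents = [x for _, x in scored[:len(population) // 2]]
       children = [random.choice(parents) + random.gauss(0.0, mutation_rate)
                   for _ in population]
       return children, scored[0][0]

   random.seed(0)
   pop = [random.uniform(-1.0, 1.0) for _ in range(30)]
   BASE, HYPER = 0.01, 0.5   # low-level mutation vs. hypermutation (arbitrary)
   rate, history = BASE, []
   for t in range(200):
       pop, best = step(pop, t, rate)
       history.append(best)
       # Enter hypermutation while the recent time-averaged best performance
       # worsens; otherwise fall back to the low mutation level.
       if len(history) >= 10 and sum(history[-5:]) < sum(history[-10:-5]):
           rate = HYPER
       else:
           rate = BASE
   print("best-of-run fitness:", round(max(history), 4))
   ```

   The point of the sketch is the control rule, not the numbers: mutation
   stays low while the tracked average holds or improves, and jumps only
   when performance degrades.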

   ==========

   Title: Genetic-Algorithm-Based Learning
   Author(s):  Kenneth A. De Jong 
   E-mail Address: dejong@aic.nrl.navy.mil
   Book Chapter:  Machine Learning Vol. III, Y. Kodratoff and
		  R. Michalski (eds.), Chapter 21, pp. 611-638, 1990,
		  Morgan-Kaufmann 
   AIC Report No.: AIC-90-002

   Abstract
	   This chapter describes a subarea of machine learning that is
   actively exploring the use of genetic algorithms as the key element in the
   design of robust learning strategies. After characterizing the kinds of
   learning problems motivating this approach, a brief overview of genetic
   algorithms is presented. Three major approaches to using genetic
   algorithms for machine learning are described, and an example of their use
   in learning entire task programs is given. Finally, an assessment of the
   strengths and weaknesses of this approach to machine learning is provided.

   ==========

   Title: An Analysis of the Interacting Roles of Population Size and
	  Crossover in Genetic Algorithms
   Author(s):  Kenneth A. De Jong and William M. Spears 
   E-mail Address: dejong@aic.nrl.navy.mil, spears@aic.nrl.navy.mil
   Conference citation:  First International Conference on Parallel
			 Problem Solving from Nature, October 1-3, 1990,
			 Dortmund, Germany, IEEE Society Press
   AIC Report No.: AIC-90-003

   Abstract
	   In this paper we present some theoretical and empirical results on
   the interacting roles of population size and crossover in genetic
   algorithms. We summarize recent theoretical results on the disruptive
   effect of two forms of multi-point crossover: n-point crossover and
   uniform crossover. We then show empirically that disruption analysis alone
   is not sufficient for selecting appropriate forms of crossover.  However,
   by taking into account the interacting effects of population size and
   crossover, a general picture begins to emerge. The implications of these
   results on implementation issues and performance are discussed, and
   several directions for further research are suggested.

   ==========

   Title: Active bias adjustment for incremental, supervised concept learning
   Author(s): Diana F. Gordon
   E-mail Address: gordon@aic.nrl.navy.mil
   Technical report citation: CS-TR-2464, UMIACS-TR-90-60, May 1990,
			       University of Maryland
   AIC Report No.: AIC-90-004

   Abstract
	   This paper describes a new method for improving the performance of
   systems that learn concepts from examples.  This method judiciously
   selects a language for expressing hypotheses, which are estimates of the
   concept being learned (i.e., the target concept).  Experiments, described
   in this paper, demonstrate that the use of this method can lead to a
   significant improvement in the rate of convergence to the target concept.

   ==========

   Title: Explanations of empirically derived reactive plans
   Author(s): Diana F. Gordon and John J. Grefenstette
   E-mail Address: gordon@aic.nrl.navy.mil, gref@aic.nrl.navy.mil
   Conference citation: Proceedings of the Seventh International Conference
			on Machine Learning, pp. 198-203, June 21-23, 1990,
			Morgan Kaufmann: Austin, TX
   AIC Report No.: AIC-90-005 

   Abstract
	   Given an adequate simulation model of the task environment and
   a payoff function that measures the quality of partially successful plans,
   competition-based heuristics such as genetic algorithms can develop high
   performance reactive rules for interesting sequential decision tasks. We
   have previously described an implemented system, called SAMUEL, for
   learning reactive plans and have shown that the system can successfully
   learn rules for a laboratory scale tactical problem.  In this paper, we
   describe a method for deriving explanations to justify the success of such
   empirically derived rule sets.  The method consists of inferring plausible
   subgoals and then explaining how the reactive rules trigger a sequence of
   actions (i.e., a strategy) to satisfy the subgoals.

   ==========

   Title: Genetic algorithms and their applications
   Author(s): John J. Grefenstette
   E-mail Address: gref@aic.nrl.navy.mil
   Book citation:  The Encyclopedia of Computer Science and Technology,
		   Vol. 21,  A. Kent and J. G. Williams (eds.),1990,
		   New York: Marcel Dekker 
   AIC Report No.: AIC-90-006

   Abstract
	   Genetic algorithms (GA's) are adaptive search techniques based on
   principles derived from natural population genetics.  These algorithms
   have been used successfully in a variety of problems that require
   efficient heuristic search.  This article presents an overview of GA's, a
   discussion of the theoretical foundations and a review of recent
   applications.

   ==========

   Title: Strategy acquisition with genetic algorithms
   Author(s): John J. Grefenstette
   E-mail Address: gref@aic.nrl.navy.mil
   Book citation:  Handbook of Genetic Algorithms, L. Davis (ed.),
		   Chapter 14, pp. 186-201, 1991, Van Nostrand Reinhold: Boston
   AIC Report No.: AIC-90-007

   Abstract
	   The growing interest in genetic algorithms can largely be
   attributed to the generality of the approach.  Genetic algorithms can be
   used for both numerical parameter optimization and combinatorial search.
   This chapter shows an application to a rather different sort of problem:
   the optimization of policies for sequential decision tasks.  In this
   approach, each policy, or strategy, is represented as a set of
   condition/action rules.  Each proposed strategy is evaluated on a
   simulation model of the sequential decision task, and a genetic algorithm
   is used to search for high-performance strategies.  The approach has been
   implemented in a system called SAMUEL.  This brief chapter should give the
   reader an idea of how genetic algorithms can be used to optimize
   strategies for this broad class of problems.

   ==========

   Title: Competition-based learning for reactive systems
   Author(s): John J. Grefenstette
   E-mail Address: gref@aic.nrl.navy.mil
   Conference citation:  1990 DARPA Workshop on Innovative Approaches to
			 Planning, Scheduling and Control, November 1990,
			 pp. 348-353, Morgan Kaufmann: San Diego, CA
   AIC Report No.: AIC-90-008

   Abstract
	   Traditional AI planning methods often assume a well-modeled,
   predictable world.  Such assumptions usually preclude the use of these
   methods in adversarial, multi-agent domains.  This paper describes our
   investigation of machine learning methods to learn reactive plans for such
   domains, given access to a simulation model.  Particular emphasis is given
   to the task of assessing the effects of differences between the simulation
   model and the environment in which the learned plans will ultimately be
   tested.  Methods for utilizing existing partial plans are also discussed.

   ==========

   Title: Conditions for Implicit Parallelism
   Author(s):  John J. Grefenstette
   E-mail Address:  gref@aic.nrl.navy.mil
   Conference citation:  Proceedings of the 1990 Workshop on Foundations
			 of Genetic Algorithms, Morgan Kaufmann
   AIC Report No.: AIC-90-009

   Abstract
	   Many interesting varieties of genetic algorithms have been
   designed and implemented in the last fifteen years.  One way to improve
   our understanding of genetic algorithms is to identify properties that are
   invariant across these seemingly different versions.  This paper focuses
   on invariants across these genetic algorithms that differ along two
   dimensions: (1) the way the user-defined objective function is mapped to a
   fitness measure, and (2) the way the fitness measure is used to assign
   offspring to parents.  A genetic algorithm is called admissible if it meets
   what seem to be the weakest reasonable requirements along these
   dimensions.  It is shown that any admissible genetic algorithm exhibits a
   form of implicit parallelism.

   ==========

   Title: Learning sequential decision rules using simulation models
	  and competition
   Author(s): John J. Grefenstette, Connie L. Ramsey and Alan C. Schultz
   E-mail Address: gref@aic.nrl.navy.mil, ramsey@aic.nrl.navy.mil,
		   schultz@aic.nrl.navy.mil
   Journal citation:  Machine Learning Vol. 5, No. 4, pp. 355-381,
		      October 1990, Kluwer Academic Publishers
   AIC Report No.: AIC-90-010

   Abstract
	   The problem of learning decision rules for sequential tasks is
   addressed, focusing on the problem of learning tactical decision rules
   from a simple flight simulator.  The learning method relies on the notion
   of competition and employs genetic algorithms to search the space of
   decision policies.  Several experiments are presented that address issues
   arising from differences between the simulation model on which learning
   occurs and the target environment on which the decision rules are
   ultimately tested.

   ==========

   Title: Simulation-assisted learning by competition:  Effects of noise
	  differences between training model and target environment
   Author(s): Connie Loggia Ramsey, Alan C. Schultz and John J. Grefenstette
   E-mail Address: ramsey@aic.nrl.navy.mil, schultz@aic.nrl.navy.mil,
		   gref@aic.nrl.navy.mil
   Conference citation:  Proceedings of the Seventh International Conference
			 on Machine Learning, pp. 211-215, June 21-23, 1990,
			 Morgan Kaufmann: Austin, TX
   AIC Report No.: AIC-90-011

   Abstract
	   The problem of learning decision rules for sequential tasks is
   addressed, focusing on the problem of learning tactical plans from a
   simple flight simulator where a plane must avoid a missile.  The learning
   method relies on the notion of competition and employs genetic algorithms
   to search the space of decision policies.  Experiments are presented that
   address issues arising from differences between the simulation model on
   which learning occurs and the target environment on which the decision
   rules are ultimately tested.  Specifically, either the model or the target
   environment may contain noise.  These experiments examine the effect of
   learning tactical plans without noise and then testing the plans in a
   noisy environment, and the effect of learning plans in a noisy simulator
   and then testing the plans in a noise-free environment.  Empirical results
   show that, while best results are obtained when the training model closely
   matches the target environment, using a training environment that is
   noisier than the target environment is better than using a training
   environment that has less noise than the target environment.

   ==========

   Title: Improving tactical plans with genetic algorithms
   Author(s): Alan C. Schultz and John J. Grefenstette
   E-mail Address: schultz@aic.nrl.navy.mil, gref@aic.nrl.navy.mil
   Conference citation:  Proceedings of IEEE Conference on Tools for
			 Artificial Intelligence TAI '90, pp. 328-334,
			 November 6-9, 1990, Herndon, VA, IEEE Society Press
   AIC Report No.: AIC-90-012

   Abstract
	   The problem of learning decision rules for sequential tasks is
   addressed, focusing on the problem of learning tactical plans from a
   simple flight simulator where a plane must avoid a missile.  The learning
   method relies on the notion of competition and employs genetic algorithms
   to search the space of decision policies.  In the research presented here,
   the use of available heuristic domain knowledge to initialize the
   population to produce better plans is investigated.

   ==========

   Title: Using neural networks and genetic algorithms as heuristics
	  for NP-complete problems
   Author(s): William M. Spears and Kenneth A. De Jong
   E-mail Address: spears@aic.nrl.navy.mil, dejong@aic.nrl.navy.mil
   Conference citation:  International Joint Conference on Neural Networks,
			 Vol. 1, pp. 118-125, January 15-19, 1990,
			 Washington D.C., Lawrence Erlbaum Publications
   AIC Report No.: AIC-90-013

   Abstract
	   Paradigms for using neural networks (NNs) and genetic algorithms
   (GAs) to heuristically solve boolean satisfiability (SAT) problems are
   presented. Since SAT is NP-Complete, any other NP-Complete problem can be
   transformed into an equivalent SAT problem in polynomial time, and solved
   via either paradigm.  This technique is illustrated for Hamiltonian
   circuit (HC) problems.
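
   To make the GA paradigm concrete, the sketch below applies a bare-bones
   generational loop to a toy SAT instance, scoring an assignment by the
   fraction of clauses it satisfies so that partial solutions still guide
   the search. The formula, population size, mutation rate, and selection
   scheme are invented for illustration and are not taken from the paper.

   ```python
   import random

   # Toy CNF formula: (x0 or not x1) and (x1 or x2) and (not x0 or x2).
   # Each literal is (variable index, required truth value).
   CLAUSES = [[(0, True), (1, False)], [(1, True), (2, True)],
              [(0, False), (2, True)]]
   N_VARS = 3

   def payoff(bits):
       # Fraction of satisfied clauses: a graded payoff, so near-solutions
       # score higher than random assignments and guide the search.
       sat = sum(any(bits[v] == want for v, want in clause)
                 for clause in CLAUSES)
       return sat / len(CLAUSES)

   random.seed(0)
   pop = [[random.randint(0, 1) for _ in range(N_VARS)] for _ in range(8)]
   for _ in range(30):
       pop.sort(key=payoff, reverse=True)
       if payoff(pop[0]) == 1.0:        # stop once some assignment satisfies all
           break
       survivors = pop[:4]              # truncation selection
       pop = survivors + [[b ^ (random.random() < 0.2)  # bit-flip mutation
                           for b in random.choice(survivors)]
                          for _ in range(4)]
   pop.sort(key=payoff, reverse=True)
   print("best assignment:", pop[0], "payoff:", payoff(pop[0]))
   ```

   The graded payoff is what makes the reduction practical: a direct
   satisfied/unsatisfied score would give the search no gradient to follow.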

   ==========

   Title: An Analysis of Multi-point Crossover
   Author(s):  William M. Spears and Kenneth A. De Jong
   E-mail Address: spears@aic.nrl.navy.mil, dejong@aic.nrl.navy.mil
   Conference citation:  Proceedings of the Foundations of Genetic Algorithms
			 Workshop, July 1990, Bloomington, IN, Morgan Kaufmann.
   AIC Report No.: AIC-90-014

   Abstract
	   In this paper we present some theoretical results on n-point and
   uniform crossover.  This analysis extends the work from De Jong's thesis,
   which dealt with disruption of n-point crossover on 2nd order schemata.
   We present various extensions to this theory, including:
	   1)	an analysis of the disruption of n-point crossover on kth
		   order schemata;
	   2)	the computation of tighter bounds on the disruption caused
		   by n-point crossover, by handling cases where parents share
		   critical allele values; and
	   3)	an analysis of the disruption caused by uniform crossover
		   on kth order schemata.  The implications of these
		   results on implementation issues and performance are
		   discussed, and several directions for further research are
		   suggested. 
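
   For a concrete point of reference, the classical result that this
   analysis extends says a 2nd-order schema with defining length d on
   strings of length L is disrupted by 1-point crossover with probability
   d/(L-1), counting both offspring and assuming the parents differ at the
   defining positions (the shared-allele case of extension 2 is deliberately
   excluded here). The following sketch checks that figure empirically; the
   string length, schema positions, and trial count are arbitrary choices.

   ```python
   import random

   L = 20               # string length (arbitrary)
   POS = (3, 11)        # defining positions of a 2nd-order schema
   d = POS[1] - POS[0]  # defining length

   def holds(child, parent):
       # The schema survives in a child that keeps the parent's alleles at POS.
       return all(child[i] == parent[i] for i in POS)

   random.seed(1)
   trials, disrupted = 20000, 0
   for _ in range(trials):
       p1 = [random.randint(0, 1) for _ in range(L)]
       # Complementary second parent: the parents share no critical alleles,
       # so survival-by-coincidence cannot mask disruption.
       p2 = [1 - b for b in p1]
       cut = random.randint(1, L - 1)              # 1-point crossover cut
       c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
       if not (holds(c1, p1) or holds(c2, p1)):    # schema lost in both children
           disrupted += 1

   print("empirical disruption:", disrupted / trials)
   print("theoretical d/(L-1): ", d / (L - 1))
   ```

   The schema is lost exactly when the cut falls strictly between the two
   defining positions, which is the geometric content of the d/(L-1) bound.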

   ==========

   Title: Using genetic algorithms for supervised concept learning
   Author(s): William M. Spears and Kenneth A. De Jong
   E-mail Address: spears@aic.nrl.navy.mil, dejong@aic.nrl.navy.mil
   Conference citation:  Proceedings of IEEE Conference on Tools for
			 Artificial Intelligence TAI '90, Vol. I, pp335-341,
			 November 6-9, 1990, Herndon, VA, IEEE Society Press
   AIC Report No.: AIC-90-015

   Abstract
	   Genetic Algorithms (GAs) have traditionally been used for
   non-symbolic learning tasks. In this paper we consider the application of
   a GA to a symbolic learning task, supervised concept learning from
   examples.  A GA concept learner (GABL) is implemented that learns a
   concept from a set of positive and negative examples. GABL is run in a
   batch-incremental mode to facilitate comparison with an incremental
   concept learner, ID5R. Preliminary results suggest that, despite minimal
   system bias, GABL is an effective concept learner and is quite competitive
   with ID5R as the target concept increases in complexity.

			   ====================

   TO ORDER REPORTS: Place an (X) before each report requested and return
   this form (or a photocopy) to NCARAI LIBRARY, Cathy Wiley, Code 5510, 4555
   Overlook Avenue SW, Washington DC 20375-5000, call 202-767-0018, email
   wiley@aic.nrl.navy.mil, or FAX 202-767-3172.  Please limit requests to one
   copy per report.

   [ ]	AIC-90-001 An investigation into the use of hypermutation as an
   adaptive operator in genetic algorithms having continuous, time-dependent
   nonstationary environments, Helen G. Cobb

   [ ]	AIC-90-002 Genetic-Algorithm-Based Learning, Kenneth A. De Jong

   [ ]	AIC-90-003 An Analysis of the Interacting Roles of Population Size
   and Crossover in Genetic Algorithms, Kenneth A. De Jong and William M.
   Spears

   [ ]	AIC-90-004 Active bias adjustment for incremental, supervised
   concept learning, Diana F. Gordon

   [ ]	AIC-90-005 Explanations of empirically derived reactive plans,
   Diana F. Gordon and John J. Grefenstette

   [ ]	AIC-90-006 Genetic algorithms and their applications, John J.
   Grefenstette

   [ ]	AIC-90-007 Strategy acquisition with genetic algorithms, John J.
   Grefenstette

   [ ]	AIC-90-008 Competition-based learning for reactive systems, John
   J. Grefenstette

   [ ]	AIC-90-009 Conditions for Implicit Parallelism, John J.
   Grefenstette

   [ ]	AIC-90-010 Learning sequential decision rules using simulation
   models and competition, John J. Grefenstette, Connie L. Ramsey and Alan C.
   Schultz

   [ ]	AIC-90-011 Simulation-assisted learning by competition: Effects of
   noise differences between training model and target environment, Connie
   Loggia Ramsey, Alan C. Schultz and John J. Grefenstette

   [ ]	AIC-90-012 Improving tactical plans with genetic algorithms, Alan
   C. Schultz and John J. Grefenstette

   [ ]	AIC-90-013 Using neural networks and genetic algorithms as
   heuristics for NP-complete problems, William M. Spears and Kenneth A. De
   Jong

   [ ]	AIC-90-014 An Analysis of Multi-point Crossover, William M. Spears
   and Kenneth A. De Jong

   [ ]	AIC-90-015 Using genetic algorithms for supervised concept
   learning, William M. Spears and Kenneth A. De Jong

------------------------------
End of Genetic Algorithms Digest
******************************
