Genetic Algorithms Digest    Tuesday, 17 January 1989    Volume 3 : Issue 3

 - Send submissions to GA-List@AIC.NRL.NAVY.MIL
 - Send administrative requests to GA-List-Request@AIC.NRL.NAVY.MIL

Today's Topics:
	- Re: Classifier System Problems
	- Re: Constrained Optimization (2)
	- Re: Benchmark Proposal (2)
	- Genetic Algorithms and Neural Networks

--------------------------------

Date: Mon, 9 Jan 89 12:31:48 EST
From: booker@AIC.NRL.NAVY.MIL
Subject: Re: Classifier System Problems

     Michael Hall has posted a series of messages to this list that
express his frustrations with implementing classifier systems and his
doubts about the wisdom of using classifier systems for anything.  It
is difficult to make any substantive response to Hall's charges since
he assumes that everyone knows what he means when he uses the term
"classifier system". I've been distressed to find that many people make
this assumption, so let me briefly outline my view of what the real
issue is. There is a fairly large body of theoretical and empirical
results for genetic algorithms that gives many of us a shared intuition
about what makes for a good implementation. We can look at a problem
and published results and get a rough feel for whether or not the GA is
behaving in a familiar way. In this sense, the term genetic algorithm
can be used in a well understood way even though implementations of a
GA can differ greatly.  This is decidedly *not* the case with
classifier systems.

     Most people identify the term "classifier system" with the
description given in Holland's article from the book "Machine
Learning: An AI Approach Volume 2". That's fine as long as you realize
two important things. First, Holland's description is a generic one
that leaves many questions about implementation unanswered. In
particular, there is very little detail there about how the various
system components interact. Getting this right is *not easy*, and
implementation decisions along these lines cannot be made arbitrarily.
My recent paper in Machine Learning gives a brief overview of what some
of these decisions are and how successful implementations have handled
them. Second, the complete system described by Holland has never been
implemented (as far as I know)! Many of us have different ideas about
which of Holland's constructs are necessary or important. The systems
that have been implemented have either been based on these different
ideas (e.g., Wilson, Booker), or have focused on some subset of
capabilities such as default hierarchies, credit assignment, etc.
(e.g., Goldberg, Riolo). The bottom line is that if you charge ahead
and try to
implement Holland's complete system you are on your own. If your
efforts fail, a thoughtful attempt must be made to apportion blame
between the paradigm and your own intuitions.

     I hope Michael Hall will tell us *what* he implemented and explain
why *his implementation* is so characteristic of all classifier
systems that it justifies his sweeping indictment of the whole
approach. I also hope he will share more details about the tax audit
database problem, since he implies that a classifier system can't solve
it.

Lashon

--------------------------------

Date: Tue, 10 Jan 89 10:55:56 EST
From: gxl@msr.EPM.ORNL.GOV (Gunar Liepins)
Subject: Re: Constrained Optimization (1)

Mike Hilliard and I have used a Pareto formulation (suggested by Dave
Goldberg) for constrained optimization problems.  We have a paper
submitted to the OR/AI literature that details our results.

Gunar Liepins             gxl@msr.epm.ornl.gov

--------------------------------

Date: Fri, 13 Jan 89 14:28:17 EST
From: Jon Richardson <richards@UTKCS2.CS.UTK.EDU>
Subject: Re: Constrained Optimization (2)

We have been studying techniques to solve constrained optimization
problems.  Our work with Set Covering Problems has revealed
considerable information about penalty functions.

First, even if you cannot accept an infeasible solution (one which
violates constraints), it is best not to construct extremely harsh
penalty functions.  The important point to remember is that
infeasible solutions provide crucial information necessary for the
discovery of valuable, feasible solutions.  Distorting the payoff
information through overzealous penalties will usually lead to poor
solutions.  (It is interesting to note that techniques which ensure
only feasible solutions through specialized initialization,
crossover, and mutation do not make use of this information.)  In
summary, any strategy that reduces the amount of genetic material
available is likely to reduce the effectiveness of a Genetic Search
Algorithm.  Try to reconcile such strategies with the schema theorem!
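The point about lenient penalties can be sketched as follows.  This
is a hypothetical illustration, not the authors' code: the set
covering instance, the function name, and the penalty values are all
invented.  An infeasible candidate is scored rather than discarded,
so its partial-cover information remains available to the search.

```python
# A minimal sketch of a penalized fitness for set covering: a candidate
# is a bit-vector choosing subsets; infeasible candidates (uncovered
# elements) are penalized per uncovered element instead of being
# rejected outright.

def penalized_cost(candidate, subsets, costs, universe, penalty):
    """Total cost plus `penalty` per uncovered element (lower is better)."""
    covered = set()
    cost = 0.0
    for chosen, subset, c in zip(candidate, subsets, costs):
        if chosen:
            covered |= subset
            cost += c
    uncovered = len(universe - covered)
    return cost + penalty * uncovered

# Illustrative instance: universe {1..4}.  With a moderate penalty, the
# near-cover still ranks well ahead of the poor cover, preserving a
# useful gradient toward feasibility.
subsets = [{1, 2}, {3, 4}, {1, 3}, {4}]
costs = [1.0, 1.0, 1.0, 1.0]
universe = {1, 2, 3, 4}
near = [1, 0, 0, 1]   # covers {1, 2, 4}, misses element 3
far = [0, 0, 0, 1]    # covers {4} only
print(penalized_cost(near, subsets, costs, universe, penalty=2.0))  # → 4.0
print(penalized_cost(far, subsets, costs, universe, penalty=2.0))   # → 7.0
```

A very harsh penalty (say 100) would score both infeasible candidates
so badly that the distinction between them is effectively lost.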

We have done experiments which demonstrate how a rather "lenient"
penalty function performs better than a harsh penalty function,
although it does occasionally fail to find a viable solution in
large search spaces.

We have theorized a general strategy for solving constrained
problems which takes most of the guesswork out of selecting the
"penalty constant".

We have also been experimenting with an entirely different way of
solving constraint problems: using multiobjective evaluation and
selection with VEGA (developed by J. D. Schaffer, 1984), and another
technique involving Pareto fronts which is suggested on p. 201 of
Goldberg's book.  Results from these techniques have been mixed, and
many issues remain unexplored.
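The Pareto idea for constrained problems can be sketched generically:
treat (objective value, total constraint violation) as two criteria
to be minimized and rank candidates by dominance rather than by a
single penalized score.  The code below is an invented illustration
of that dominance filter, not the authors' implementation, and the
sample points are made up.

```python
# A generic Pareto-dominance filter over (objective, violation) pairs,
# both to be minimized.

def dominates(a, b):
    """True if a is no worse than b in every criterion and strictly
    better in at least one (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset, preserving input order."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

pts = [(3.0, 0.0), (2.0, 1.0), (4.0, 0.0), (2.0, 2.0), (1.0, 3.0)]
print(pareto_front(pts))  # → [(3.0, 0.0), (2.0, 1.0), (1.0, 3.0)]
```

Selection would then favor front members, so feasible points
((violation 0) and informative near-feasible points survive together.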

We hope to get a paper or two to the conference  reporting  these
results (Yikes!  Only 28 more days!)

                               Jon Richardson, et al.

--------------------------------

Date: Mon, 9 Jan 89 12:52:01 EST
From: booker@AIC.NRL.NAVY.MIL
Subject: Re: Benchmark Proposal (1)

   I think it is also important to have a set of standard measures
of performance and evaluation procedures to go along with each test suite.

Lashon

--------------------------------

Date: Sun 15 Jan 89 12:55:25-EST
From: DDAVIS@G.BBN.COM
Subject: Re: Benchmark Proposal (2)

I would like to submit two function optimization problems for
inclusion in the Test Suite that is being assembled through
discussions on this mailing list:


1. NODE COLORING

Parameters of the problem are: 

N -- the number of nodes in the graph; 
W -- a vector of weights of length N, assigning one weight to each node;
C -- a list of nodes connected to each node in the graph; and 
H -- the number of colors available for coloring the nodes in
the graph.

Solutions are expressed as a vector of colorings, with 0
representing no color.

An example specification of such a problem might be (in
English): Color the nodes of a graph arranged with a hub and
four nodes in a circle around the outside, so that no two
adjacent nodes have the same color, so that only two colors are
used, and so that the greatest number of nodes is colored.

Stated in the format above, N = 5, W = {1 1 1 1 1}, 
C = {{2 3 4 5} {1 3 5} {1 2 4} {1 3 5} {1 2 4}}, 
and H = 2.

One of the optimal solutions for this problem is the vector
{0 1 2 1 2}, with total colored weight = 4.  A suboptimal solution is
{1 0 2 0 2}, with total colored weight = 3.
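The evaluation implied by this specification can be sketched as
follows; the function name and the invalid-solution convention
(returning None) are assumptions, not part of the benchmark.

```python
# Score a coloring for the 5-node example: a solution is a vector of
# colors with 0 meaning uncolored; its value is the total weight of
# colored nodes, provided no two adjacent colored nodes share a color.

def colored_weight(solution, W, C):
    """Total weight of colored nodes, or None if invalid."""
    for i, color in enumerate(solution):
        if color == 0:
            continue
        for j in C[i]:               # node numbers are 1-based, as in the text
            if solution[j - 1] == color:
                return None          # adjacent nodes share a color
    return sum(w for c, w in zip(solution, W) if c != 0)

# The instance from the text: hub node 1, wheel nodes 2-5.
W = [1, 1, 1, 1, 1]
C = [[2, 3, 4, 5], [1, 3, 5], [1, 2, 4], [1, 3, 5], [1, 2, 4]]
print(colored_weight([0, 1, 2, 1, 2], W, C))  # optimal solution → 4
print(colored_weight([1, 0, 2, 0, 2], W, C))  # suboptimal solution → 3
```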

When the graph is a hub-and-wheel architecture as above with many
wheel nodes, with weight N on each wheel node and weight N+1 on the
hub node, a greedy algorithm given only one color finds highly
suboptimal solutions: only the hub node gets colored, even though an
independent set of wheel nodes is worth far more.
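The greedy trap can be demonstrated on a small assumed instance (5
wheel nodes, N = 5, so the hub weighs 6); the node names and the
greedy-by-weight rule here are illustrative, not prescribed by the
benchmark.

```python
# Hub-and-wheel with one color: greedy-by-weight grabs the hub first,
# which blocks every wheel node, scoring 6; leaving the hub uncolored
# and taking an independent pair of wheel nodes scores 10.

N = 5
weights = {"hub": N + 1, "w1": N, "w2": N, "w3": N, "w4": N, "w5": N}
adjacent = {
    "hub": {"w1", "w2", "w3", "w4", "w5"},
    "w1": {"hub", "w2", "w5"}, "w2": {"hub", "w1", "w3"},
    "w3": {"hub", "w2", "w4"}, "w4": {"hub", "w3", "w5"},
    "w5": {"hub", "w4", "w1"},
}

def greedy_one_color(weights, adjacent):
    """Color nodes in decreasing weight order with a single color."""
    colored = set()
    for node in sorted(weights, key=weights.get, reverse=True):
        if not adjacent[node] & colored:
            colored.add(node)
    return sum(weights[n] for n in colored)

print(greedy_one_color(weights, adjacent))       # → 6 (hub only)
print(sum(weights[n] for n in ("w1", "w3")))     # → 10 (better solution)
```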

I have been studying this and more randomized architectures for
this problem with an order-based representation, but there are
plenty of representations that could be used.

The approaches to this problem that I have been using have
engendered some interest on the part of operations research
people, since node coloring is a problem with many real-world
applications and it is NP-complete.



2. BIN-PACKING

Bin packing is a problem that is also NP-complete and that also
has many real-world applications.  Derek Smith and I worked on a
version of this problem at TI, and Derek wrote a paper about it
for the 1985 ICGATA.  In its two-dimensional form, the problem
is this.  One is given:

a bin with dimensions L x W,
n rectangles, each with dimensions Li x Wi, and
a vector of length n assigning a weight to each of the n rectangles.

The problem is to generate an optimal solution vector.  A
solution vector for this problem is a vector of length n, such
that each member of the vector is of the form {O X Y}, such
that:

O is the orientation of the rectangle in the packing: 0 means
the rectangle is not packed; 1 means the rectangle is upright,
and 2 means the rectangle is packed with its long side
horizontal;

X is the x-coordinate of the upper left corner of the rectangle
when placed in the bin in the orientation represented in O;

Y is the y-coordinate of the upper left corner of the rectangle
when placed in the bin in the orientation represented in O.  It
is assumed that the origin coordinate of the bin is at its upper
left corner, and that X and Y values are positive, representing
the distance of each point on the x axis and the y axis from the
origin.

A constraint on solution vectors is that no rectangle can
overlap another in the packing expressed by a solution vector.
That is, assuming all rectangles are packed in orientation 1 for
example, there are no two rectangles Ri and Rj for which these
two conditions obtain: 1. the interval Xi to (Xi + Wi) and the
interval Xj to (Xj + Wj) overlap, and 2. the interval Yi to (Yi
+ Li) and the interval Yj to (Yj + Lj) both overlap.

The evaluation of a solution vector is the sum of the weights of
all rectangles with orientation 1 or 2.
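A sketch of this evaluation follows.  The helper names, the choice of
W as the bin's horizontal extent, and the assumption that Li is the
long side of rectangle i are mine, not the problem statement's; a
solution violating the overlap constraint (or the bin boundary) is
simply scored 0 here, which is one of several plausible conventions.

```python
# Evaluate a bin-packing solution vector of (O, X, Y) genes:
# O = 0 unpacked, 1 upright (width Wi, height Li), 2 rotated with the
# long side Li horizontal; (X, Y) is the upper-left corner, measured
# from the bin's upper-left origin.

def extents(gene, dims):
    """(x0, x1, y0, y1) span occupied by a packed rectangle."""
    (o, x, y), (li, wi) = gene, dims
    w, h = (wi, li) if o == 1 else (li, wi)
    return (x, x + w, y, y + h)

def overlap(a, b):
    """True if two placed rectangles overlap in both axes."""
    return a[0] < b[1] and b[0] < a[1] and a[2] < b[3] and b[2] < a[3]

def evaluate(solution, rect_dims, rect_weights, L, W):
    """Total packed weight, or 0 if any packed pair overlaps or a
    rectangle sticks out of the L x W bin."""
    placed = []
    total = 0
    for gene, dims, weight in zip(solution, rect_dims, rect_weights):
        if gene[0] == 0:
            continue                   # unpacked: contributes nothing
        e = extents(gene, dims)
        if e[1] > W or e[3] > L or any(overlap(e, p) for p in placed):
            return 0
        placed.append(e)
        total += weight
    return total

# Two 2x1 rectangles in a 2x2 bin: side by side is valid, stacked on
# the same spot is not.
rect_dims = [(2, 1), (2, 1)]           # (Li, Wi)
rect_weights = [2, 3]
print(evaluate([(1, 0, 0), (1, 1, 0)], rect_dims, rect_weights, L=2, W=2))  # → 5
print(evaluate([(1, 0, 0), (1, 0, 0)], rect_dims, rect_weights, L=2, W=2))  # → 0
```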

This problem can be solved with a variety of representations.
Derek used an order-based representation with two different
decoders.  One could also use more literal representations.
There is a good deal of epistasis in the problem, and a good
deal of real-world import if the problem is solved well.

The expansion of this problem to more dimensions is obvious.

--------------------------------

Date: Mon, 16 Jan 89 20:53 EST
From: DMONTANA%COOPER@rcca.bbn.com
Subject: Genetic Algorithms and Neural Networks

We have written a paper that may be of interest to a number of readers
on this mailing list:

"Training Feedforward Neural Networks Using Genetic Algorithms"
David J. Montana and Lawrence Davis
BBN Systems and Technologies Corp.
70 Fawcett St.
Cambridge, MA 02138

ABSTRACT

Multilayered feedforward neural networks possess a number of properties
which make them particularly suited to complex pattern classification
problems.  However, their application to some real-world problems has
been hampered by the lack of a training algorithm which reliably finds
a nearly globally optimal set of weights in a relatively short time.
Genetic algorithms are a class of optimization procedures which are good
at exploring a large and complex space in an intelligent way to find
values close to the global optimum.  Hence, they are well suited to the
problem of training feedforward networks.  In this paper, we describe a
set of experiments performed on a set of data from a sonar image
classification problem.  These experiments both 1) illustrate the
improvements gained by using a genetic algorithm rather than backpropagation
and 2) chronicle the evolution of the performance of the genetic algorithm
as we added more and more domain-specific knowledge into it.


In addition to outperforming backpropagation on the network we were
investigating (which had nodes with sigmoidal transfer function), the
genetic algorithm has the advantage of not requiring the nodes to have
differentiable transfer functions.  In particular, it can be applied to
a network with thresholded nodes.
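The general idea of the abstract can be sketched in miniature; this
toy is emphatically not the paper's algorithm.  The network shape
(2-2-1 sigmoid units on XOR), the mutation-only operator, and all
rates are invented for illustration.  Elitism guarantees the best
loss in the population never rises across generations.

```python
# Evolve the 9 weights of a tiny 2-2-1 sigmoid network on the XOR
# table with a mutation-only, elitist GA.

import math, random

DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def sigmoid(z):
    z = max(-60.0, min(60.0, z))       # guard against exp overflow
    return 1.0 / (1.0 + math.exp(-z))

def forward(w, x):
    """2-2-1 feedforward net; w is a flat list of 9 weights and biases."""
    h1 = sigmoid(w[0] * x[0] + w[1] * x[1] + w[2])
    h2 = sigmoid(w[3] * x[0] + w[4] * x[1] + w[5])
    return sigmoid(w[6] * h1 + w[7] * h2 + w[8])

def loss(w):
    """Sum-of-squares error over the XOR table."""
    return sum((forward(w, x) - y) ** 2 for x, y in DATA)

def evolve(generations=150, pop_size=30, sigma=0.5, seed=1):
    """Mutation-only GA with elitism; returns (best weights, initial loss)."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(9)] for _ in range(pop_size)]
    start = min(loss(w) for w in pop)
    for _ in range(generations):
        pop.sort(key=loss)
        elite = pop[: pop_size // 3]   # best third survives unchanged
        pop = elite + [
            [w + rng.gauss(0, sigma) for w in rng.choice(elite)]
            for _ in range(pop_size - len(elite))
        ]
    return min(pop, key=loss), start

best, start = evolve()
print(loss(best) <= start)             # elitism: best loss never rises
```

Note that nothing here differentiates the transfer function, which is
the point made above: the same loop works unchanged with hard
thresholds in place of sigmoids.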

Requests for copies of the paper should be addressed to
dmontana@bbn.com.

Dave Montana

--------------------------------

End of Genetic Algorithms Digest
********************************

