Newsgroups: comp.ai.genetic
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!satisfied.apocalypse.org!news.mathworks.com!uunet!hearst.acc.Virginia.EDU!murdoch!fulton.seas.Virginia.EDU!jpi6g
From: jpi6g@fulton.seas.Virginia.EDU (James P. Ignizio)
Subject: Using GA to Optimize Neural Nets
X-Nntp-Posting-Host: fulton.seas.virginia.edu
Message-ID: <D69KG1.LKD@murdoch.acc.Virginia.EDU>
Sender: usenet@murdoch.acc.Virginia.EDU
Organization: University of Virginia
Date: Thu, 30 Mar 1995 17:26:25 GMT
Lines: 69

In response to the query posted by Matthew Lybanon:

While a great deal has been written about the use of genetic
algorithms to train neural networks, something rather
important seems to get lost in all these discussions --- or
at least this is my perception of the matter. This is the fact
that genetic algorithms are (despite the somewhat loose and
reckless use of the word *optimization* in the GA literature)
nothing more than heuristic methods. As such, their use in
*solving* the nonlinear search problems required to
determine network training weights is but one alternative to
the employment of any one of a host of other (heuristic)
nonlinear search methods (e.g., the method of Hooke and
Jeeves, from the early 1960s, would be but one alternative).
And I have seen little indication of any attempt to make
valid comparisons between the performance of genetic
algorithms and these other, albeit less sexy, alternatives.
However, in the limited amount of time we (my students and
I) have devoted to this topic, we have observed that at least
some of these older, less well known heuristic methods
would certainly appear to do as well as or better than genetic
algorithms --- at least with regard to finding training weights.
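For readers unfamiliar with the method, a bare-bones sketch of
Hooke and Jeeves' pattern search follows (in Python; the quadratic
loss below is merely an illustrative stand-in for a network's
training-error surface, and all names are mine, not from any
particular implementation):

```python
def explore(f, x, fx, step):
    """Exploratory moves: probe +/- step along each coordinate, keep improvements."""
    x = list(x)
    for i in range(len(x)):
        for delta in (step, -step):
            trial = list(x)
            trial[i] += delta
            ft = f(trial)
            if ft < fx:
                x, fx = trial, ft
                break
    return x, fx

def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6):
    """Minimize f by alternating exploratory and pattern moves."""
    base, fbase = list(x0), f(list(x0))
    while step > tol:
        x, fx = explore(f, base, fbase, step)
        if fx < fbase:
            # Pattern move: extrapolate along the direction that just succeeded.
            while fx < fbase:
                prev = base
                base, fbase = x, fx
                pattern = [2 * b - p for b, p in zip(base, prev)]
                x, fx = explore(f, pattern, f(pattern), step)
        else:
            step *= shrink          # no improvement: refine the mesh
    return base, fbase

# Stand-in "training error" surface with minimum at (3, -1).
loss = lambda p: (p[0] - 3.0) ** 2 + (p[1] + 1.0) ** 2
x, fx = hooke_jeeves(loss, [0.0, 0.0])
print([round(v, 4) for v in x])     # [3.0, -1.0]
```

In a real weight-search application, f would simply be the network's
error as a function of its weight vector --- no derivatives needed,
which is part of the method's appeal.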

I might also mention that it is quite possible (and relatively
easy) to transform the nonlinear models used to search for
training weights into equivalent linear models --- which may
then be solved by exact, linear search tools (i.e., linear
programming) for optimal weights. This has been
accomplished by Roy (ORSA Journal on Computing, vol. 3,
no. 1, 1991) and by Mangasarian (Operations Research, vol.
13, 1965, and IEEE Transactions on Information Theory,
IT-14, 1968). The work of Won Baek and myself describes a
more recent version of such a concept and appears in:
Ignizio and Baek, "An Alternative Neural Network
Architecture and Training Algorithm," Journal of Artificial
Neural Networks, vol. 1, no. 2, 1994, and in: Ignizio and
Cavalier, LINEAR PROGRAMMING, Prentice-Hall, 1994.
Since the publication of this paper, the approach has been
further refined and evaluated, and it would certainly seem to
possess some significant advantages over heuristic methods
(including genetic algorithms).
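The flavor of such a linear-programming formulation can be
illustrated by a small sketch: find a separating hyperplane for a
two-class training set by minimizing total constraint slack. The
toy data and variable layout here are mine for illustration, not
taken from the papers cited above:

```python
# Linear separation posed as a linear program: find weights w and
# bias b with y_i * (w . x_i + b) >= 1 - s_i, slack s_i >= 0,
# minimizing sum(s_i). Zero total slack means the classes separate.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 2.0], [3.0, 3.0], [2.5, 3.5],        # class +1
              [-2.0, -1.0], [-3.0, -2.0], [-1.5, -3.0]])  # class -1
y = np.array([1.0, 1.0, 1.0, -1.0, -1.0, -1.0])

n, d = X.shape
# decision vector z = [w_1, ..., w_d, b, s_1, ..., s_n]
c = np.concatenate([np.zeros(d + 1), np.ones(n)])   # minimize total slack
A_ub = np.zeros((n, d + 1 + n))
for i in range(n):
    A_ub[i, :d] = -y[i] * X[i]       # -y_i * (w . x_i)
    A_ub[i, d] = -y[i]               # -y_i * b
    A_ub[i, d + 1 + i] = -1.0        # -s_i
b_ub = -np.ones(n)                   # ... <= -1
bounds = [(None, None)] * (d + 1) + [(0, None)] * n
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
w, b = res.x[:d], res.x[d]
print("total slack:", res.fun)       # zero slack here: the data separate
```

A single such hyperplane corresponds to one threshold unit; the
cited work extends the idea to the weights of a full network.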

More recently, we have found that genetic algorithms would
seem to be better suited to the simultaneous problem of
network architecture determination AND training weight
determination. This is probably because the primary feature
of this aggregate problem is combinatorial search. The
results achieved thus far in this effort (funded, in part, by an
NSF grant) appear in the paper: Ignizio and Soltys, "A
Tailored Genetic Algorithm for the Simultaneous Design
and Training of Neural Networks," to be presented in
September at GALESIA '95 in Sheffield, England. Thus
far, we have been extremely pleased with this somewhat
unorthodox employment of genetic algorithms. However, it
is still not clear that the linear programming based
approaches, or the older nonlinear search heuristics, might
not be even better candidates. Thus, until someone
actually performs a thorough, scientific comparison of
genetic algorithms with these alternatives, I personally will
reserve any final judgment as to which might be better ---
and/or more appropriate.
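To make the "combinatorial search" point concrete, here is a
heavily simplified sketch (mine --- not the tailored algorithm of
the GALESIA paper) of a GA whose chromosome carries both
architecture bits (which hidden units exist) and real-valued
weights, so both are evolved at once:

```python
import math
import random

random.seed(0)
H = 4                                       # maximum number of hidden units
DATA = [(x / 5.0, (x / 5.0) ** 2) for x in range(-5, 6)]  # toy target: y = x^2

# chromosome: H gate bits (unit on/off) + H input weights + H biases
#             + H output weights + 1 output bias
def random_ch():
    return ([random.randint(0, 1) for _ in range(H)] +
            [random.uniform(-1.0, 1.0) for _ in range(3 * H + 1)])

def net(ch, x):
    gates, w1 = ch[:H], ch[H:2 * H]
    b1, w2, b2 = ch[2 * H:3 * H], ch[3 * H:4 * H], ch[4 * H]
    return sum(gates[i] * w2[i] * math.tanh(w1[i] * x + b1[i])
               for i in range(H)) + b2

def fitness(ch):                            # lower is better
    mse = sum((net(ch, x) - t) ** 2 for x, t in DATA) / len(DATA)
    return mse + 0.01 * sum(ch[:H])         # parsimony pressure on architecture

def mutate(ch):
    ch = ch[:]
    for i in range(H):                      # flip architecture bits
        if random.random() < 0.05:
            ch[i] = 1 - ch[i]
    for i in range(H, len(ch)):             # perturb weights
        if random.random() < 0.2:
            ch[i] += random.gauss(0.0, 0.3)
    return ch

def crossover(a, b):
    p = random.randrange(1, len(a))
    return a[:p] + b[p:]

pop = [random_ch() for _ in range(40)]
init_best = min(fitness(c) for c in pop)
for gen in range(60):
    pop.sort(key=fitness)
    new = pop[:5]                           # elitism: keep the 5 best intact
    while len(new) < 40:
        a = min(random.sample(pop, 3), key=fitness)   # tournament selection
        b = min(random.sample(pop, 3), key=fitness)
        new.append(mutate(crossover(a, b)))
    pop = new
best = min(pop, key=fitness)
best_f = fitness(best)
print("units used:", sum(best[:H]), " fitness:", round(best_f, 4))
```

The parsimony term in the fitness is what lets the GA trade
network size against training error --- the discrete gate bits are
exactly the combinatorial component that derivative-based weight
searches have no natural way to handle.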

Jim Ignizio

--

James P. Ignizio			Phone: 804-924-5394
Professor and Chair			FAX: 804-982-2972
Department of Systems Engineering	Email: ignizio@virginia.edu
