Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!goldenapple.srv.cs.cmu.edu!bb3.andrew.cmu.edu!nntp.sei.cmu.edu!news.cis.ohio-state.edu!news.maxwell.syr.edu!cpk-news-hub1.bbnplanet.com!news.bbnplanet.com!news-peer.sprintlink.net!news-sea-19.sprintlink.net!news-in-west.sprintlink.net!news.sprintlink.net!Sprint!199.72.1.20!interpath!news.interpath.net!news.interpath.net!sas!newshost.unx.sas.com!saswss
From: saswss@hotellng.unx.sas.com (Warren Sarle)
Subject: Re: I am looking for information about ANNs for use in a research paper...
Originator: saswss@hotellng.unx.sas.com
Sender: news@unx.sas.com (Noter of Newsworthy Events)
Message-ID: <ECrM6D.9Ix@unx.sas.com>
Date: Thu, 3 Jul 1997 22:51:49 GMT
X-Nntp-Posting-Host: hotellng.unx.sas.com
References: <33a8352c.0@news.inreach.com> <5p35h3$fa2$1@news8.gte.net>
Organization: SAS Institute Inc.
Lines: 60


In article <5p35h3$fa2$1@news8.gte.net>, "Allan R. Baker" <arbaker@gte.net> writes:
|> Check out
|> Caudill, Maureen, and Charles Butler, "Naturally Intelligent Systems." 
|> Cambridge, MA: The MIT Press/Bradford Books, 1990.
|> 
|> A wonderfully detailed layman's book.  It is written in plain English and
|> is very non technical but gives you an understanding of the underlying
|> principles and the history of development and accomplishments of many types
|> of natural intelligence (neural networks).

Actually, it's a remarkably bad book, although it has narrowly
escaped my worst-book list in the FAQ.

The authors try to translate mathematical formulas into English. The
results are likely to disturb people who appreciate either mathematics
or English. Have the authors never heard that "a picture is worth a
thousand words"?  What few diagrams they have (such as the one on p. 74)
tend to be confusing.  Their jargon is peculiar even by NN standards;
for example, they refer to target values as "mentor inputs" (p. 66).
The authors do not understand elementary properties of error functions
and optimization algorithms. For example, in their discussion of the
delta rule, the authors seem oblivious to the differences between batch
and on-line training, and they attribute magical properties to the
algorithm (p. 71):

   [The on-line delta] rule always takes the most efficient route from
   the current position of the weight vector to the "ideal" position,
   based on the current input pattern. The delta rule not only
   minimizes the mean squared error, it does so in the most efficient
   fashion possible--quite an achievement for such a simple rule.
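
For anyone who wants to see the distinction the authors miss, here is a
minimal sketch of the delta rule for a single linear unit, in both on-line
and batch form (my own construction in Python; the names and data are mine,
not the book's):

```python
def online_delta(patterns, w, lr=0.1):
    # On-line: update after every (input, target) pair.  The trajectory
    # depends on the order the patterns are presented in; nothing about
    # it is "the most efficient route" to the least-squares minimum.
    for x, t in patterns:
        out = sum(wi * xi for wi, xi in zip(w, x))
        w = [wi + lr * (t - out) * xi for wi, xi in zip(w, x)]
    return w

def batch_delta(patterns, w, lr=0.1):
    # Batch: accumulate the error over the whole training set, then take
    # one step along the true gradient of the summed squared error.
    grad = [0.0] * len(w)
    for x, t in patterns:
        out = sum(wi * xi for wi, xi in zip(w, x))
        grad = [g + (t - out) * xi for g, xi in zip(grad, x)]
    return [wi + lr * g for wi, g in zip(w, grad)]

# Toy data: targets generated by the weight vector (2, -1).
patterns = [([1.0, 0.0], 2.0), ([0.0, 1.0], -1.0), ([1.0, 1.0], 1.0)]
w_online = online_delta(patterns, [0.0, 0.0])
w_batch = batch_delta(patterns, [0.0, 0.0])
# After one pass the two rules are already at different points in
# weight space -- they are not the same algorithm.
```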

While the authors realize that backpropagation networks can suffer
from local minima, they mistakenly think that counterpropagation has
some kind of global optimization ability (p. 202):

   Unlike the backpropagation network, a counterpropagation network
   cannot be fooled into finding a local minimum solution. This means
   that the network is guaranteed to find the correct response (or the
   nearest stored response) to an input, no matter what.

But even though they acknowledge the problem of local minima, the
authors are ignorant of the importance of initial weight values (p. 186):

   To teach our imaginary network something using backpropagation, we
   must start by setting all the adaptive weights on all the neurodes
   in it to random values. It won't matter what those values are, as
   long as they are not all the same and not equal to 1.
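
In fact, "all the same" is precisely the one thing you must avoid, and for a
reason the authors could have worked out on a napkin: if every weight starts
at the same value, the hidden units compute identical outputs, receive
identical error signals, and therefore remain identical copies of each other
after every update. Here is a sketch demonstrating the symmetry problem on a
toy 2-2-1 sigmoid network (my own construction, not the book's; the constant
0.5 is arbitrary and deliberately not 1):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def backprop_step(w_hid, w_out, x, t, lr=0.5):
    # Forward pass: 2 inputs -> 2 sigmoid hidden units -> 1 sigmoid output.
    h = [sigmoid(sum(wij * xi for wij, xi in zip(row, x))) for row in w_hid]
    y = sigmoid(sum(wo * hi for wo, hi in zip(w_out, h)))
    # Backward pass for squared-error loss.
    d_out = (y - t) * y * (1 - y)
    d_hid = [d_out * wo * hi * (1 - hi) for wo, hi in zip(w_out, h)]
    w_out = [wo - lr * d_out * hi for wo, hi in zip(w_out, h)]
    w_hid = [[wij - lr * dj * xi for wij, xi in zip(row, x)]
             for row, dj in zip(w_hid, d_hid)]
    return w_hid, w_out

w_hid = [[0.5, 0.5], [0.5, 0.5]]   # every weight identical (and not 1)
w_out = [0.5, 0.5]
for _ in range(100):
    w_hid, w_out = backprop_step(w_hid, w_out, [1.0, 0.0], 1.0)
# After 100 updates the two hidden units are still exact copies of each
# other: the hidden layer has collapsed to a single effective unit.
assert w_hid[0] == w_hid[1] and w_out[0] == w_out[1]
```

Random initialization matters precisely because it breaks this symmetry,
and the *scale* of the random values affects convergence too.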

Like most introductory books, this one neglects the difficulties of
getting good generalization--the authors simply declare (p. 8) that "A
neural network is able to generalize"!
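
The point is easy to demonstrate: any model flexible enough to fit its
training data exactly can still be arbitrarily wrong between the training
points. A sketch, using exact polynomial interpolation as a stand-in for an
overtrained network (my example, not the book's):

```python
# Training data: the true function is y = x, corrupted by alternating
# "noise" of +/- 0.5.
xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [x + n for x, n in zip(xs, [0.5, -0.5, 0.5, -0.5, 0.5, -0.5])]

def interpolate(x):
    # Lagrange interpolation: the unique degree-5 polynomial through the
    # six training points -- a perfect "memorizer" of the training set.
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

train_err = sum((interpolate(x) - y) ** 2 for x, y in zip(xs, ys))
test_xs = [0.5, 1.5, 2.5, 3.5, 4.5]          # points between the data
test_err = sum((interpolate(x) - x) ** 2 for x in test_xs)
# train_err is (numerically) zero; test_err is not, because the
# interpolant oscillates between the training points instead of
# tracking the underlying line.
```

Zero training error, bad generalization: an "ability to generalize" is
something you have to work for, not something you get for free.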


-- 

Warren S. Sarle       SAS Institute Inc.   The opinions expressed here
saswss@unx.sas.com    SAS Campus Drive     are mine and not necessarily
(919) 677-8000        Cary, NC 27513, USA  those of SAS Institute.
* Do not send me unsolicited commercial, political, or religious email *
