Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news4.ner.bbnplanet.net!cam-news-hub1.bbnplanet.com!uunet!in3.uu.net!news.nevada.edu!news.sprintlink.net!news-ana-7.sprintlink.net!news.sprintlink.net!news-ana-24.sprintlink.net!interpath!news.interpath.net!sas!newshost.unx.sas.com!saswss
From: saswss@hotellng.unx.sas.com (Warren Sarle)
Subject: Re: How do neuralnets work?
Originator: saswss@hotellng.unx.sas.com
Sender: news@unx.sas.com (Noter of Newsworthy Events)
Message-ID: <Dw7956.IIo@unx.sas.com>
Date: Thu, 15 Aug 1996 22:00:42 GMT
X-Nntp-Posting-Host: hotellng.unx.sas.com
References: <4u6svn$s25@news.tmx.com.au> <4uruta$f2j@newsbf02.news.aol.com>
Organization: SAS Institute Inc.
Lines: 61


In article <4uruta$f2j@newsbf02.news.aol.com>, jhurst007@aol.com (JHurst007) writes:
|> Neural Computing, Theory and Practice.  Phillip D. Wasserman  1989  Van
|> Nostrand

Comments from the FAQ, from unidentified c.a.n-n readers: 

   "Wasserman flatly enumerates some common architectures from an
   engineer's perspective ('how it works') without ever addressing
   the underlying fundamentals ('why it works') - important basic
   concepts such as clustering, principal components or gradient
   descent are not treated.  It's also full of errors, and unhelpful
   diagrams drawn with what appears to be PCB layout software
   from the '70s. For anyone who wants to do active research in the
   field I consider it quite inadequate";

   "Okay, but too shallow"; 

   "Quite easy to understand"; 

   "The best bedtime reading for Neural Networks.  I have given this
   book to numerous colleagues who want to know NN basics, but who
   never plan to implement anything.  An excellent book to give your
   manager."

What could be more damning than that last sentence?! :-)


|> Naturally Intelligent Systems.  Maureen Caudill and Charles Butler  1990 
|> MIT

My comments from the FAQ:  The authors try to translate mathematical
formulas into English.  The results are likely to disturb people who
appreciate either mathematics or English. Have the authors never heard
that "a picture is worth a thousand words"?  What few diagrams they
have (such as the one on p. 74) tend to be confusing.  Their jargon is
peculiar even by NN standards. As is evident from claims such as the
following (p. 202):

   Unlike the backpropagation network, a counterpropagation network
   cannot be fooled into finding a local minimum solution. This
   means that the network is guaranteed to find the correct response
   ... to an input, no matter what.

the authors do not understand elementary properties of error functions
and optimization algorithms. Like most introductory books, this one
neglects the difficulties of getting good generalization--the authors
simply declare (p. 8) that "A neural network is able to generalize"!
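
To make the local-minimum point concrete: any gradient-based learning
rule can stall in a local minimum of a non-convex error function, and
which minimum you get depends on where you start. This is a toy sketch
(not from either book) using a made-up one-variable "error" function
chosen purely for illustration:

```python
# Gradient descent on a non-convex 1-D "error" function:
#   E(w) = w**4 - 3*w**2 + w
# This E has a local minimum near w = 1.13 and a lower (global)
# minimum near w = -1.30.  Plain gradient descent converges to
# whichever basin the starting point lies in.

def grad(w):
    # dE/dw for E(w) = w**4 - 3*w**2 + w
    return 4 * w**3 - 6 * w + 1

def descend(w, rate=0.01, steps=10000):
    for _ in range(steps):
        w -= rate * grad(w)
    return w

print(descend(2.0))   # settles in the local minimum near w = 1.13
print(descend(-2.0))  # settles in the global minimum near w = -1.30
```

Started from w = 2.0 the descent gets stuck at the higher minimum and
never sees the better solution at w = -1.30; nothing in the algorithm
"guarantees the correct response."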


|> Neural Network Design,  Hagan, Demuth & Beale.  1996  PWS Publishing Co.
|>   Excellent low-down on the mathematics and real design methods.

I haven't seen this one. I wonder what these "real design methods" are.
Anybody have any comments?

-- 

Warren S. Sarle       SAS Institute Inc.   The opinions expressed here
saswss@unx.sas.com    SAS Campus Drive     are mine and not necessarily
(919) 677-8000        Cary, NC 27513, USA  those of SAS Institute.
