Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!news.mathworks.com!udel!gatech!concert!sas!mozart.unx.sas.com!saswss
From: saswss@hotellng.unx.sas.com (Warren Sarle)
Subject: Re: Comments need for "Neural Networks in C++" by Adam Blum
Originator: saswss@hotellng.unx.sas.com
Sender: news@unx.sas.com (Noter of Newsworthy Events)
Message-ID: <Cz3Ar7.CzB@unx.sas.com>
Date: Fri, 11 Nov 1994 06:16:19 GMT
References:  <39ubal$6i0@news.iastate.edu>
Nntp-Posting-Host: hotellng.unx.sas.com
Organization: SAS Institute Inc.
Lines: 74


In article <39ubal$6i0@news.iastate.edu>, chu_c@iastate.edu (Chao-Hsien Chu) writes:
|>
|> I have a paper copy of the book "Neural Networks in C++"
|> by A. Blum. I recalled that someone comment the book and the
|> program before, but I did not catch on it. Can anyone comment
|> the usefulness of the source code again? (half of book is the
|> source)?

Subject: Re: A good book about NN in C++
Date: Sat, 3 Sep 94 18:07:30 EDT

Timothy Masters has a new book out:

   Masters, T. (1994), _Signal and Image Processing with Neural
   Networks: A C++ Sourcebook_, Wiley. $44.95 including diskette.

This has lots of neat stuff that you won't find in most NN books,
with emphasis on the complex domain. Masters's previous NN book
is the best source of practical advice on NNs that I know of:

   Masters, T. (1993), _Practical Neural Network Recipes in C++_,
   Academic Press.

By publishing Masters's new book, Wiley has redeemed itself for
two atrocious publishing mistakes:

   Blum, Adam (1992), _Neural Networks in C++_, Wiley.

   Welstead, Stephen T. (1994), _Neural Network and Fuzzy Logic
   Applications in C/C++_, Wiley.

Both Blum and Welstead contribute to the dangerous myth that any
idiot can use a neural net by dumping in whatever data are handy
and letting it train for a few days. They both have little or no
discussion of generalization, validation, and overfitting. Neither
provides any valid advice on choosing the number of hidden nodes.
If you have ever wondered where the stupid "rules of thumb" that
pop up so frequently come from, here is a source for one of them:

   "A rule of thumb is for the size of this [hidden] layer to be
   somewhere between the input layer size ... and the output layer
   size ..."  Blum, p. 60.

Blum offers some profound advice on choosing inputs:

   "The next step is to pick as many input factors as possible that
   might be related to [the target]."

Blum also shows a deep understanding of statistics:

   "A statistical model is simply a more indirect way of learning
   correlations. With a neural net approach, we model the problem
   directly." p. 8.

(BTW, I had concluded that Blum's book was worthless long before I
found _that_ gem.)

Blum at least mentions some important issues, however simplistic his
advice may be. Welstead just ignores them. What Welstead gives you is
code--vast amounts of code. I have no idea how anyone could write _that_
much code for a simple feedforward NN.  Welstead's approach to
validation, in his chapter on financial forecasting, is to reserve
_two_ cases for the validation set!
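
The sensible alternative that both books skip--reserve a real
validation set and let _it_ pick the number of hidden nodes--fits in
a few lines. Here is a toy sketch of the idea (mine, not from any of
these books; to keep it short the "training" is cheated: hidden
weights are random and only the output layer is fit, by least squares,
but the model-selection logic is the point):

```python
# Toy illustration: choose the hidden-layer size by validation error,
# not by a rule of thumb. Hidden weights are random; only the output
# layer is fit (least squares), so this is a caricature of training.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-D regression problem: y = sin(2*pi*x) + noise.
x = rng.uniform(0.0, 1.0, (200, 1))
y = np.sin(2.0 * np.pi * x) + 0.1 * rng.standard_normal((200, 1))

# Reserve a third of the cases for validation -- not two!
x_tr, y_tr = x[:130], y[:130]
x_va, y_va = x[130:], y[130:]

def fit(x, y, n_hidden):
    """One tanh hidden layer: random hidden weights, least-squares output."""
    w = rng.standard_normal((1, n_hidden)) * 6.0
    b = rng.uniform(-6.0, 6.0, n_hidden)
    design = np.hstack([np.tanh(x @ w + b), np.ones((len(x), 1))])
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    return w, b, beta

def mse(net, x, y):
    """Mean squared error of a fitted net on (x, y)."""
    w, b, beta = net
    design = np.hstack([np.tanh(x @ w + b), np.ones((len(x), 1))])
    return float(np.mean((design @ beta - y) ** 2))

# Try several sizes; keep whichever generalizes best to held-out data.
scores = {h: mse(fit(x_tr, y_tr, h), x_va, y_va) for h in (1, 2, 4, 8, 16, 32)}
best = min(scores, key=scores.get)
print("validation MSE by hidden size:", scores)
print("chosen hidden size:", best)
```

The chosen size depends on the data, not on the sizes of the input
and output layers--which is exactly why Blum's rule of thumb is
worthless.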

There is another C++ NN book by Rao & Rao, but it has very little
material on feedforward NNs, which account for the majority of
practical applications.

-- 

Warren S. Sarle       SAS Institute Inc.   The opinions expressed here
saswss@unx.sas.com    SAS Campus Drive     are mine and not necessarily
(919) 677-8000        Cary, NC 27513, USA  those of SAS Institute.
