Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!rochester!udel!gatech2!news.mathworks.com!news.kei.com!world!mv!barney.gvi.net!redstone.interpath.net!sas!mozart.unx.sas.com!saswss
From: saswss@hotellng.unx.sas.com (Warren Sarle)
Subject: Re: Programming a Neural Net - knetv3.c [1/2]
Originator: saswss@hotellng.unx.sas.com
Sender: news@unx.sas.com (Noter of Newsworthy Events)
Message-ID: <DJ6DDL.3GL@unx.sas.com>
Date: Wed, 6 Dec 1995 17:17:45 GMT
X-Nntp-Posting-Host: hotellng.unx.sas.com
References: <1995Nov27.042430.21294@il.us.swissbank.com> <49e7m7$3e8@moroni.promedia.net> <F.Slager-0512950119190001@belet01.let.ruu.nl>
Organization: SAS Institute Inc.
Lines: 102


In article <F.Slager-0512950119190001@belet01.let.ruu.nl>, F.Slager@pobox.ruu.nl (Els van der Goot) writes:
|> In article <49e7m7$3e8@moroni.promedia.net>, ph_kosel@cwo.com (Peter H.
|> Kosel) wrote:
|>
|> > >I need to look at some examples of actual neural net code written in
|> > >something like C/C++. Is there some public domain code out there?
|>
|> Try the book written by Stephen T. Welstead: _Neural Network and Fuzzy
|> Logic Applications in C/C++_, 1994, John Wiley & Sons Inc.

Welstead's book is the worst I have ever seen on neural nets. Here is
a recycled post from last year and an ironic follow-up from John Lazzaro:

Subject: Re: A good book about NN in C++
Date: Sat, 3 Sep 94 18:07:30 EDT

Timothy Masters has a new book out:

   Masters, T. (1994), _Signal and Image Processing with Neural
   Networks: A C++ Sourcebook_, Wiley. $44.95 including diskette.

This has lots of neat stuff that you won't find in most NN books,
with emphasis on the complex domain. Masters's previous NN book
is the best source of practical advice on NNs that I know of:

   Masters, T. (1993), _Practical Neural Network Recipes in C++_,
   Academic Press.

By publishing Masters's new book, Wiley has redeemed itself for
two atrocious publishing mistakes:

   Blum, Adam (1992), _Neural Networks in C++_, Wiley.

   Welstead, Stephen T. (1994), _Neural Network and Fuzzy Logic
   Applications in C/C++_, Wiley.

Both Blum and Welstead contribute to the dangerous myth that any
idiot can use a neural net by dumping in whatever data are handy
and letting it train for a few days. They both have little or no
discussion of generalization, validation, and overfitting. Neither
provides any valid advice on choosing the number of hidden nodes.
If you have ever wondered where these stupid "rules of thumb" that
pop up frequently come from, here's a source for one of them:

   "A rule of thumb is for the size of this [hidden] layer to be
   somewhere between the input layer size ... and the output layer
   size ..."  Blum, p. 60.

Blum offers some profound advice on choosing inputs:

   "The next step is to pick as many input factors as possible that
   might be related to [the target]."

Blum also shows a deep understanding of statistics:

   "A statistical model is simply a more indirect way of learning
   correlations. With a neural net approach, we model the problem
   directly." p. 8.

(BTW, I had concluded that Blum's book was worthless long before I
found _that_ gem.)

Blum at least mentions some important issues, however simplistic his
advice may be. Welstead just ignores them. What Welstead gives you is
code--vast amounts of code. I have no idea how anyone could write _that_
much code for a simple feedforward NN.  Welstead's approach to
validation, in his chapter on financial forecasting, is to reserve
_two_ cases for the validation set!

There is another C++ NN book by Rao & Rao, but it has very little
material on feedforward NNs, which account for the majority of
practical applications.

My comments apply only to the _text_ of the above books. I have not
examined or attempted to compile the code.



Date: Sat, 9 Sep 1995 10:31:35 -0700
From: John Lazzaro <lazzaro@cs.berkeley.edu>

> If you have ever wondered where these stupid "rules of thumb" that
> pop up frequently come from, here's a source for one of them:
> 
>    "A rule of thumb is for the size of this [hidden] layer to be
>    somewhere between the input layer size ... and the output layer
>    size ..."  Blum, p. 60.

Last month I reviewed a paper that cited this rule of thumb --
and referenced this book!

-- 

Warren S. Sarle       SAS Institute Inc.   The opinions expressed here
saswss@unx.sas.com    SAS Campus Drive     are mine and not necessarily
(919) 677-8000        Cary, NC 27513, USA  those of SAS Institute.
