Message-ID: <002302Z24081996@anon.penet.fi>
Path: cantaloupe.srv.cs.cmu.edu!bb3.andrew.cmu.edu!newsfeed.pitt.edu!newsflash.concordia.ca!sunqbc.risq.net!calvin.risq.qc.ca!news.mcgill.ca!mcrcim.mcgill.edu!bloom-beacon.mit.edu!news-res.gsl.net!news.gsl.net!swrinde!howland.erols.net!cam-news-hub1.bbnplanet.com!news.mathworks.com!newsfeed.internetmci.com!in2.uu.net!EU.net!news.eunet.fi!anon.penet.fi
Newsgroups: comp.ai.neural-nets
From: an218829@anon.penet.fi
X-Anonymously-To: comp.ai.neural-nets
Organization: Anonymous forwarding service
Reply-To: an218829@anon.penet.fi
Date: Sat, 24 Aug 1996 00:14:12 UTC
Subject: NN FAQ According to Warren Sarle & SAS Institute
Lines: 50


In article <Dw7956.IIo@unx.sas.com>,
Warren Sarle <saswss@hotellng.unx.sas.com> wrote:
>
>Comments from the FAQ, from unidentified c.a.n-n readers:
>
>
>the authors do not understand elementary properties of error functions
>and optimization algorithms. Like most introductory books, this one
>neglects the difficulties of getting good generalization--the authors
>simply declare (p. 8) that "A neural network is able to generalize"!

[other typical Warren Sarle criticisms omitted]
 
Warren seems to delight in criticizing NN books, which has its place; I 
just wonder whether that place is the NN FAQ. Maybe it should be called
"The NN FAQ According to Warren Sarle & SAS Institute".

I suppose I am somewhat disturbed that the NN FAQ is now in the hands of a
biased commercial entity, but after all, Warren doesn't get paid for his
contributions to the NN group... or does he?
 
And Warren makes some rather poor comments on occasion too, not that I keep
track. But I do recall more than a few posts declaring K-means to be a
superior clustering algorithm, based on the paper:

   Balakrishnan, P.V., Cooper, M.C., Jacob, V.S., and Lewis, P.A. (1994)
   "A study of the classification capabilities of neural networks using
   unsupervised learning: A comparison with k-means clustering",
   Psychometrika, 59, 509-525.
 
which used a validity metric based on *class recovery* of the *iris data*!

K-means may well be superior, but its ability to recover the original class
assignments of the iris data is not evidence of that.
(Oops, have I stooped?)
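For the record, the kind of evaluation I'm objecting to fits in a few lines.
This is my own sketch, not anything from the Balakrishnan et al. paper; it
assumes scikit-learn's KMeans, load_iris, and adjusted_rand_score:

```python
# Sketch: scoring k-means by "class recovery" on the iris data,
# i.e. agreement between cluster assignments and known species labels.
# (Illustration only; scikit-learn APIs assumed.)
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.metrics import adjusted_rand_score

X, y = load_iris(return_X_y=True)

# Cluster into k=3 groups, conveniently matching the number of species.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# "Class recovery": how well the clusters line up with the true labels.
# A high score here only says the clusters match the labels on this one
# dataset -- it is not evidence of general superiority as a clusterer.
ari = adjusted_rand_score(y, labels)
print(f"adjusted Rand index vs. species labels: {ari:.2f}")
```

A high score on one small, well-behaved, three-class dataset tells you next
to nothing about how the method fares on data whose structure you don't
already know.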

My point?:

   The NN FAQ should be a consensus of the comp.ai.nn community. When it is
   the consensus of the comp.ai.nn community that a book sucks, then and
   only then should it appear in the FAQ.

Respectfully,
moi
--****ATTENTION****--****ATTENTION****--****ATTENTION****--***ATTENTION***
Your e-mail reply to this message WILL be *automatically* ANONYMIZED.
Please, report inappropriate use to                abuse@anon.penet.fi
For information (incl. non-anon reply) write to    help@anon.penet.fi
If you have any problems, address them to          admin@anon.penet.fi
