Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!bb3.andrew.cmu.edu!newsfeed.pitt.edu!gatech!news.mathworks.com!newsfeed.internetmci.com!howland.erols.net!news.sprintlink.net!news-stk-200.sprintlink.net!news.sprintlink.net!news-stk-11.sprintlink.net!interpath!news.interpath.net!sas!newshost.unx.sas.com!saswss
From: saswss@hotellng.unx.sas.com (Warren Sarle)
Subject: Re: How do neuralnets work?
Originator: saswss@hotellng.unx.sas.com
Sender: news@unx.sas.com (Noter of Newsworthy Events)
Message-ID: <DwIAKA.8A6@unx.sas.com>
Date: Wed, 21 Aug 1996 21:04:58 GMT
X-Nntp-Posting-Host: hotellng.unx.sas.com
References: <4u6svn$s25@news.tmx.com.au> <4uruta$f2j@newsbf02.news.aol.com> <32194D5C.6DE3@dial.pipex.com> <4vdqbi$9j3@sjx-ixn5.ix.netcom.com>
Organization: SAS Institute Inc.
Lines: 25


In article <4vdqbi$9j3@sjx-ixn5.ix.netcom.com>, jdadson@ix.netcom.com (Jive Dadson) writes:
|> 
|> I have yet to see what I consider to be a good easy read introduction
|> to neural nets. Easy read, yes. Good, yes. But alas not both at once.

True. I think those goals are not entirely compatible.

|> ...
|> So, I'll venture a definition: A (single) neural net is a non-linear,
|> predictive, maximum penalized likelihood estimator, based on a sample
|> representative of the population, and the assumption of exponential-
|> family variance that is uniform across the population.

That's unnecessarily restrictive, since it excludes some popular NN
training approaches like early stopping (see the FAQ), as well as some
unpopular but potentially useful models like error functions based on
normal mixtures (see Bishop's book, listed in part 4 of the FAQ).
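To make the early-stopping point concrete, here is a minimal sketch (not
from the original post; the network size, learning rate, and patience
threshold are all arbitrary choices for illustration). A tiny
one-hidden-layer net is trained by gradient descent on the training
error alone, with no penalty term, and training halts when error on a
held-out validation set stops improving -- so the fitted weights are not
the maximizer of any fixed penalized likelihood:

```python
# Hedged sketch of early stopping on a tiny tanh network (hypothetical
# settings throughout: 10 hidden units, lr=0.05, patience=20).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data, split into training and validation sets.
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X) + 0.1 * rng.standard_normal(X.shape)
X_tr, y_tr, X_va, y_va = X[:150], y[:150], X[150:], y[150:]

# One hidden layer of tanh units, linear output unit.
H = 10
W1 = 0.5 * rng.standard_normal((1, H)); b1 = np.zeros(H)
W2 = 0.5 * rng.standard_normal((H, 1)); b2 = np.zeros(1)

def forward(X):
    Z = np.tanh(X @ W1 + b1)      # hidden activations
    return Z, Z @ W2 + b2          # hidden layer, network output

def mse(pred, y):
    return float(np.mean((pred - y) ** 2))

lr, patience = 0.05, 20
best_va, wait, best = np.inf, 0, None
for epoch in range(2000):
    # One batch-gradient step on the *unpenalized* training error.
    Z, out = forward(X_tr)
    err = out - y_tr
    gW2 = Z.T @ err / len(X_tr); gb2 = err.mean(0)
    dZ = (err @ W2.T) * (1 - Z ** 2)          # backprop through tanh
    gW1 = X_tr.T @ dZ / len(X_tr); gb1 = dZ.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

    # Monitor validation error; keep the best weights seen so far.
    va = mse(forward(X_va)[1], y_va)
    if va < best_va - 1e-6:
        best_va, wait = va, 0
        best = (W1.copy(), b1.copy(), W2.copy(), b2.copy())
    else:
        wait += 1
        if wait >= patience:       # validation error stopped improving
            break

W1, b1, W2, b2 = best              # restore the best-validation weights
print(round(best_va, 4))
```

The stopping rule, not a penalty on the weights, is what limits the
effective complexity here, which is why such a net does not fit neatly
into the penalized-likelihood definition quoted above.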


-- 

Warren S. Sarle       SAS Institute Inc.   The opinions expressed here
saswss@unx.sas.com    SAS Campus Drive     are mine and not necessarily
(919) 677-8000        Cary, NC 27513, USA  those of SAS Institute.
