Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!bb3.andrew.cmu.edu!newsfeed.pitt.edu!godot.cc.duq.edu!newsgate.duke.edu!news.mathworks.com!newsfeed.internetmci.com!in2.uu.net!news.interpath.net!sas!newshost.unx.sas.com!saswss
From: saswss@hotellng.unx.sas.com (Warren Sarle)
Subject: FAQ reorganization
Originator: saswss@hotellng.unx.sas.com
Sender: news@unx.sas.com (Noter of Newsworthy Events)
Message-ID: <Ds5n5H.G7I@unx.sas.com>
Date: Wed, 29 May 1996 06:53:41 GMT
X-Nntp-Posting-Host: hotellng.unx.sas.com
Organization: SAS Institute Inc.
Lines: 32


I have reorganized the FAQ slightly to keep each part under 100K.

Part 2, "Learning", has been split into two parts, "Learning" and
"Generalization", the latter part now being Part 3.

The old Part 3, "Information Resources", has been combined with
the old Part 4, "Datasets"; these two combined are now Part 4,
"Books, data, etc.".

These changes are not yet reflected in the subject lines of the
postings for bureaucratic reasons.

There are some new answers:
   Part 2: "How do MLPs compare with RBFs?"
   Part 3: "How is generalization possible?"
           "How does noise affect generalization?"
           "How many hidden layers should I use?"
None of these answers is the final word by any means, especially
"How is generalization possible?" Perhaps someone who knows more
than I do about PAC theory and related topics from the machine
learning literature could contribute. But we get so many questions
from beginners who think that generalization is some sort of magic
that I thought something needed to be said on the subject, however
incomplete.


-- 

Warren S. Sarle       SAS Institute Inc.   The opinions expressed here
saswss@unx.sas.com    SAS Campus Drive     are mine and not necessarily
(919) 677-8000        Cary, NC 27513, USA  those of SAS Institute.
