Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!bb3.andrew.cmu.edu!newsfeed.pitt.edu!gatech!arclight.uoregon.edu!news.sprintlink.net!news-peer.sprintlink.net!interpath!news.interpath.net!news.interpath.net!sas!newshost.unx.sas.com!saswss
From: saswss@hotellng.unx.sas.com (Warren Sarle)
Subject: Re: Help Please!
Originator: saswss@hotellng.unx.sas.com
Sender: news@unx.sas.com (Noter of Newsworthy Events)
Message-ID: <E4uFGC.3KH@unx.sas.com>
Date: Thu, 30 Jan 1997 22:50:36 GMT
X-Nntp-Posting-Host: hotellng.unx.sas.com
References: <5ci2pu$mgi$1@nargun.cc.uq.oz.au> <5civ61$2m0ka_002@news.intergate.net> <5cjav3$rj1$1@mark.ucdavis.edu>
Organization: SAS Institute Inc.
Lines: 26


In article <5cjav3$rj1$1@mark.ucdavis.edu>, geiger@cs.ucdavis.edu (Phillip George Geiger) writes:
|> ...
|> In _Practical_Neural_Network_Recipes_In_C++_ (T. Masters), the author 
|> says "There is a dangerous common misconception concerning iterative
|> training.  It is that neural networks can be overtrained."
|> 
|> He goes on to say that stopping training when the error bottoms out is
|> "treating the symptom, not the disease" and suggests that you reduce
|> the number of hidden neurons to avoid overfitting, and/or use a larger, 
|> more varied training set.

Subsequent research has shown that Masters was excessively critical of
early stopping (as was I). Early stopping based on a validation set is
an effective way to avoid overfitting, although there are other, and
probably better, approaches (e.g. Bayesian regularization). See the
Neural Network FAQ, part 3 of 7: Generalization, at
ftp://ftp.sas.com/pub/neural/FAQ3.html
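For anyone unfamiliar with the technique, early stopping amounts to
monitoring error on a held-out validation set during iterative training
and keeping the weights from the epoch with the lowest validation error.
Here is a toy sketch in Python (not from the FAQ; the data, learning
rate, and `patience` threshold are all made-up illustrations):

```python
# Toy illustration of early stopping on a one-weight linear model.
# The model is too simple to overfit badly; the point is the stopping
# rule itself: quit when validation error stops improving.
import random

random.seed(0)

# Synthetic data: y = 2x + noise, split into training and validation sets.
data = [(x / 10.0, 2 * (x / 10.0) + random.gauss(0, 0.1)) for x in range(40)]
random.shuffle(data)
train, valid = data[:30], data[30:]

def mse(w, pairs):
    """Mean squared error of the model y = w*x on a list of (x, y) pairs."""
    return sum((w * x - y) ** 2 for x, y in pairs) / len(pairs)

w = 0.0          # single trainable weight
lr = 0.05        # learning rate for stochastic gradient descent
patience = 5     # epochs to wait for a validation improvement before quitting

best_w, best_err, wait = w, mse(w, valid), 0
for epoch in range(1000):
    for x, y in train:                  # one SGD pass over the training set
        w -= lr * 2 * (w * x - y) * x   # gradient of (w*x - y)^2 w.r.t. w
    err = mse(w, valid)
    if err < best_err:                  # validation error improved: remember w
        best_w, best_err, wait = w, err, 0
    else:                               # no improvement this epoch
        wait += 1
        if wait >= patience:            # ran out of patience: stop training
            break

w = best_w                              # restore the best weights seen
```

The final `w` should land near the true slope of 2. Masters' point about
treating "the symptom, not the disease" still applies: the stopping rule
does nothing to control model complexity, it just halts before the fit
to training noise gets worse.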


-- 

Warren S. Sarle       SAS Institute Inc.   The opinions expressed here
saswss@unx.sas.com    SAS Campus Drive     are mine and not necessarily
(919) 677-8000        Cary, NC 27513, USA  those of SAS Institute.
 *** Do not send me unsolicited commercial or political email! ***

