Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!swrinde!hookup!news.mathworks.com!news.duke.edu!concert!sas!mozart.unx.sas.com!saswss
From: saswss@hotellng.unx.sas.com (Warren Sarle)
Subject: Re: He who knows what he does not know is wise
Originator: saswss@hotellng.unx.sas.com
Sender: news@unx.sas.com (Noter of Newsworthy Events)
Message-ID: <Cz67F8.IoM@unx.sas.com>
Date: Sat, 12 Nov 1994 19:57:08 GMT
Distribution: usa
References: <parkCyxFB0.5Ko@netcom.com> <Cyz0MF.Jx3@unx.sas.com> <x05XzZ-.predictor@delphi.com> <Cz0yJq.GFJ@unx.sas.com> <GJOHN.94Nov10192217@elaine43.Stanford.EDU> <Cz48IG.JIJ@unx.sas.com> <GJOHN.94Nov11155633@elaine36.Stanford.EDU>
Nntp-Posting-Host: hotellng.unx.sas.com
Organization: SAS Institute Inc.
Lines: 50


In article <GJOHN.94Nov11155633@elaine36.Stanford.EDU>, gjohn@elaine36.Stanford.EDU (George John) writes:
|>
|> I thought I was doing a reasonable job of explaining Andreas Weigend's
|> work:
|>
|>    |> APPROACH 1) Learn many neural nets...
|>    |> The many nets are trained using early stopping, and each net uses a
|>    |> different holdout set so that the nets do tend to learn different
|>    |> functions.
|>
|> But Warren Sarle's remarks give evidence to the contrary.  Warren
|> criticized my description of their algorithm
|>
|> >   This approach is flat-out wrong and is a prime example of the mistakes
|> >   people are likely to make when they are ignorant of statistics. The
|>
|> and ended by recommending bootstrapping instead.
|>
|> The title of that paper was actually "Evaluating Neural Network
|> Predictors by Bootstrapping", so maybe Warren would be happy with the
|> paper after all.

Perhaps I would. I did not interpret the phrase "different holdout sets"
to mean bootstrapping, although looking back at it now I can see that's
what was intended.
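If I have read it right, the scheme amounts to something like the
following sketch: each net trains on a bootstrap resample of the data,
with the out-of-resample ("out-of-bag") points serving as that net's
holdout set for early stopping. The names and details here are mine,
for illustration, not Weigend's:

```python
import random

def bootstrap_splits(n_points, n_nets, seed=0):
    """Yield (train_indices, holdout_indices) pairs, one per net.

    Each training set is a bootstrap resample (drawn with
    replacement); the points left out of the resample form that
    net's holdout set for early stopping.
    """
    rng = random.Random(seed)
    for _ in range(n_nets):
        train = [rng.randrange(n_points) for _ in range(n_points)]
        holdout = sorted(set(range(n_points)) - set(train))
        yield train, holdout

# Five nets, each with its own resample and out-of-bag holdout set.
splits = list(bootstrap_splits(n_points=20, n_nets=5))
```

Since each resample omits roughly a third of the points on average,
every net gets a nonempty holdout set that differs from the others,
which is what makes the member nets learn different functions.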

|> I am under the impression that the jury is
|> still out on the estimation of variance by bootstrap,
|> cross-validation, or the repeated learn-and-test methods in nonlinear
|> models.

Bootstrapping is clearly better than cross-validation, although there
are variants of cross-validation that do reasonably well by, in
effect, approximating bootstrapping. I don't know what "repeated
learn-and-test methods" are, although I may just be suffering from
another attack of temporary stupidity. :-)

|> Sorry if this has gone on too long, but I wanted to clear Andreas'
|> name!

An interesting psychological point: having read several of Weigend's
papers that were statistically naive, I didn't expect him to be doing
something as sensible as bootstrapping!

-- 

Warren S. Sarle       SAS Institute Inc.   The opinions expressed here
saswss@unx.sas.com    SAS Campus Drive     are mine and not necessarily
(919) 677-8000        Cary, NC 27513, USA  those of SAS Institute.
