Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!das-news.harvard.edu!news2.near.net!MathWorks.Com!news.duke.edu!concert!sas!mozart.unx.sas.com!saswss
From: saswss@hotellng.unx.sas.com (Warren Sarle)
Subject: Re: Stopped Training
Originator: saswss@hotellng.unx.sas.com
Sender: news@unx.sas.com (Noter of Newsworthy Events)
Message-ID: <Cw9958.C0A@unx.sas.com>
Date: Sat, 17 Sep 1994 03:46:20 GMT
References: <Cw6xJA.1Gr@unx.sas.com> <1994Sep16.131944.2099@fct.unl.pt>
Nntp-Posting-Host: hotellng.unx.sas.com
Organization: SAS Institute Inc.
Lines: 45


In article <1994Sep16.131944.2099@fct.unl.pt>, tr@fct.unl.pt (Thomas Rauber) writes:
|> Warren Sarle (saswss@hotellng.unx.sas.com) wrote:
|> :  * Cross-validation (leave one out)--slow and erratic
|>                                                 ^^^^^^
|> Would you please specify in more detail this attribute of the leave-one-out
|> estimate. ...
|>
|> From
|>   Devijver, P. A., and Kittler, J., "Pattern Recognition --- A Statistical
|>   Approach," Prentice/Hall Int., London, 1982. page 356:
|>
|> .. The leave-one-out error estimate has been found experimentally to be
|> approximately unbiased, whatever be the classifier and the underlying
|> distributions. ...

It is approximately unbiased in many cases, although there are some
cases where it fails because the estimator is not a smooth enough
function of the data and deleting one or a few cases is not a large
enough perturbation.
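To make the procedure concrete, here is a minimal sketch (my own toy
example, not from any of the references) of the leave-one-out error
estimate for a 1-nearest-neighbour classifier: each case is held out
in turn, the classifier is "trained" on the rest, and the errors are
counted. The data set and classifier are made up for illustration.

```python
# Toy illustration of leave-one-out error estimation.
# Data and classifier are hypothetical, chosen only for the sketch.

def nn_predict(train, x):
    """Classify x by the label of its nearest training point."""
    nearest = min(train, key=lambda p: (p[0] - x) ** 2)
    return nearest[1]

def loo_error(data):
    """Leave-one-out misclassification rate: hold out each case in
    turn, classify it from the remaining cases, count the errors."""
    errors = 0
    for i, (x, y) in enumerate(data):
        rest = data[:i] + data[i + 1:]
        if nn_predict(rest, x) != y:
            errors += 1
    return errors / len(data)

# Two overlapping classes on the real line: (feature, class label).
data = [(0.1, 0), (0.4, 0), (0.5, 0), (0.9, 1), (1.1, 1), (0.45, 1)]
print(loo_error(data))
```

Note how the deletion of a single case can flip individual 0/1
decisions, which is exactly the discontinuity discussed below.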

In particular, the misclassification rate is not a continuous
function of the data, so continuous error-rate estimators, such as
posterior probability estimators, work better in combination with
leave-one-out.
See the discussion under the DISCRIM procedure in the _SAS/STAT User's
Guide_ on posterior probability error rate estimators.
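The contrast can be sketched as follows (a hypothetical example of
mine, not the DISCRIM implementation): the 0/1 count jumps between 0
and 1 as a case crosses the decision boundary, while the
posterior-probability estimator charges each case its estimated
probability of error, which varies smoothly. The logistic posterior
model and its parameters here are assumptions for illustration.

```python
# Hypothetical contrast: 0/1 misclassification count versus a smooth
# posterior-probability error-rate estimate on the same cases.
import math

def posterior(x, w=6.0, b=-4.2):
    """Assumed logistic model for P(class 1 | x); made up for the sketch."""
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

def zero_one_error(x, y):
    """Discontinuous: jumps between 0 and 1 at the decision boundary."""
    predicted = 1 if posterior(x) > 0.5 else 0
    return 0 if predicted == y else 1

def posterior_error(x, y):
    """Continuous: estimated probability that the classifier errs on x."""
    p1 = posterior(x)
    return 1.0 - p1 if y == 1 else p1

data = [(0.1, 0), (0.4, 0), (0.9, 1), (1.1, 1)]
hard = sum(zero_one_error(x, y) for x, y in data) / len(data)
smooth = sum(posterior_error(x, y) for x, y in data) / len(data)
print(hard, smooth)
```

Because the smooth estimate changes only slightly when one case is
deleted, it combines much better with leave-one-out.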

|> .. In counterpart the leave-one-out method suffers from at least two
|> disadvantages ... increase in the variance of the estimator ...excessive
|> computation. ...

The leave-one-out estimate of the error rate is extremely variable,
sometimes so much so as to be unusable. See:

   Efron, B. (1982) _The Jackknife, the Bootstrap and Other Resampling
   Plans_, SIAM: Philadelphia.

   Efron, B. and Tibshirani, R.J. (1993), _An Introduction to the
   Bootstrap_, Chapman & Hall: New York.
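The variability is easy to see by simulation. Here is a small sketch
of mine (not from Efron): draw many samples of 20 cases from the same
pair of overlapping classes and compute the leave-one-out 1-NN error
estimate on each. The spread of the estimates across samples from a
fixed distribution is the variance being complained about.

```python
# Hypothetical simulation of the variability of the leave-one-out
# error estimate across repeated small samples from one distribution.
import random

def loo_1nn_error(data):
    """Leave-one-out misclassification rate of a 1-NN classifier."""
    errors = 0
    for i, (x, y) in enumerate(data):
        rest = data[:i] + data[i + 1:]
        nearest = min(rest, key=lambda p: (p[0] - x) ** 2)
        errors += nearest[1] != y
    return errors / len(data)

random.seed(0)
estimates = []
for _ in range(200):
    # Two Gaussian classes with means 0 and 1, common s.d. 1.
    sample = ([(random.gauss(0, 1), 0) for _ in range(10)] +
              [(random.gauss(1, 1), 1) for _ in range(10)])
    estimates.append(loo_1nn_error(sample))

mean = sum(estimates) / len(estimates)
spread = max(estimates) - min(estimates)
print(mean, spread)
```

With only 20 cases the estimates scatter widely around their mean,
even though every sample comes from the identical distribution.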

-- 

Warren S. Sarle       SAS Institute Inc.   The opinions expressed here
saswss@unx.sas.com    SAS Campus Drive     are mine and not necessarily
(919) 677-8000        Cary, NC 27513, USA  those of SAS Institute.
