Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!bb3.andrew.cmu.edu!newsfeed.pitt.edu!news.duq.edu!newsgate.duke.edu!interpath!news.interpath.net!sas!newshost.unx.sas.com!saswss
From: saswss@hotellng.unx.sas.com (Warren Sarle)
Subject: Re: Bootstrapping vs. Bayesian
Originator: saswss@hotellng.unx.sas.com
Sender: news@unx.sas.com (Noter of Newsworthy Events)
Message-ID: <Dx4JKG.D1A@unx.sas.com>
Date: Mon, 2 Sep 1996 21:26:40 GMT
X-Nntp-Posting-Host: hotellng.unx.sas.com
References: <Dwt19F.2KM@fsa.bris.ac.uk> <3228C4D8.167E@smi.stanford.edu>
Organization: SAS Institute Inc.
Lines: 33


M.J.Ratcliffe@bris.ac.uk wrote:
|> 
|> A colleague has recently started using a bootstrapping algorithm to
|> generate error bars on his network outputs. This seems a relatively
|> straightforward technique, particularly compared to Bayesian methods,
|> which are pretty (nay, very!) tricky.

Bootstrapping is not straightforward, especially for confidence and
prediction intervals (error bars), even in linear regression. The
issues become still more complicated in nonlinear models such as many
NNs.
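
To see why, here is a minimal sketch (Python, purely illustrative and
not from this thread) of the naive case-resampling percentile
bootstrap; the toy data, the scikit-learn MLPRegressor standing in for
"the network", and B = 200 are all assumptions of mine:

   import numpy as np
   from sklearn.neural_network import MLPRegressor

   rng = np.random.default_rng(0)
   X = rng.uniform(-1, 1, size=(100, 1))            # toy inputs
   y = np.sin(3 * X[:, 0]) + rng.normal(scale=0.1, size=100)

   X_grid = np.linspace(-1, 1, 50).reshape(-1, 1)   # where to put bars
   B = 200                                          # bootstrap resamples
   preds = np.empty((B, len(X_grid)))

   for b in range(B):
       idx = rng.integers(0, len(X), size=len(X))   # resample cases
       net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000)
       net.fit(X[idx], y[idx])                      # retrain on resample
       preds[b] = net.predict(X_grid)

   # Naive percentile "error bars" from the retrained networks
   lo, hi = np.percentile(preds, [2.5, 97.5], axis=0)

Note that these percentile bands reflect only resampling and
retraining variability; they are not prediction intervals for a future
observation, which is part of why "error bars" are harder than they
look.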

|> I know that the Bayesian approach enables a more formal structure to
|> network training, and the setting of decay parameters, and what-not, but
|> (and here's the question at last...) is there any particular
|> difference in the validity of the error bars?

Putting aside extreme philosophical positions, if you could settle on a
definition of "validity", it would take a very expensive simulation
study to address that question--a good topic for several graduate-level
theses.

But there's also Bayesian bootstrapping:

   Newton, M.A. and Raftery, A.E. (1994), "Approximate Bayesian
   inference with the weighted likelihood bootstrap" with discussions,
   Journal of the Royal Statistical Society, Series B, 56, 3-48.
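
For intuition, a minimal sketch (Python, illustrative only, not the
paper's example) of the weighted likelihood bootstrap in its simplest
case, a normal mean with known variance, where the weighted MLE is
just a weighted average:

   import numpy as np

   rng = np.random.default_rng(0)
   x = rng.normal(loc=2.0, scale=1.0, size=50)   # toy data (assumed)
   n = len(x)

   B = 1000
   draws = np.empty(B)
   for b in range(B):
       w = rng.dirichlet(np.ones(n))   # uniform Dirichlet weights
       draws[b] = np.sum(w * x)        # weighted MLE of the mean

   # draws now approximates a posterior sample for the mean under a
   # flat prior; compare x.mean() and x.std() / np.sqrt(n).

Each Dirichlet draw reweights the likelihood; maximizing the weighted
likelihood once per draw gives one approximate posterior draw, with no
MCMC required.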

-- 

Warren S. Sarle       SAS Institute Inc.   The opinions expressed here
saswss@unx.sas.com    SAS Campus Drive     are mine and not necessarily
(919) 677-8000        Cary, NC 27513, USA  those of SAS Institute.
