Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!bb3.andrew.cmu.edu!newsfeed.pitt.edu!gatech!news.mathworks.com!usenet.eel.ufl.edu!warwick!bris.ac.uk!usenet
From: M.J.Ratcliffe@bris.ac.uk
Subject: Bootstrapping vs. Bayesian
X-Nntp-Posting-Host: pc53.aer.bris.ac.uk
Content-Type: text/plain; charset=us-ascii
Message-ID: <Dwt19F.2KM@fsa.bris.ac.uk>
Sender: usenet@fsa.bris.ac.uk (Usenet)
Content-Transfer-Encoding: 7bit
Organization: University of Bristol, UK
Mime-Version: 1.0
Date: Tue, 27 Aug 1996 16:17:38 GMT
X-Mailer: Mozilla 1.1N (Windows; I; 16bit)
Lines: 17

I suspect this will seem a somewhat moronic question to the statisticians 
among readers, but here it is:

A colleague has recently started using a bootstrapping algorithm to 
generate error bars on his network outputs. This seems a relatively 
straightforward technique, particularly compared to Bayesian methods, 
which are pretty (nay, very!) tricky. 
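
In case it helps pin down what I mean, here is a minimal sketch of the 
sort of thing he is doing (I don't have his code, so a polynomial 
least-squares fit stands in for the network, and the data are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a noisy sine, standing in for the real training set.
X = np.linspace(0.0, 1.0, 50)
y = np.sin(2 * np.pi * X) + rng.normal(scale=0.2, size=X.shape)

def fit_predict(x_train, y_train, x_test, degree=5):
    """Stand-in 'network': a degree-5 polynomial least-squares fit."""
    coeffs = np.polyfit(x_train, y_train, degree)
    return np.polyval(coeffs, x_test)

x_test = np.linspace(0.0, 1.0, 11)
n_boot = 200
preds = np.empty((n_boot, x_test.size))

for b in range(n_boot):
    # Resample the training set with replacement...
    idx = rng.integers(0, X.size, size=X.size)
    # ...retrain on the resample, and predict at the test points.
    preds[b] = fit_predict(X[idx], y[idx], x_test)

# Error bars: spread of the predictions across bootstrap replicates.
mean = preds.mean(axis=0)
stderr = preds.std(axis=0, ddof=1)
for xt, m, s in zip(x_test, mean, stderr):
    print(f"x={xt:.2f}  pred={m:+.3f} +/- {s:.3f}")
```

The standard deviation over replicates is then quoted as the error bar 
on each output.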

I know that the Bayesian approach provides a more formal framework for 
network training, for setting weight-decay parameters, and what-not, but 
(and here's the question at last...) is there any real difference in the 
validity of the error bars the two methods produce?

Thanks, 

Max

