Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!news.mathworks.com!gatech!newsfeed.pitt.edu!dsinc!netnews.upenn.edu!msunews!harbinger.cc.monash.edu.au!news.cs.su.oz.au!metro!news
From: Alison Lennon <A.Lennon@biochem.usyd.edu.au>
Subject: Re: Multiple networks?
Content-Type: text/plain; charset=us-ascii
Message-ID: <D8yC67.HxD@ucc.su.OZ.AU>
Sender: news@ucc.su.OZ.AU
Nntp-Posting-Host: lennon.biochem.usyd.edu.au
Content-Transfer-Encoding: 7bit
Organization: Department of Biochemistry, University of Sydney
References: <komodoD8H7ou.236@netcom.com> <D8r8Ko.JAB@ucc.su.OZ.AU> <D8sFyB.3Go@unx.sas.com>
Mime-Version: 1.0
Date: Sun, 21 May 1995 23:34:55 GMT
X-Mailer: Mozilla 1.1N (Windows; I; 32bit)
Lines: 29

saswss@hotellng.unx.sas.com (Warren Sarle) wrote:
>
>In article <D8r8Ko.JAB@ucc.su.OZ.AU>, Alison Lennon <A.Lennon@biochem.usyd.edu.au> writes:
>|> ...
>|> Yes, this has been done. Basically, you can train an ensemble
>|> of networks using a different subsample of training (and
>|> validation) data for each network and then, if you're looking
>|> at continuous-valued outputs, you can take either the mean or
>|> median (if appropriate) of the ensemble outputs as your
>|> estimate. The following reference might be useful:
>|>
>|> Perrone, M.P. & Cooper, L.N. (1993) When networks disagree:
>|> Ensemble methods for hybrid neural networks. In: Neural
>|> Networks for Speech and Image Processing (Mammone, R.J., ed.)
>|> Chapman & Hall.
>
>Using an unweighted mean or median is not a good idea. It is easy
>to construct cases with local optima where you get a few networks
>that predict well and lots of networks that predict badly. You
>need to use a weighted mean that takes into account that some of
>the networks are better than others.
>
Yes, that does sound sensible, but how would you weight the
mean? You can at least eliminate the worst-performing networks
by excluding from the ensemble any network whose validation
error exceeds some chosen threshold.
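For concreteness, here is one simple heuristic along those lines (just a
sketch of my own, not something from the Perrone & Cooper paper): drop
networks whose validation error is above a cutoff, then weight the
survivors by the inverse of their validation error, so better networks
count for more. The function name and cutoff value are illustrative.

```python
def weighted_ensemble_mean(predictions, val_errors, cutoff=float("inf")):
    """Combine ensemble predictions with inverse-validation-error weights.

    predictions : one continuous-valued prediction per network
    val_errors  : validation error per network (must be > 0)
    cutoff      : networks with validation error above this are excluded
    """
    # Keep only the networks that pass the validation-error cutoff.
    kept = [(p, e) for p, e in zip(predictions, val_errors) if e <= cutoff]
    if not kept:
        raise ValueError("no networks below the validation-error cutoff")
    # Weight each surviving network by 1 / validation error.
    weights = [1.0 / e for _, e in kept]
    total = sum(weights)
    return sum(w * p for (p, _), w in zip(kept, weights)) / total

# Two good networks (val. error 0.1) and one poor one (val. error 2.0):
preds = [1.0, 1.1, 5.0]
errs = [0.1, 0.1, 2.0]
print(weighted_ensemble_mean(preds, errs))              # poor net down-weighted
print(weighted_ensemble_mean(preds, errs, cutoff=1.0))  # poor net excluded
```

An unweighted mean of those three outputs would be about 2.37, badly
pulled off by the one poor network; the inverse-error weighting gives
roughly 1.15, and excluding the poor network gives 1.05.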



