Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!rochester!udel!gatech!news.sprintlink.net!redstone.interpath.net!sas!mozart.unx.sas.com!saswss
From: saswss@hotellng.unx.sas.com (Warren Sarle)
Subject: Re: Multiple networks?
Originator: saswss@hotellng.unx.sas.com
Sender: news@unx.sas.com (Noter of Newsworthy Events)
Message-ID: <D8sFyB.3Go@unx.sas.com>
Date: Thu, 18 May 1995 19:10:59 GMT
X-Nntp-Posting-Host: hotellng.unx.sas.com
References: <komodoD8H7ou.236@netcom.com> <D8r8Ko.JAB@ucc.su.OZ.AU>
Organization: SAS Institute Inc.
Lines: 26


In article <D8r8Ko.JAB@ucc.su.OZ.AU>, Alison Lennon <A.Lennon@biochem.usyd.edu.au> writes:
|> ...
|> Yes, this has been done. Basically, you can train an ensemble
|> of networks using a different subsample of training (and
|> validation) data for each network and then, if you're looking
|> at continuous-valued outputs, you can take either the mean or
|> median (if appropriate) of the ensemble outputs as your
|> estimate. The following reference might be useful:
|>
|> Perrone, M.P. & Cooper, L.N. (1993) When networks disagree:
|> Ensemble methods for hybrid neural networks. In: Neural
|> Networks for Speech and Image Processing (Mammone, R.J., ed.)
|> Chapman-Hall.

Using an unweighted mean or median is not a good idea. It is easy
to construct cases with local optima where you get a few networks
that predict well and lots of networks that predict badly. You
need to use a weighted mean that takes into account that some of
the networks are better than others.
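As an illustration (not from the post itself), one simple weighting
scheme is to make each network's weight inversely proportional to its
validation error, so a member stuck in a bad local optimum contributes
little to the combined estimate. The function name and the
inverse-error weighting here are my own sketch, not Perrone & Cooper's
exact method:

```python
import numpy as np

def weighted_ensemble_mean(predictions, val_errors):
    """Combine ensemble members' predictions, weighting each network
    inversely by its validation error (a hypothetical scheme)."""
    predictions = np.asarray(predictions, dtype=float)  # (n_nets, n_points)
    val_errors = np.asarray(val_errors, dtype=float)    # (n_nets,)
    weights = 1.0 / val_errors       # better networks get larger weights
    weights /= weights.sum()         # normalize so weights sum to 1
    return weights @ predictions     # weighted mean over networks

# Three "networks": two accurate, one stuck in a bad local optimum.
preds = [[1.0, 2.0], [1.1, 2.1], [5.0, 9.0]]
errs  = [0.1, 0.1, 4.0]
print(weighted_ensemble_mean(preds, errs))  # close to [1.05, 2.05]
```

The unweighted mean of the same three networks would be about
[2.37, 4.37], dragged far off by the one bad member; the weighted
mean stays near the two good networks.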

-- 

Warren S. Sarle       SAS Institute Inc.   The opinions expressed here
saswss@unx.sas.com    SAS Campus Drive     are mine and not necessarily
(919) 677-8000        Cary, NC 27513, USA  those of SAS Institute.
