Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!fas-news.harvard.edu!newspump.wustl.edu!news.ecn.bgu.edu!vixen.cso.uiuc.edu!uwm.edu!spool.mu.edu!bloom-beacon.mit.edu!news.kei.com!simtel!harbinger.cc.monash.edu.au!news.cs.su.oz.au!metro!news
From: Alison Lennon <A.Lennon@biochem.usyd.edu.au>
Subject: Re: Multiple networks?
Content-Type: text/plain; charset=us-ascii
Message-ID: <D8r8Ko.JAB@ucc.su.OZ.AU>
Sender: news@ucc.su.OZ.AU
Nntp-Posting-Host: lennon.biochem.usyd.edu.au
Content-Transfer-Encoding: 7bit
Organization: Department of Biochemistry, University of Sydney
References: <komodoD8H7ou.236@netcom.com>
Mime-Version: 1.0
Date: Thu, 18 May 1995 03:33:58 GMT
X-Mailer: Mozilla 1.1N (Windows; I; 32bit)
Lines: 29

komodo@netcom.com (Tom Johnson) wrote:
>I need to get a very precise output from a network with 250 inputs. I 
>will probably be training on a data set of about 1000 (is there an 
>optimum size related to number of inputs?) samples. It would seem to me 
>that if I trained 2 or more networks with identical topography but 
>different starting weights and used a different set(s) of training data 
>with perhaps some data in common; then I could get a more precise result 
>by averaging the outputs or running the outputs through a very small 
>network to get a final result, as one network may be closer to parity than 
>another.
>
Yes, this has been done. Basically, you can train an ensemble 
of networks using a different subsample of training (and 
validation) data for each network and then, if you're looking 
at continuous-valued outputs, you can take either the mean or
median (if appropriate) of the ensemble outputs as your 
estimate. The following reference might be useful:

Perrone, M.P. & Cooper, L.N. (1993) When networks disagree: 
Ensemble methods for hybrid neural networks. In: Neural 
Networks for Speech and Image Processing (Mammone, R.J., ed.) 
Chapman & Hall.
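A minimal sketch of the idea, in case it helps: each ensemble 
member is trained on its own bootstrap subsample of the data, 
and the final estimate is the mean (or median) of the member 
outputs. For brevity the "network" below is just a linear 
least-squares fit with NumPy, and the data are synthetic -- 
the sizes and noise level are assumptions, not from the 
original post.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (hypothetical stand-in for the real problem)
X = rng.normal(size=(1000, 5))
true_w = rng.normal(size=5)
y = X @ true_w + 0.5 * rng.normal(size=1000)

def train_member(Xs, ys):
    # One "network": here just a linear least-squares fit for brevity
    w, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
    return w

# Train an ensemble, each member on a different bootstrap subsample
n_members = 10
weights = []
for _ in range(n_members):
    idx = rng.integers(0, len(X), size=len(X))  # sample with replacement
    weights.append(train_member(X[idx], y[idx]))

# Combine the member outputs: mean, or median if more appropriate
preds = np.stack([X @ w for w in weights])  # shape (n_members, n_samples)
ensemble_mean = preds.mean(axis=0)
ensemble_median = np.median(preds, axis=0)
```

The same averaging step works unchanged if you swap in real 
trained networks for the least-squares fits.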

Good luck!
