Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!news.mathworks.com!udel!gatech!howland.reston.ans.net!math.ohio-state.edu!darwin.sura.net!rmece02.upr.clu.edu!rmece01!shawn
From: shawn@uoregon.edu (Shawn Hunt)
Subject: Re: Confidence
Sender: news@rmece02.upr.clu.edu (NNTP)
Message-ID: <1995Jan24.210205.33122@rmece02.upr.clu.edu>
Date: Tue, 24 Jan 1995 21:02:05 GMT
References: <3fuolk$39o@sunb.ocs.mq.edu.au> <3g0800$kk7@ixnews3.ix.netcom.com> <D2w76w.G9o@crash.cts.com>
Nntp-Posting-Host: rmece01.upr.clu.edu
Organization: Univ of Puerto Rico, Mayaguez Campus
X-Newsreader: TIN [version 1.2 PL2]
Lines: 29

: > In <3fuolk$39o@sunb.ocs.mq.edu.au> jondarr@macadam.mpce.mq.edu.au 
: > (jondarr c g h 2 gibb) writes: 
: > 
: > >
: > >Hello All,
: > >   I was wondering if anyone had any references to papers where a
: > >"confidence" was placed on elements of a training set, such that 
: > >some patterns which `may be wrong' are not treated as being as
: > >important to train on. I would like to use this technique within
: > >a backprop algorithm, somehow, but I haven't thought it through 
: > >enough to place more restrictions (such as architecture and 
: > >heuristics governing the net).
: > >
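One direct way to do this -- just a hedged numpy sketch, not from any
published method, and all names and numbers here are made up -- is to carry
a per-pattern confidence c in [0, 1] and scale each pattern's backprop error
signal by it, so that suspect patterns contribute little to the weight update:

```python
import numpy as np

# Hypothetical sketch: a one-hidden-layer net trained by backprop where each
# training pattern carries a confidence weight c in [0, 1].  Patterns that
# "may be wrong" get a small c and barely influence the weight updates.

rng = np.random.default_rng(0)

# Toy data: 4 patterns, 2 inputs, 1 target; the last pattern is suspect.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
t = np.array([[0.], [1.], [1.], [0.]])
c = np.array([[1.], [1.], [1.], [0.1]])   # per-pattern confidence

W1 = rng.normal(scale=0.5, size=(2, 4))   # input-to-hidden weights
W2 = rng.normal(scale=0.5, size=(4, 1))   # hidden-to-output weights
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    h = sigmoid(X @ W1)                   # hidden activations
    y = sigmoid(h @ W2)                   # network outputs
    # Gradient of the confidence-weighted squared error
    #   E = (1/2) * sum_i c_i * (t_i - y_i)^2
    # c simply scales each pattern's delta before it propagates back.
    delta2 = c * (y - t) * y * (1 - y)
    delta1 = (delta2 @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ delta2
    W1 -= lr * X.T @ delta1
```

Since the low-confidence pattern is nearly ignored, the net ends up fitting
the three trusted patterns closely while treating the fourth as expendable.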


: One method is to use a more robust error criterion.  Mean absolute
: error tends to ignore outliers more than mean squared error does.  We
: have implemented MAE in our Thinks product, along with mean 4th-power
: error to force inclusion of outliers.
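To see why the choice of criterion matters, compare how much of the total
gradient a single outlier claims under each one (a hedged sketch; the error
values below are made up for illustration):

```python
import numpy as np

# Hypothetical per-pattern errors e = target - output; the last is an outlier.
errors = np.array([0.1, -0.2, 0.15, 0.05, 3.0])

# Per-pattern gradient of each criterion w.r.t. the error:
#   MSE:  d/de e^2  = 2e       -- outlier's pull grows linearly in e
#   MAE:  d/de |e|  = sign(e)  -- every pattern pulls with equal strength
#   M4E:  d/de e^4  = 4e^3     -- outlier's pull grows cubically
g_mse = 2 * errors
g_mae = np.sign(errors)
g_m4e = 4 * errors ** 3

for name, g in [("MSE", g_mse), ("MAE", g_mae), ("M4E", g_m4e)]:
    share = abs(g[-1]) / np.abs(g).sum()
    print(f"{name}: outlier's share of total gradient = {share:.2f}")
```

Under MAE the outlier is just one vote among five, under MSE it dominates,
and under 4th-power error it swamps everything else -- which is exactly why
MAE down-weights suspect patterns and M4E forces their inclusion.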

Jack Deller and I wrote a paper, submitted to Neural Networks, on the subject
of 'confidence' in the elements of the training set. It is entitled
'Selective Training of Feedforward Artificial Neural Networks Using
Matrix Perturbation Theory'. We are awaiting notification of acceptance,
but if anyone would like a preprint copy, just send me an email at
shawn@rmece01.upr.clu.edu.

Shawn Hunt
 
