Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!news.sprintlink.net!crash!pzcc.bitenet!news
From: Duane DeSieno <duaned@cts.com>
Subject: Re: Confidence
Organization: /etc/organization
Date: Tue, 24 Jan 1995 04:30:32 GMT
Message-ID: <D2w76w.G9o@crash.cts.com>
References: <3fuolk$39o@sunb.ocs.mq.edu.au> <3g0800$kk7@ixnews3.ix.netcom.com>
Sender: news@crash.cts.com (news subsystem)
Nntp-Posting-Host: loci.cts.com
Lines: 27

> In <3fuolk$39o@sunb.ocs.mq.edu.au> jondarr@macadam.mpce.mq.edu.au 
> (jondarr c g h 2 gibb) writes: 
> 
> >
> >Hello All,
> >   I was wondering if anyone had any references to papers where a
> >"confidence" was placed on elements of a training set, such that 
> >some patterns which `may be wrong' are not treated as being as
> >important to train on. I would like to use this technique within
> >a backprop algorithm, somehow, but I haven't thought it through 
> >enough to place more restrictions (such as architecture and 
> >heuristics governing the net).
> >
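
One direct way to realize the weighting the poster asks about is to scale each pattern's contribution to the error gradient by a per-pattern confidence in [0, 1], so dubious patterns pull less on the weights. A minimal sketch for a single linear unit (my illustration, not from the thread; the function name and data layout are assumptions):

```python
# Sketch: gradient of a confidence-weighted sum-squared error,
#   E = sum_i c_i * (t_i - w.x_i)^2
# for a single linear unit. Each pattern carries its own
# confidence c_i; c_i = 0 removes the pattern entirely.

def weighted_sse_grad(w, patterns):
    """patterns: list of (inputs, target, confidence) triples."""
    grad = [0.0] * len(w)
    for x, t, c in patterns:
        y = sum(wj * xj for wj, xj in zip(w, x))   # unit's output
        e = t - y                                  # pattern error
        for j in range(len(w)):
            grad[j] += -2.0 * c * e * x[j]         # c scales this pattern's pull
    return grad
```

The same idea drops into backprop unchanged: multiply each pattern's output-layer delta by its confidence before propagating it back.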


One method is to use a more robust error criterion.  Mean absolute
error tends to ignore outliers more than mean squared error does.  We
have implemented MAE in our Thinks product, along with mean 4th-power
error to force inclusion of outliers.
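
The effect is easy to see in the gradient each pattern contributes (a sketch of my own, not code from the product): under MAE the gradient magnitude is constant, so a large outlier error pulls no harder than a small one; under MSE it grows linearly; under 4th-power error it grows cubically, so outliers dominate training.

```python
# For a single output error e = target - output:
#   MSE : loss = e**2   -> dloss/de = 2*e      (outlier pull grows linearly)
#   MAE : loss = |e|    -> dloss/de = sign(e)  (bounded; outliers down-weighted)
#   L4  : loss = e**4   -> dloss/de = 4*e**3   (outliers dominate)

def grad_mse(e):
    return 2.0 * e

def grad_mae(e):
    return 1.0 if e > 0 else (-1.0 if e < 0 else 0.0)

def grad_l4(e):
    return 4.0 * e ** 3

for e in (0.1, 1.0, 10.0):        # 10.0 plays the role of an outlier
    print(e, grad_mse(e), grad_mae(e), grad_l4(e))
```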

Duane DeSieno
Logical Designs
2015 Olite Ct.
La Jolla, CA 92037
(619)459-6236
duaned@cts.com

