Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!news.mathworks.com!udel!gatech!howland.reston.ans.net!news.sprintlink.net!redstone.interpath.net!sas!mozart.unx.sas.com!saswss
From: saswss@hotellng.unx.sas.com (Warren Sarle)
Subject: Re: Confidence
Originator: saswss@hotellng.unx.sas.com
Sender: news@unx.sas.com (Noter of Newsworthy Events)
Message-ID: <D2vnCD.5vr@unx.sas.com>
Date: Mon, 23 Jan 1995 21:21:49 GMT
References:  <3fuolk$39o@sunb.ocs.mq.edu.au>
Nntp-Posting-Host: hotellng.unx.sas.com
Organization: SAS Institute Inc.
Lines: 30


In article <3fuolk$39o@sunb.ocs.mq.edu.au>, jondarr@macadam.mpce.mq.edu.au (jondarr c g h 2 gibb) writes:
|>    I was wondering if anyone had any references to papers where a
|> "confidence" was placed on elements of a training set, such that
|> some patterns which `may be wrong' are not treated as being as
|> important to train on. I would like to use this technique within
|> a backprop algorithm, ...

If the target variable is quantitative, you can do weighted least
squares instead of ordinary least squares, i.e., weight the squared
error of each training case according to how much confidence you
have in the target value. To be more precise, the weight should be
the reciprocal of the noise variance for each case. Weighted least
squares is described in many textbooks on regression, such as

   Sanford Weisberg (1985), _Applied Linear Regression_, NY: Wiley

   Raymond H. Myers (1986), _Classical and Modern Regression with
      Applications_, Boston: Duxbury Press
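As a minimal sketch of the weighted-least-squares idea (the data,
noise variances, and linear model below are all made up for
illustration), each case's squared error is weighted by the
reciprocal of its noise variance:

```python
import numpy as np

# Hypothetical example: fit y = a*x + b by weighted least squares,
# where the second half of the data is known to be noisier and
# therefore gets less weight.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
noise_var = np.where(x < 5.0, 0.25, 4.0)   # per-case noise variance
y = 2.0 * x + 1.0 + rng.normal(0.0, np.sqrt(noise_var))

X = np.column_stack([x, np.ones_like(x)])  # design matrix with intercept
w = 1.0 / noise_var                        # weight = 1 / noise variance

# Solve the weighted normal equations (X' W X) beta = X' W y
XtW = X.T * w
beta = np.linalg.solve(XtW @ X, XtW @ y)
print(beta)  # slope and intercept, roughly [2, 1]
```

The same per-case weights can be applied to the squared-error terms
in a backprop loss function; the closed-form solution above is just
the linear special case.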

For classification problems where you have a target variable for
each class, instead of coding the targets as 0/1, code them as
probabilities of belonging to each class. I don't have a handy
reference, but it's a pretty obvious sort of thing.
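To illustrate the probability coding (the class counts, target
values, and network outputs below are invented for the example),
the usual cross-entropy error applies unchanged when the 0/1
targets are replaced by class-membership probabilities:

```python
import numpy as np

# Hypothetical 3-class problem with two training cases.
# Hard coding says case 0 is certainly class 1; the soft coding
# expresses only 80% confidence and spreads the rest around.
hard_targets = np.array([[0.0, 1.0, 0.0],
                         [1.0, 0.0, 0.0]])
soft_targets = np.array([[0.1, 0.8, 0.1],
                         [0.7, 0.2, 0.1]])

# Network outputs (e.g. from a softmax output layer).
outputs = np.array([[0.2, 0.6, 0.2],
                    [0.5, 0.3, 0.2]])

def cross_entropy(targets, outputs):
    """Mean cross-entropy between target probabilities and outputs."""
    return -np.mean(np.sum(targets * np.log(outputs), axis=1))

print(cross_entropy(hard_targets, outputs))
print(cross_entropy(soft_targets, outputs))
```

Cases whose soft targets are spread across classes then pull the
network toward those probabilities rather than toward a hard 0/1
answer.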

-- 

Warren S. Sarle       SAS Institute Inc.   The opinions expressed here
saswss@unx.sas.com    SAS Campus Drive     are mine and not necessarily
(919) 677-8000        Cary, NC 27513, USA  those of SAS Institute.
