Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!news.mathworks.com!news.duke.edu!concert!sas!mozart.unx.sas.com!saswss
From: saswss@hotellng.unx.sas.com (Warren Sarle)
Subject: Re: He who knows what he does not know is wise
Originator: saswss@hotellng.unx.sas.com
Sender: news@unx.sas.com (Noter of Newsworthy Events)
Message-ID: <Cz47Ay.G4E@unx.sas.com>
Date: Fri, 11 Nov 1994 17:59:21 GMT
References: <parkCyxFB0.5Ko@netcom.com> <Cyz0MF.Jx3@unx.sas.com> <TAP.94Nov10180312@eagle.epi.terryfox.ubc.ca>
Nntp-Posting-Host: hotellng.unx.sas.com
Organization: SAS Institute Inc.
Lines: 25


In article <TAP.94Nov10180312@eagle.epi.terryfox.ubc.ca>, tap@eagle.epi.terryfox.ubc.ca (Tony Plate) writes:
|>
|> In article <parkCyxFB0.5Ko@netcom.com>, park@netcom.com (Bill Park) writes:
|> |> What are some good ways to get a neural network to report that the inputs
|> |> you gave it are too different from its training set to permit it to
|> |> give you an accurate answer?
|> ...
|> One way of doing this is to use another neural network as an
|> auto-encoder, and then treat the goodness of reconstruction
|> of a new pattern as a measure of familiarity.  The idea is
|> that the auto-encoder will only be able to accurately
|> reconstruct patterns it is familiar with.

That idea, unfortunately, is wrong. Consider principal components
as the autoencoder: the reconstruction error measures only the
distance from a pattern to the subspace spanned by the components,
not the distance to the training data. Patterns will be accurately
reconstructed if they lie near that subspace, regardless of how
far they are from the training data.
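To make the point concrete, here is a small numpy sketch (the setup and
variable names are illustrative, not from the original discussion). A
one-component PCA "linear autoencoder" is fit to data clustered near the
origin; a test pattern 1000 units away along the principal axis is
reconstructed essentially perfectly, while a pattern much closer to the
training data, but off the axis, gets a larger reconstruction error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: 500 points near the origin, varying mostly along the x-axis.
X = rng.normal(size=(500, 2)) * np.array([3.0, 0.3])

# Fit a one-component "linear autoencoder" (PCA) via SVD of the centered data.
mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
pc = Vt[0]  # leading principal component (unit vector)

def reconstruction_error(x):
    """Project x onto the principal subspace and return the residual norm."""
    centered = x - mean
    recon = (centered @ pc) * pc
    return np.linalg.norm(centered - recon)

# Far from every training point, but lying on the principal subspace:
far_on_subspace = mean + 1000.0 * pc
# Close to the training data, but off the subspace:
near_off_subspace = mean + np.array([0.0, 2.0])

print(reconstruction_error(far_on_subspace))    # essentially 0: looks "familiar"
print(reconstruction_error(near_off_subspace))  # clearly nonzero, despite being nearer
```

So reconstruction error flags the nearby off-subspace point as novel while
passing the distant on-subspace point, which is exactly the failure mode
described above.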


-- 

Warren S. Sarle       SAS Institute Inc.   The opinions expressed here
saswss@unx.sas.com    SAS Campus Drive     are mine and not necessarily
(919) 677-8000        Cary, NC 27513, USA  those of SAS Institute.
