Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!gatech!udel!news.sprintlink.net!redstone.interpath.net!sas!mozart.unx.sas.com!saswss
From: saswss@hotellng.unx.sas.com (Warren Sarle)
Subject: Re: How does training with noise effect ... ?
Originator: saswss@hotellng.unx.sas.com
Sender: news@unx.sas.com (Noter of Newsworthy Events)
Message-ID: <D7t1xA.D71@unx.sas.com>
Date: Sat, 29 Apr 1995 16:31:58 GMT
X-Nntp-Posting-Host: hotellng.unx.sas.com
References: <3nllru$oqu@jaws.cs.hmc.edu> <3nnk5g$5rg@oslo.uni-paderborn.de> <3norg8$buq@jaws.cs.hmc.edu>
Organization: SAS Institute Inc.
Lines: 28


In article <3norg8$buq@jaws.cs.hmc.edu>, Ian Leicht <ian@cs.hmc.edu> writes:
|> ...
|> The way I envisioned
|> this is that the network is segmenting feature space into different classifications.
|> Since I am only training the network w/ 3 examples of each of the 6 patterns I
|> thought that training w/ noise would be able to aid the network in defining its
|> feature space.  I.e. otherwise maybe it was only "memorizing" the three data points
|> I taught it.
|> ...
|> Would it be correct to say then that if you have sufficient training data
|> (i.e. sufficient compared to the number of weights) training with noise will
|> degrade network performance, however if you do not have enough data, then
|> it will be useful perhaps in the way I described in my first paragraph?

The more training cases you have, the less noise you need. You can see
that from the rough estimate of the noise variance that I posted
yesterday, where n is the number of training cases and p is the number
of weights:

    s_1^2 = p (Y - XB(0))'(Y - XB(0)) / [ n (n-p) B(0)'B(0) ]
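
A small numerical sketch of the estimate above, with my own reading of
the symbols (not spelled out in this post): Y is the (n x 1) target
vector, X the (n x p) input matrix, and B(0) the least-squares weight
vector, so Y - XB(0) are the residuals. Note the n(n-p) factor in the
denominator: as n grows, the suggested noise variance shrinks, which is
the point about needing less noise with more training cases.

```python
import numpy as np

# Hypothetical setup: a small linear problem standing in for the
# network.  n training cases, p weights.
rng = np.random.default_rng(0)
n, p = 20, 3
X = rng.normal(size=(n, p))
B_true = np.array([1.0, -2.0, 0.5])
Y = X @ B_true + 0.1 * rng.normal(size=n)

# B(0) taken here as the least-squares solution (an assumption).
B0, *_ = np.linalg.lstsq(X, Y, rcond=None)
resid = Y - X @ B0          # Y - XB(0)

# s_1^2 = p (Y-XB(0))'(Y-XB(0)) / [ n (n-p) B(0)'B(0) ]
s1_sq = p * (resid @ resid) / (n * (n - p) * (B0 @ B0))
print(s1_sq)
```

Doubling n (with everything else comparable) roughly quarters the
estimate, since the denominator grows like n^2.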

-- 

Warren S. Sarle       SAS Institute Inc.   The opinions expressed here
saswss@unx.sas.com    SAS Campus Drive     are mine and not necessarily
(919) 677-8000        Cary, NC 27513, USA  those of SAS Institute.
