Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!rochester!cornellcs!newsstand.cit.cornell.edu!news.kei.com!world!mv!barney.gvi.net!redstone.interpath.net!sas!mozart.unx.sas.com!saswss
From: saswss@hotellng.unx.sas.com (Warren Sarle)
Subject: Re: Normalised or UnNormalised Inputs ??
Originator: saswss@hotellng.unx.sas.com
Sender: news@unx.sas.com (Noter of Newsworthy Events)
Message-ID: <DKooH6.33D@unx.sas.com>
Date: Fri, 5 Jan 1996 01:07:54 GMT
X-Nntp-Posting-Host: hotellng.unx.sas.com
References:  <4cas3d$gne@ntuix.ntu.ac.sg>
Organization: SAS Institute Inc.
Lines: 36


In article <4cas3d$gne@ntuix.ntu.ac.sg>, chiapl@bbs.sas.ntu.ac.sg (Chia Puay Long) writes:
|> I am posting this article on behalf of Karine as she is facing some problems
|> with her news reader.
|> ==========================================================================
|> ...
|> I test with one example the response of BP for a training set and for the
|> same training set but with normalised data. In this case:
|> the convergence is BETTER for the NON NORMALIZED set.
|> With NORMALIZED INPUTS, BP doesn't find any solution (BP is absolutely wrong).
|>
|> Maybe could someone in this news group tell me if he also observed this
|> phenomenon.

Depends on what you mean by "normalize". If you mean an operation that is
applied to each case, such as dividing all the inputs by the root-mean-square
of the inputs for that case, then you are throwing away information about
the magnitude of the inputs. In some applications, that information may be
of crucial importance, and normalizing cases that way can cause the net to
fail to learn anything useful.
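A minimal sketch of that information loss, in Python with NumPy (the data values are made up for illustration): two cases that differ only in overall magnitude become identical after per-case RMS normalization, so the net can no longer tell them apart.

```python
import numpy as np

# Two hypothetical input cases that differ only in magnitude.
x1 = np.array([1.0, 2.0, 3.0])
x2 = np.array([10.0, 20.0, 30.0])

def normalize_case(x):
    """Divide a single case by its own root-mean-square."""
    return x / np.sqrt(np.mean(x ** 2))

# After per-case normalization the two cases are indistinguishable:
print(np.allclose(normalize_case(x1), normalize_case(x2)))  # True
```

If the target depends on that magnitude, no amount of training can recover it from the normalized inputs.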

If you mean an operation that is applied to each input across cases, such
as dividing an input by the standard deviation of the input across all
training cases, then that should not cause any problem. After all, if you
divide an input by a constant, you can multiply all of its weights by
the same constant, and the outputs are not affected. Even nonlinear monotone
transformations should not cause serious problems, because the net can
invert the transformation if you have enough hidden units. If this type
of normalization causes problems, it's probably just a matter of unlucky
initial values.
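The weight-rescaling argument can be checked numerically. Here is a sketch with a made-up one-hidden-layer net (tanh hidden units, linear output; the weights and data are random, purely for illustration): dividing each input by its standard deviation across cases while multiplying the corresponding input-to-hidden weights by the same constants leaves the outputs unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training inputs on very different scales.
X = rng.normal(size=(5, 3)) * np.array([1.0, 100.0, 0.01])

# Toy net: one hidden layer of 4 tanh units, linear output.
W1 = rng.normal(size=(3, 4))   # input-to-hidden weights
b1 = rng.normal(size=4)
W2 = rng.normal(size=(4, 1))   # hidden-to-output weights
b2 = rng.normal(size=1)

def forward(X, W1):
    return np.tanh(X @ W1 + b1) @ W2 + b2

# Divide each input by its standard deviation across cases...
s = X.std(axis=0)
X_scaled = X / s
# ...and multiply that input's weights by the same constants:
W1_scaled = W1 * s[:, None]

# The outputs are identical, so the standardized problem is
# representationally equivalent to the original one.
print(np.allclose(forward(X, W1), forward(X_scaled, W1_scaled)))  # True
```

So any effect of this kind of normalization on training comes from the optimization (starting values, step sizes), not from what the net can represent.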

-- 

Warren S. Sarle       SAS Institute Inc.   The opinions expressed here
saswss@unx.sas.com    SAS Campus Drive     are mine and not necessarily
(919) 677-8000        Cary, NC 27513, USA  those of SAS Institute.
