Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!news.mathworks.com!news.duke.edu!news-feed-1.peachnet.edu!gatech!howland.reston.ans.net!news.sprintlink.net!redstone.interpath.net!sas!mozart.unx.sas.com!saswss
From: saswss@hotellng.unx.sas.com (Warren Sarle)
Subject: Re: Normalizes the input patterns
Originator: saswss@hotellng.unx.sas.com
Sender: news@unx.sas.com (Noter of Newsworthy Events)
Message-ID: <D4qFwy.Mrz@unx.sas.com>
Date: Tue, 28 Feb 1995 23:00:34 GMT
X-Nntp-Posting-Host: hotellng.unx.sas.com
References: <1995Feb25.074319.4024@uxmail.ust.hk> <Pine.SOL.3.91.950227044837.20654A-100000@bingsun1>
Organization: SAS Institute Inc.
Lines: 46


Another lovely example of neural net people not understanding each
other's terminology:

In article <Pine.SOL.3.91.950227044837.20654A-100000@bingsun1>, Scott Hackett <br00372@bingsuns.cc.binghamton.edu> writes:
|>
|> On Sat, 25 Feb 1995, Wong Tsz Cheong wrote:
|>
|> > Hi all,
|> >   By some books, the training patterns are normalized by the following
|> >   equation:
|> >       Max = max value in the training set.
|> >       Min = min value in the training set.
|> >       range = Max - Min
|> >       all value of training patterns = (Old value - min) / range
|> >
|> >   It is fine for training.  However, does it mean we should remember the
|> >   values of Max and Min for recognition?
|> >   If we don't remember them, the values of input nodes after normalization
|> >   are not the same as those in training, even though the same pattern
|> >   is used in both training and recognition.
...
|> Whatever source you got that normalization function from should have its
|> proofreader fired.  The normalization function for a vector of n
|> dimensions is :
|>
|>                                       xi (old)
|>              xi (new) =  -------------------------------
|>                          sqrt( x1^2 + x2^2 + ... + xn^2)
|>
|> where x1 is the first component value of the vector, x2 is the second and
|> xi is the ith component, etc.  This makes a vector unit length, while
|> still keeping the vector's original direction in n-space.  However, this
|> will not work for one dimensional vectors, since normalizing a one
|> dimensional vector to unit length will obviously always be 1.  ...

The first person is talking about normalizing variables (in statistical
jargon), the second is talking about normalizing cases. If you folks
can't keep straight a distinction as basic as this, I don't know if
there is any hope for you. :-)
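To make the distinction concrete, here is a small sketch of both operations
in Python with NumPy (the data values and variable names are illustrative,
not from either poster). Normalizing *variables* rescales each column using
the training min and range, which must be saved and reused at recognition
time; normalizing *cases* rescales each row to unit Euclidean length.

```python
import numpy as np

# Hypothetical training data: rows are cases, columns are variables.
X_train = np.array([[1.0, 10.0],
                    [2.0, 20.0],
                    [3.0, 40.0]])

# --- Normalizing VARIABLES (the first poster's scheme) ---
# Rescale each column to [0, 1] using the training min and max.
col_min = X_train.min(axis=0)
col_range = X_train.max(axis=0) - col_min
X_scaled = (X_train - col_min) / col_range

# At recognition time the SAME min and range must be reused;
# otherwise the inputs land on a different scale than in training.
x_new = np.array([2.5, 30.0])
x_new_scaled = (x_new - col_min) / col_range

# --- Normalizing CASES (the second poster's scheme) ---
# Rescale each row (vector) to unit length, preserving its direction.
X_unit = X_train / np.linalg.norm(X_train, axis=1, keepdims=True)
```

Note that the two operations answer different questions: the first puts all
input variables on a comparable scale, the second removes magnitude
differences between patterns.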

-- 

Warren S. Sarle       SAS Institute Inc.   The opinions expressed here
saswss@unx.sas.com    SAS Campus Drive     are mine and not necessarily
(919) 677-8000        Cary, NC 27513, USA  those of SAS Institute.
