Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!news.mathworks.com!hookup!swrinde!cs.utexas.edu!news.sprintlink.net!crash!pzcc.bitenet!news
From: Duane DeSieno <duaned@cts.com>
Subject: Re: input value constraints?
Organization: /etc/organization
Date: Mon, 23 Jan 1995 18:09:06 GMT
Message-ID: <D2vEF8.2wA@crash.cts.com>
References:  <3fu4gh$7jp@lyra.csx.cam.ac.uk>
Sender: news@crash.cts.com (news subsystem)
Nntp-Posting-Host: loci.cts.com
Lines: 35

> Silly question, possibly:
> 
> What are the recommended constraints for input values to neural networks re.
> normalisation etc.
> 
> Why must the vector components fall between 0 and 1, for example? surely this
> depends on how the net simulator evaluates "closeness" of vectors e.g. dot
> product vs. Euclidean distance?
> 
> Thomas.
> 
Thomas,

Using the simple back propagation algorithm as an example, if different
inputs have different dynamic ranges, the associated weights will have
effectively different learning rates.  Once the weights of a processing
element grow large, a "stalling" condition can occur, hurting the net's
ability to further adapt the weights.
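A toy sketch of the effect (the data and names here are made up for
illustration, not from any particular package): for a single linear unit,
backprop's gradient on each weight is (output - target) * input, so an
input spanning 0..1000 drives its weight with updates roughly 1000x
larger than an input spanning 0..1, even at the same learning rate.

```python
import numpy as np

# Hypothetical inputs with very different dynamic ranges.
rng = np.random.default_rng(0)
x_small = rng.uniform(0, 1, 1000)      # input in [0, 1]
x_large = rng.uniform(0, 1000, 1000)   # input in [0, 1000]
error = rng.normal(size=1000)          # stand-in for (output - target)

# Per-pattern gradient magnitude on each weight is |error * input|.
g_small_mag = np.abs(error * x_small).mean()
g_large_mag = np.abs(error * x_large).mean()

# g_large_mag is roughly 1000x g_small_mag for the same error signal,
# so the large-range input's weight effectively learns much faster.
```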

Controlling the range of each input can greatly help in avoiding the
stalling.  The method usually used is to subtract the mean value for an
input and divide by its standard deviation.  This is so commonly used
that most neural net packages (like THINKS) have built-in preprocessing.
Don Specht recommends it for use with his general regression neural net.
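That mean/standard-deviation preprocessing might be sketched as below
(a minimal NumPy version; the function name and zero-variance guard are
my own additions, not from any particular package):

```python
import numpy as np

def standardize(X):
    """Per-column z-score: subtract each input's mean, divide by its
    standard deviation.  X is (patterns, inputs)."""
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    sigma = np.where(sigma == 0, 1.0, sigma)  # guard constant inputs
    return (X - mu) / sigma, mu, sigma

# Example: two inputs with wildly different ranges end up comparable.
X = np.column_stack([np.linspace(0, 1, 50),       # range ~1
                     np.linspace(0, 1000, 50)])   # range ~1000
Xs, mu, sigma = standardize(X)
```

Note that the same mu and sigma saved from the training data should be
applied to any later test inputs, so train and test are scaled alike.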

Hope this helps.

Duane DeSieno
Logical Designs Consulting, Inc.
2015 Olite Ct.
La Jolla, CA  92037
(619)459-6236
duaned@cts.com


