Newsgroups: sci.logic,sci.stat.math,comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!nntp.club.cc.cmu.edu!miner.usbm.gov!news.er.usgs.gov!news1.radix.net!in1.nntp.cais.net!news.mathworks.com!news.sprintlink.net!news-peer.sprintlink.net!news.sprintlink.net!news-pull.sprintlink.net!news.sprintlink.net!news-stk-3.sprintlink.net!interpath!news.interpath.net!news.interpath.net!sas!newshost.unx.sas.com!saswss
From: saswss@hotellng.unx.sas.com (Warren Sarle)
Subject: Re: Output unit scaling ?
Originator: saswss@hotellng.unx.sas.com
Sender: news@unx.sas.com (Noter of Newsworthy Events)
Message-ID: <E1v0r3.LHu@unx.sas.com>
Date: Tue, 3 Dec 1996 22:49:51 GMT
X-Nntp-Posting-Host: hotellng.unx.sas.com
References: <32994424.136C@postoffice.worldnet.att.net> <57bt40$brh@gap.cco.caltech.edu> <32A0FD04.F96@postoffice.worldnet.att.net> <E1sDx4.CsE@cs.nott.ac.uk>
Organization: SAS Institute Inc.
Keywords: Neural Networks Output Units
Followup-To: comp.ai.neural-nets
Lines: 41
Xref: glinda.oz.cs.cmu.edu sci.logic:21136 sci.stat.math:13585 comp.ai.neural-nets:34927


In article <E1sDx4.CsE@cs.nott.ac.uk>, ebx@cs.nott.ac.uk (Edward A G Burcher) writes:
|> ...
|> I am trying to build a 7-10-10-3 feedforward neural net, with full
|> connectivity between successive layers but no (direct) connectivity between
|> non-adjacent layers. I am currently using the standard sigmoid function
|> as my activation function. The problem is that I have training and test data
|> where all seven inputs typically vary in a small range, 0-0.3. I am quite happy
|> to normalise this data; however, my 3 output units are tricky to deal with.
|> One of them varies in the range 200-20000, the second from 100-300 and the
|> third 20-70. Clearly, with such large numbers, the network will find it 
|> difficult (impossible?) to be trained on such data, as my activation function
|> only gives output in the [0,1] range. 

See "Why use activation functions?" and "Should I normalize/standardize/rescale 
the data?" in the Neural Network FAQ, part 2 of 7: Learning, at
ftp://ftp.sas.com/pub/neural/FAQ2.html

|> I have heard it is possible to adapt the
|> sigmoid function to give a nonlinear activation function with a larger range.
|> How is this done exactly, and would it be a suitable technique for solving the
|> problem ?
...
|> I was thinking of something as simple as a scaling function, such as
|> 
|> (unit value - min value the unit can take) / (max value the unit can take - min value)
|> 
|> Is this suitable (assuming all of this is a valid approach) ...

Yes.
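For concreteness, the scaling the poster describes, and its inverse (which maps the network's [0,1] outputs back to the original target ranges), can be sketched as follows. This is a minimal illustration, not from the original post; the per-unit ranges are the ones quoted above, and the example target values are made up.

```python
def scale(y, y_min, y_max):
    """Map a raw target into [0,1] via (y - min) / (max - min)."""
    return (y - y_min) / (y_max - y_min)

def unscale(s, y_min, y_max):
    """Invert the scaling: map a network output in [0,1] back to the
    original target range."""
    return y_min + s * (y_max - y_min)

# The three output units from the post, with their stated ranges.
ranges = [(200.0, 20000.0), (100.0, 300.0), (20.0, 70.0)]

# Hypothetical raw target vector for one training case (made up).
targets = [5000.0, 250.0, 45.0]

scaled = [scale(y, lo, hi) for y, (lo, hi) in zip(targets, ranges)]
recovered = [unscale(s, lo, hi) for s, (lo, hi) in zip(scaled, ranges)]
```

Train on the scaled targets; at prediction time, apply unscale to each sigmoid output to recover values in the original units.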

I have restricted follow-ups to comp.ai.neural-nets.


-- 

Warren S. Sarle       SAS Institute Inc.   The opinions expressed here
saswss@unx.sas.com    SAS Campus Drive     are mine and not necessarily
(919) 677-8000        Cary, NC 27513, USA  those of SAS Institute.
 *** Do not send me unsolicited commercial or political email! ***

