Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!bb3.andrew.cmu.edu!nntp.sei.cmu.edu!news.cis.ohio-state.edu!math.ohio-state.edu!howland.reston.ans.net!newsfeed.internetmci.com!cyberspace.com!news.sprintlink.net!news-stk-3.sprintlink.net!news.interpath.net!sas!newshost.unx.sas.com!saswss
From: saswss@hotellng.unx.sas.com (Warren Sarle)
Subject: Re: simple question
Originator: saswss@hotellng.unx.sas.com
Sender: news@unx.sas.com (Noter of Newsworthy Events)
Message-ID: <Dt81vG.Gqp@unx.sas.com>
Date: Wed, 19 Jun 1996 00:40:28 GMT
X-Nntp-Posting-Host: hotellng.unx.sas.com
References:  <31C3249B.7C36@onlink.net>
Organization: SAS Institute Inc.
Lines: 26


In article <31C3249B.7C36@onlink.net>, Maneesh Yadav <wfss16@onlink.net> writes:
|> What kind of architecture do you use to make a NN that will evaluate a simple linear function like:
|> 
|> y = 0.1x
|> 
|> or a function like (not linear):
|> 
|> y = x^2
|> 
|> Using a backpropagation simulator I tried presenting about 20 inputs with varying architectures, and the test
|> inputs wouldn't even come close to the expected results.
|> 
|> What am I doing wrong?

My guess would be that your output activation function does not cover
the range of y values in your training and test data. Try an identity
(linear) output activation function. Any of the usual architectures
(MLP, RBF, etc.) should work fine. For y = 0.1x, it will be easier if
you use no hidden units.
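[Not part of the original post: a minimal numpy sketch of the suggestion above, fitting y = 0.1x with a single linear unit (identity output, no hidden layer) and y = x^2 with one tanh hidden layer feeding an identity output, both trained by plain batch gradient descent on squared error. The network sizes, learning rates, and iteration counts are illustrative assumptions, not a prescription.]

```python
import numpy as np

rng = np.random.default_rng(0)

# --- y = 0.1x: a single linear unit, no hidden layer needed ---
x = np.linspace(-1, 1, 20).reshape(-1, 1)
y = 0.1 * x

w, b = 0.0, 0.0           # one weight, one bias
lr = 0.1
for _ in range(2000):
    err = (w * x + b) - y            # identity output activation
    w -= lr * (err * x).mean()       # gradient of mean squared error
    b -= lr * err.mean()
# w converges to 0.1, b to 0

# --- y = x^2: one tanh hidden layer, identity (linear) output ---
X = np.linspace(-1, 1, 20).reshape(-1, 1)
Y = X ** 2

H = 8                                 # hidden units (arbitrary choice)
W1 = rng.normal(scale=0.5, size=(1, H)); b1 = np.zeros(H)
W2 = rng.normal(scale=0.5, size=(H, 1)); b2 = np.zeros(1)

lr = 0.2
for _ in range(20000):
    h = np.tanh(X @ W1 + b1)          # hidden activations
    out = h @ W2 + b2                 # identity output covers all y values
    g_out = 2 * (out - Y) / len(X)    # dMSE/dout
    gW2 = h.T @ g_out; gb2 = g_out.sum(0)
    g_h = g_out @ W2.T * (1 - h ** 2) # backprop through tanh
    gW1 = X.T @ g_h;  gb1 = g_h.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(((np.tanh(X @ W1 + b1) @ W2 + b2 - Y) ** 2).mean())
```

With a sigmoid or tanh *output* instead of the identity, the net could never reach targets outside the activation's range, which is the failure mode suggested above.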

-- 

Warren S. Sarle       SAS Institute Inc.   The opinions expressed here
saswss@unx.sas.com    SAS Campus Drive     are mine and not necessarily
(919) 677-8000        Cary, NC 27513, USA  those of SAS Institute.
