Newsgroups: comp.ai.neural-nets
From: saswss@hotellng.unx.sas.com (Warren Sarle)
Subject: Re: [Q] Function approximation using neural nets?
Originator: saswss@hotellng.unx.sas.com
Sender: news@unx.sas.com (Noter of Newsworthy Events)
Message-ID: <D3LqG0.5DK@unx.sas.com>
Date: Mon, 6 Feb 1995 23:26:23 GMT
References:  <3h375o$bnr@masala.cc.uh.edu>
Nntp-Posting-Host: hotellng.unx.sas.com
Organization: SAS Institute Inc.


In article <3h375o$bnr@masala.cc.uh.edu>, cosc19ub@menudo.uh.edu (cosc19ub) writes:
|> ...
|>   In classic approximation theory, arbitrary nonlinear functions can be
|>   approximated by a linear combination of 'powerful' nonlinear basis
|>   functions such as orthogonal polynomials and splines.
|>
|>   In the NN paradigm, the radial basis function network is one of the
|>   universal approximators. I would like to know what other networks share
|>   the following features of the RBFN:
|>
|>    1. a 2-layer structure (1 hidden layer)
|>    2. weights exist only on the hidden-to-output connections
|>    3. the output is a linear combination of the hidden outputs
|>
|>   Thus, learning can be treated as linear regression.
|>
|>   Can anybody point out what kinds of networks satisfy the above
|>   requirements, or what other activation functions for the hidden units
|>   can achieve this goal?

Any method from classic approximation theory that uses linear
combinations of basis functions can be set up as a neural network--
polynomials, splines, trigonometric functions, whatever. The basis
functions serve as fixed hidden units, the output is their linear
combination, and fitting the hidden-to-output weights is an ordinary
linear least-squares problem.
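
To make this concrete, here is a minimal sketch in Python with NumPy
(my own illustration, not from the thread; the target function, the
centers, and the width are arbitrary choices), using Gaussian RBFs as
the fixed hidden units:

import numpy as np

# Target function to approximate (illustrative choice).
def target(x):
    return np.sin(2 * np.pi * x)

# Noisy training data.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = target(x) + 0.05 * rng.standard_normal(x.shape)

# Hidden layer: fixed Gaussian RBFs. Centers and width are chosen
# a priori, so there are no trainable input-to-hidden weights --
# exactly the setup described in the question.
centers = np.linspace(0.0, 1.0, 10)
width = 0.1

def hidden(x):
    # Design matrix: one column per basis function plus a bias column.
    phi = np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2 * width ** 2))
    return np.column_stack([np.ones_like(x), phi])

# Output layer: fitting the hidden-to-output weights is ordinary
# linear least squares.
w = np.linalg.lstsq(hidden(x), y, rcond=None)[0]

# Evaluate the fitted network on a test grid.
x_test = np.linspace(0.0, 1.0, 200)
y_hat = hidden(x_test) @ w
print("max abs error:", np.max(np.abs(y_hat - target(x_test))))

Swap the Gaussian design matrix for np.vander(x, 10) (polynomials) or
for columns of sines and cosines, and the same one-line fit applies --
which is the point made above.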

-- 

Warren S. Sarle       SAS Institute Inc.   The opinions expressed here
saswss@unx.sas.com    SAS Campus Drive     are mine and not necessarily
(919) 677-8000        Cary, NC 27513, USA  those of SAS Institute.
