Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!bb3.andrew.cmu.edu!newsfeed.pitt.edu!godot.cc.duq.edu!newsgate.duke.edu!news.mathworks.com!newsfeed.internetmci.com!in2.uu.net!news.interpath.net!sas!newshost.unx.sas.com!saswss
From: saswss@hotellng.unx.sas.com (Warren Sarle)
Subject: Re: NN are basis functions?
Originator: saswss@hotellng.unx.sas.com
Sender: news@unx.sas.com (Noter of Newsworthy Events)
Message-ID: <DuAvK1.1GG@unx.sas.com>
Date: Tue, 9 Jul 1996 23:50:25 GMT
X-Nntp-Posting-Host: hotellng.unx.sas.com
References:  <4rll1u$5sr@ns1.nl.cis.philips.com>
Organization: SAS Institute Inc.
Lines: 30


In article <4rll1u$5sr@ns1.nl.cis.philips.com>, T.W.J.Jakobs@nl.cis.philips.com (Jakobs) writes:
|> In my opinion, NNs can be regarded as basis functions, somewhat
|> similar to the sine/cosine basis functions used in the Fourier
|> transform. I have searched for literature that addresses this
|> similarity, but couldn't find much.

That specific connection is shown in: 

   Gallant, A.R. and White, H. (1988), "There exists a neural network
   that does not make avoidable mistakes," IEEE Second International
   Conference on Neural Networks, San Diego: SOS Printing, I, 657-664,
   reprinted in White, H. (1992), Artificial Neural Networks:
   Approximation and Learning Theory, Oxford: Blackwell.

More generally, there are numerous articles on NNs as "universal
approximators". For example:

   Hornik, K., Stinchcombe, M. and White, H. (1989), "Multilayer
   feedforward networks are universal approximators," Neural Networks,
   2, 359-366.

   Hornik, K. (1993), "Some new results on neural network 
   approximation," Neural Networks, 6, 1069-1072.
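As an illustrative sketch of the universal-approximator idea (mine, not
taken from the papers above): a single hidden layer of tanh units acts
as a set of basis functions, and with the hidden weights fixed at
random values, the output weights can be fitted by ordinary linear
least squares. All names and parameter choices below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

x = np.linspace(-np.pi, np.pi, 200)            # training inputs
y = np.sin(x)                                  # target function

n_hidden = 50
w = rng.normal(scale=2.0, size=n_hidden)       # random hidden weights
b = rng.uniform(-np.pi, np.pi, size=n_hidden)  # random hidden biases

# Each column of H is one hidden unit's output -- one "basis function".
H = np.tanh(np.outer(x, w) + b)

# Fit the output weights by linear least squares, as in a Fourier fit.
c, *_ = np.linalg.lstsq(H, y, rcond=None)

err = np.max(np.abs(H @ c - y))                # max approximation error
print(err)
```

Even with the hidden layer untrained, a modest number of tanh units
approximates a smooth target closely, which is the sense in which the
hidden units play the role that sines and cosines play in a Fourier
expansion.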

-- 

Warren S. Sarle       SAS Institute Inc.   The opinions expressed here
saswss@unx.sas.com    SAS Campus Drive     are mine and not necessarily
(919) 677-8000        Cary, NC 27513, USA  those of SAS Institute.
