Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!bb3.andrew.cmu.edu!newsfeed.pitt.edu!gatech!news.mathworks.com!uhog.mit.edu!news.mtholyoke.edu!world!mv!barney.gvi.net!redstone.interpath.net!sas!mozart.unx.sas.com!saswss
From: saswss@hotellng.unx.sas.com (Warren Sarle)
Subject: Re: Continuity
Originator: saswss@hotellng.unx.sas.com
Sender: news@unx.sas.com (Noter of Newsworthy Events)
Message-ID: <DKzo9u.CKI@unx.sas.com>
Date: Wed, 10 Jan 1996 23:37:05 GMT
X-Nntp-Posting-Host: hotellng.unx.sas.com
References:  <4cr06s$ium@highway.leidenuniv.nl>
Organization: SAS Institute Inc.
Lines: 24


In article <4cr06s$ium@highway.leidenuniv.nl>, raaijmakers@rulxho.leidenuniv.nl (Stephan Raaijmakers) writes:
|> Can neural nets learn non-continuous functions?

Yes and no. I assume we are talking about feedforward nets with
continuous (usually tanh or logistic) activation functions.
In fact, feedforward nets without regularization often find near-
discontinuities in noisy data where no actual discontinuities exist.

In the unidimensional case, if the function is bounded and has a finite
number of discontinuities, you can make the mean squared error
arbitrarily small by using enough hidden units. Each discontinuity
requires a hidden unit with a large input weight, which turns the
smooth sigmoid into a sharp ramp. However, the approximation is not
uniform: you cannot make the maximum absolute error arbitrarily small,
because the net's output is continuous and so must pass through the
middle of each jump.
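A quick numerical sketch of this (my own illustration, not from the
original post): approximate the Heaviside step with a single tanh unit,
g(x) = 0.5 + 0.5*tanh(w*x). Growing the weight w drives the MSE toward
zero, but the error at the jump itself is stuck at 0.5, since
g(0) = 0.5 no matter how large w gets.

```python
import numpy as np

# Target: Heaviside step on [-1, 1], with the jump at x = 0.
x = np.linspace(-1.0, 1.0, 2001)      # grid that includes the jump
step = (x >= 0).astype(float)

# One tanh hidden unit; larger input weight w = sharper ramp.
for w in (10.0, 100.0, 1000.0):
    g = 0.5 + 0.5 * np.tanh(w * x)
    mse = np.mean((g - step) ** 2)          # shrinks as w grows
    max_err = np.max(np.abs(g - step))      # stays at 0.5 (error at x = 0)
    print(f"w={w:7.1f}  MSE={mse:.6f}  max|err|={max_err:.3f}")
```

Training a net on noisy samples of such a target just finds these
large weights on its own, which is the near-discontinuity effect
mentioned above.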

In the multidimensional case, you can obviously fit _some_ kinds of
discontinuities, but I don't know of any general result that
characterizes what types of discontinuities can be approximated.
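One multidimensional case that clearly works (again my own sketch, not
a general result): a jump along a hyperplane. A single tanh unit whose
input weights point along the normal direction approximates the
indicator of a half-plane, e.g. f(x, y) = 1 if x + y >= 0, else 0.
Jumps along curved boundaries would need more units, and that is where
no general characterization seems to be known.

```python
import numpy as np

# Random test points in the square [-1, 1]^2.
rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, size=(10_000, 2))
target = (pts[:, 0] + pts[:, 1] >= 0).astype(float)

# One tanh unit with input weights (w, w): a sharp ramp across the
# line x + y = 0, i.e. across the hyperplane where the jump lies.
w = 500.0
g = 0.5 + 0.5 * np.tanh(w * (pts[:, 0] + pts[:, 1]))
print("MSE:", np.mean((g - target) ** 2))   # small: error confined to a
                                            # thin band around the line
```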

-- 

Warren S. Sarle       SAS Institute Inc.   The opinions expressed here
saswss@unx.sas.com    SAS Campus Drive     are mine and not necessarily
(919) 677-8000        Cary, NC 27513, USA  those of SAS Institute.
