Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!rochester!udel!news.sprintlink.net!howland.reston.ans.net!vixen.cso.uiuc.edu!uchinews!kimbark!gal2
From: gal2@kimbark.uchicago.edu (Jacob Galley)
Subject: Re: Measuring nonlinearity
X-Nntp-Posting-Host: midway.uchicago.edu
Message-ID: <DF9Es4.J66@midway.uchicago.edu>
Sender: news@midway.uchicago.edu (News Administrator)
Reply-To: gal2@midway.uchicago.edu
Organization: University of Chicago
References: <alwang.170.021FAD3C@eniac.seas.upenn.edu>
Date: Thu, 21 Sep 1995 14:26:27 GMT
Lines: 27

alwang@eniac.seas.upenn.edu (Teen Age Riot) writes:
>
>I suppose this doesn't fall directly under neural networks, but I'm sure it's 
>a subject considered by many others in the field: given a training data set, 
>is there any way to numerically evaluate the linearity of the system?  This 
>would be useful in determining the number of layers and nodes necessary for a 
>network to learn the data.

There is a test called the BDS statistic, which tests the null
hypothesis that a time series is independent and identically
distributed (i.e., iid: no pattern, "random") against an unspecified
alternative (such as some kind of nonlinear process).  I don't know
whether this test can be applied directly to NN research.  Well, I
guess if the test indicates that your data are iid, then you know it's
time to give up on predicting anything in them.  Otherwise, I don't
think the test will bring any revelations about how to design your
neural network system.  But reading about how the BDS statistic works
is interesting.  Here's the reference:

William A. Brock, David A. Hsieh and Blake LeBaron, _Nonlinear
Dynamics, Chaos, and Instability: Statistical Theory and Economic
Evidence_, (MIT 1991).

Jake.

-- 
Reinheitsgebot <-- "Keep your laws off my beer!" <-- gal2@midway.uchicago.edu
