Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!rochester!udel!news.mathworks.com!gatech!psinntp!psinntp!psinntp!psinntp!megatest!news
From: Dave Jones <djones>
Subject: Re: Radial Basis Neural Networks
Content-Type: text/plain; charset=us-ascii
Message-ID: <DLvIuL.DC6@Megatest.COM>
To: alumno3@alumno3.vnet.es
Sender: news@Megatest.COM (News Admin)
Nntp-Posting-Host: pluto
Content-Transfer-Encoding: 7bit
Organization: Megatest Corporation
References: <4e8lso$b4g@minerva.ibernet.es>
Mime-Version: 1.0
Date: Sun, 28 Jan 1996 04:23:08 GMT
X-Mailer: Mozilla 1.1N (X11; I; SunOS 5.4 sun4m)
X-Url: news:4e8lso$b4g@minerva.ibernet.es
Lines: 55

I'm no expert in this subject. With that caveat, I offer this opinion:

Your immediate problem has nothing to do with the "curse of dimensionality"
as another poster said, but you will have to deal with the curse eventually.

I don't know what you mean by "the number of patterns", but I suspect that the
routine "solverb" is using a correlation matrix to solve for radial basis
parameters. There are other ways to do that which would not require an
N x N array, where N is the number of training patterns.
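I don't know solverb's internals, so this is a guess at an alternative: if
you use fewer basis centers than training patterns (M centers, M much less
than N), the least-squares fit only needs an N x M design matrix, never an
N x N one. A rough sketch in Python/NumPy (function names and the Gaussian
width parameter are my own inventions, not anything from the Matlab toolbox):

```python
import numpy as np

def rbf_fit(X, y, centers, width=0.5):
    """Fit RBF output weights by least squares.

    Uses M centers with M << N patterns, so the design matrix is
    N x M instead of the N x N matrix a full interpolation needs."""
    # squared distance from every pattern to every center: N x M
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    Phi = np.exp(-d2 / (2.0 * width ** 2))        # Gaussian activations
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # least-squares weights
    return w

def rbf_predict(X, centers, w, width=0.5):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2)) @ w
```

With M fixed, memory grows only linearly in the number of patterns, which
is the point: the quoted 10000 x 10000 matrix never has to exist.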

HOWEVER. This is the curse of dimensionality: To train a non-parametric
network (one that makes no assumptions about the distribution of the data),
you have to give it enough data to hit all the peaks and valleys. The
more dimensions you have, the more places there are for peaks and valleys
to be. The amount of data you need is a function of how it is clustered, and
the experts do not agree completely on the figures, but the requirements
are somewhere in these general ranges: With 7 dimensions, you will need
approximately 5,000 to 10,000 training vectors. With 10 dimensions, close to a
million. With (gasp) 10,000 inputs, who knows? More training vectors than there
are electrons in the galaxy, I'm guessing.
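A back-of-envelope way to see the growth (my own illustration, not the
figures above): if each of d inputs is resolved into k distinct bins, the
number of cells the data has to cover is k to the power d.

```python
# Cells to cover if each of d inputs is split into k bins.
# The count grows exponentially in d -- the curse of dimensionality.
def cells(k, d):
    return k ** d

print(cells(10, 2))   # 100
print(cells(10, 7))   # 10,000,000
print(cells(10, 10))  # 10,000,000,000
```

Even at a very coarse 10 bins per input, 10 inputs already means ten
billion cells, and most of them will never see a single training vector.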

The other fellow seems to imply that using a multi-layer perceptron
would solve the curse-of-dimensionality problem. I don't see it. Perhaps
someone could explain how changing the model would help. I don't see why
an MLP would suffer any less from data-starvation than would an RBN. It seems
to me that the problem is not in the choice of non-parametric model,
the problem is in the problem. No non-parametric model can fill in the
gaps if you don't give it enough clues.

Either you must switch to some parametric model based on your knowledge
of the problem space, or you are going to need to do some data reduction
somehow. Principal component analysis might help.
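For what it's worth, here is a minimal PCA sketch via the SVD (again
Python/NumPy, and the function name is mine): project the inputs onto
their top few principal directions before training, so the network sees
far fewer dimensions.

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project data onto its top principal components (via SVD)."""
    Xc = X - X.mean(axis=0)                # center each input dimension
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T        # scores in the reduced space
```

Of course this only helps if the inputs really are correlated; if all
10,000 dimensions carry independent information, no linear projection
will save you.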


                Good luck
                Dave

alumno3 <alumno3@alumno3.vnet.es> wrote:
>We are developing a project on systems control with Radial Basis 
>Neural Networks. Our problem is this:
>
>  When solverb, a routine from the Matlab neural network toolbox, is 
>used with a NN that has two (or more) inputs, the number of patterns is 
>the combination of all inputs. Example: 
>
> If each input has 100 different values, the number of patterns grows 
>to 10000. Thus the solverb routine builds a matrix with 10000x10000 
>elements and causes memory problems.
>
>Please, if anyone knows a different routine that solves this problem, 
>send it to:
>
>                   vic@vgg.sci.uma.es
>
>

