Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!news.mathworks.com!news.duke.edu!concert!sas!mozart.unx.sas.com!saswss
From: saswss@hotellng.unx.sas.com (Warren Sarle)
Subject: Re: # of hidden nodes for Radial Basis Function Networks ?
Originator: saswss@hotellng.unx.sas.com
Sender: news@unx.sas.com (Noter of Newsworthy Events)
Message-ID: <Cy7Bpu.C31@unx.sas.com>
Date: Mon, 24 Oct 1994 23:53:54 GMT
References:  <1994Oct23.165843.30929@cc.usu.edu>
Nntp-Posting-Host: hotellng.unx.sas.com
Organization: SAS Institute Inc.
Lines: 42


In article <1994Oct23.165843.30929@cc.usu.edu>, Rutvik Desai <rutvik@sys3.cs.usu.edu> writes:
|>      I am working on a character classification problem
|> using backprop and radial basis function networks, using Neuralware.
|> I am somewhat familiar with backprop, but not much with RBFNs.
|> Is there any rule of thumb for finding # of nodes in the prototype
|> layer in RBFNs ?

For some kinds of RBF nets (kernel methods), there are rules of
thumb for the smoothing parameters, but for the kind you are
dealing with, I am not aware of any useful ones. However, there
are methods for choosing the number of prototype nodes. In particular,
since some RBF nets are linear models, all the statistical theory
for linear models applies. See my 22 Oct 94 post on "Re: Measuring
generalization."
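To make the "linear model" point concrete, here is a minimal sketch (not from the original post) of what that theory buys you: with the prototype centers and widths held fixed, the output weights of a Gaussian RBF net are found by ordinary least squares, so a standard model-selection criterion such as AIC can be used to compare different numbers of prototype nodes. All data, center placements, and the width value below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression problem (illustrative only)
x = np.linspace(-3, 3, 80)
y = np.sin(x) + 0.1 * rng.standard_normal(x.size)

def rbf_design(x, centers, width):
    """Design matrix of Gaussian basis functions plus a bias column."""
    phi = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))
    return np.hstack([np.ones((x.size, 1)), phi])

def fit_and_aic(x, y, k, width=0.8):
    """Fit output weights by least squares; score with an AIC-style criterion."""
    centers = np.linspace(x.min(), x.max(), k)   # fixed, evenly spaced prototypes
    phi = rbf_design(x, centers, width)
    w, *_ = np.linalg.lstsq(phi, y, rcond=None)  # linear in the weights
    sse = np.sum((phi @ w - y) ** 2)
    n, p = x.size, k + 1                         # p parameters: k weights + bias
    return n * np.log(sse / n) + 2 * p           # lower is better

scores = {k: fit_and_aic(x, y, k) for k in (2, 5, 10, 20, 40)}
best_k = min(scores, key=scores.get)
```

Because only the output layer is being fit, each candidate number of prototypes costs one linear solve rather than a full nonlinear training run.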

|> Seems to me that they use many more hidden nodes
|> in RBFN than in BP.

It is true that RBF nets often need more prototype nodes than MLP nets
need hidden nodes, but it is usually more meaningful to compare
networks in terms of number of weights.
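As a quick sanity check of that comparison, the weight counts can be tallied for the poster's own sizes (35 inputs, 62 outputs, a 60-80 node MLP versus a 200-prototype RBF net). This is a rough sketch: parameter-counting conventions vary (e.g. whether widths are shared or counted per prototype), so the RBF tally below, which counts one center vector per prototype plus a linear output layer, is an assumption.

```python
def mlp_weights(n_in, n_hid, n_out):
    """One-hidden-layer MLP: input-to-hidden and hidden-to-output weights, with biases."""
    return (n_in + 1) * n_hid + (n_hid + 1) * n_out

def rbf_weights(n_in, n_proto, n_out):
    """RBF net: one center per prototype (n_in values each) plus a linear output layer."""
    return n_proto * n_in + (n_proto + 1) * n_out

# Poster's problem: 35 inputs, 62 output classes
mlp_total = mlp_weights(35, 70, 62)    # 70 hidden nodes -> 6922 weights
rbf_total = rbf_weights(35, 200, 62)   # 200 prototypes  -> 19462 weights
```

Under this counting, the 200-prototype RBF net has roughly three times as many adjustable parameters as the 70-hidden-node MLP, which is the kind of comparison the text recommends over comparing node counts directly.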

|> I have 35 input features and 62 output classes.
|> Typically, I would try 60-80 hidden nodes for BP. Example of RBFN given
|> with Neuralware uses 20 nodes in prototype layer for only 2 i/p
|> and 3 o/ps, so I was wondering if you need to use many more
|> hidden (i.e. prototype layer) nodes for RBFN than in BP. I have
|> tried around 200 hidden nodes for RBFN, but performance is much
|> worse than BP (90% vs 70%). Are there any other pitfalls or
|> fine tuning that should be taken care of for RBFNs ? Any
|> comments will be appreciated.

RBF nets suffer much more than MLPs from the "curse of dimensionality",
so it is important to select relevant inputs and scale them reasonably.
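One common way to do that scaling (a hedged sketch, not something the post prescribes) is to standardize each input to zero mean and unit standard deviation before the Euclidean distances to the prototypes are computed, so that no single large-scale feature dominates the radial distances.

```python
import numpy as np

def standardize(X):
    """Center and scale each input column to zero mean, unit standard deviation.

    Returns the scaled data plus (mu, sd) so the same transform
    can be applied to test data.
    """
    mu = X.mean(axis=0)
    sd = X.std(axis=0)
    sd = np.where(sd == 0, 1.0, sd)   # guard constant features
    return (X - mu) / sd, mu, sd

# Two features on wildly different scales: unscaled, the first column
# would dominate every distance-to-prototype computation.
X = np.array([[1000.0, 1.0],
              [2000.0, 2.0],
              [3000.0, 3.0]])
Z, mu, sd = standardize(X)
```

Test inputs should be transformed with the training-set `mu` and `sd`, not re-standardized on their own.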


-- 

Warren S. Sarle       SAS Institute Inc.   The opinions expressed here
saswss@unx.sas.com    SAS Campus Drive     are mine and not necessarily
(919) 677-8000        Cary, NC 27513, USA  those of SAS Institute.
