Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news4.ner.bbnplanet.net!cpk-news-feed2.bbnplanet.com!cpk-news-hub1.bbnplanet.com!cam-news-hub1.bbnplanet.com!news.mathworks.com!newsfeed.internetmci.com!newsxfer2.itd.umich.edu!agate!newsgate.duke.edu!interpath!news.interpath.net!sas!newshost.unx.sas.com!saswss
From: saswss@hotellng.unx.sas.com (Warren Sarle)
Subject: Re: Haykin's RBF Chapter
Originator: saswss@hotellng.unx.sas.com
Sender: news@unx.sas.com (Noter of Newsworthy Events)
Message-ID: <DwLz25.FEq@unx.sas.com>
Date: Fri, 23 Aug 1996 20:46:53 GMT
X-Nntp-Posting-Host: hotellng.unx.sas.com
References: <Pine.SUN.3.91.960820132749.21955A-100000@soma.med.utah.edu> <DwIAAJ.7nv@unx.sas.com>
Organization: SAS Institute Inc.
Lines: 49


In article <DwIAAJ.7nv@unx.sas.com>, saswss@hotellng.unx.sas.com (Warren Sarle) writes:
|> 
|> In article <Pine.SUN.3.91.960820132749.21955A-100000@soma.med.utah.edu>, Kai Kuck <kkuck@soma.med.utah.edu> writes:
|> |> 
|> |> I am planning to use an RBF network for a classification task and
|> |> currently read through Haykin's chapter on RBFs (Simon Haykin: Neural
|> |> Networks - A Comprehensive Foundation, Macmillan Publishing Company,
|> |> Englewood Cliffs, NJ, 1994: pp. 236ff.). 
|> |> 
|> |> For those of you who have read that chapter: Do you understand how this
|> |> Frechet differential works (pp. 247f.) ?  How about the differential
|> |> operator P and its adjoint P_star (p. 246, 249)? Do you understand why he
|> |> is using the Riesz representation (p. 248) ? I am working on Problem 7.10
|> |> (== derivation of equation (7.86)) and am lost, because I can't answer
|> |> these questions. 
|> 
|> My math is far too rusty to answer any of those questions either, but I
|> do not feel the least bit impaired on that account. Those mathematical
|> derivations obviously did not give Haykin any insight into the practical
|> use of RBF networks. For example, his section 7.9 comparing RBF networks
|> and MLPs is extremely shallow; see "How do MLPs compare with RBFs?" in
|> the Neural Network FAQ, part 2 of 7: Learning, at
|> ftp://ftp.sas.com/pub/neural/FAQ2.html for a much more thorough
|> discussion.

I have received some complaints via email about this post. It was not my
intention to disparage mathematics in any way, and I apologize to any
mathematicians who were offended by my comments. I have great respect
for people who can prove theorems about things such as asymptotic rates
of convergence of statistical estimators, which is something I'm no
good at whatsoever.

The point I was trying to make was that proving lots of profound
mathematical theorems does not necessarily give you sufficient insight
into practical applications, and Haykin's chapter on RBFs is a
particularly egregious example of this. For example, Haykin overlooks
the crucial distinction between ordinary RBF networks and normalized RBF
networks (and so do lots of other people).  Furthermore, someone who
wants to use an RBF network for some practical application does not need
to understand the details of Haykin's math. I think it is unfortunate
that Haykin gives the impression that RBF networks are much more
abstruse than MLPs.
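
To make the ordinary-vs-normalized distinction concrete, here is a small
sketch (mine, not Haykin's, with made-up centers, weights, and widths):
an ordinary RBF network is a plain weighted sum of Gaussian basis
functions, while a normalized RBF network divides each basis function by
the sum of all of them, so the output is a convex combination of the
weights. The practical difference shows up far from the centers.

```python
import numpy as np

# Hypothetical illustration only: one ordinary and one normalized RBF
# network with Gaussian basis functions. All values are invented.

def gaussian_basis(x, centers, width):
    """Gaussian basis functions phi_i(x) = exp(-||x - c_i||^2 / (2 w^2))."""
    return np.exp(-np.sum((x - centers) ** 2, axis=1) / (2 * width ** 2))

def ordinary_rbf(x, centers, weights, width):
    """Ordinary RBF network: plain weighted sum of the basis functions."""
    phi = gaussian_basis(x, centers, width)
    return weights @ phi

def normalized_rbf(x, centers, weights, width):
    """Normalized RBF network: basis functions divided by their sum,
    so the output is a convex combination of the weights."""
    phi = gaussian_basis(x, centers, width)
    return weights @ (phi / phi.sum())

centers = np.array([[0.0], [1.0], [2.0]])
weights = np.array([1.0, 2.0, 3.0])

# Far from every center, the ordinary network decays toward zero, but
# the normalized network still returns (roughly) the weight attached to
# the nearest center -- a qualitatively different extrapolation.
print(ordinary_rbf(np.array([10.0]), centers, weights, 0.5))
print(normalized_rbf(np.array([10.0]), centers, weights, 0.5))
```

The point of the sketch is just that the two architectures behave very
differently outside the data region, which matters for classification
even though both look superficially like "RBF networks".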

-- 

Warren S. Sarle       SAS Institute Inc.   The opinions expressed here
saswss@unx.sas.com    SAS Campus Drive     are mine and not necessarily
(919) 677-8000        Cary, NC 27513, USA  those of SAS Institute.
