Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!bb3.andrew.cmu.edu!nntp.sei.cmu.edu!cis.ohio-state.edu!math.ohio-state.edu!howland.reston.ans.net!newsjunkie.ans.net!butch!enterprise!news
From: Don Specht <specht@pc-smtp.rdd.lmsc.lockheed.com>
Subject: Re: Origins of GRNN
Message-ID: <1995Aug4.173305.26308@enterprise.rdd.lmsc.lockheed.com>
Sender: news@enterprise.rdd.lmsc.lockheed.com (News Administrator)
Organization: Lockheed Missiles & Space Co.
References: <1995Jul31.174825.26852@enterprise.rdd.lmsc.lockheed.com> <DCM9xw.442@unx.sas.com>
Date: Fri, 4 Aug 95 17:33:05 GMT
Lines: 25

Thanks, Dr. Sarle, for the references.  I have sent away to get copies.
You didn't comment on the other 3/4 of my 1968 report, namely, the 
Volterra polynomial representation.  I did formalize it later 
and published it in the statistical literature:
"Series Estimation of a Probability Density Function," D. F. Specht,
Technometrics, 13, 409-424 (May, 1971).
This is a general paper since nonparametric estimation of pdfs is 
basic to both regression and classification.  The technique of this
paper has both good and bad points.  It provides for one-pass learning 
with a fixed number of coefficients that does not grow with the size 
of the training database.  However, the number of cross products 
grows factorially with the dimensionality of the problem.  
It is therefore best suited to large databases of small dimensionality.  
With pruning of high-order cross products, it is also useful for 
problems of larger dimensionality.
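(As an illustration, not taken from the 1971 paper: if the polynomial
expansion is truncated at total degree r in d input variables, the
number of monomial terms, including all cross products, is the
binomial coefficient C(d + r, r).  A short Python sketch shows how
quickly the coefficient count grows with dimensionality:)

```python
from math import comb

def num_poly_terms(d, r):
    """Number of monomial terms (constant, linear, and all cross
    products) in a multivariate polynomial of total degree <= r
    in d variables: C(d + r, r)."""
    return comb(d + r, r)

# Term count for a fixed degree r = 4 as dimensionality d grows:
for d in (2, 5, 10, 20):
    print(d, num_poly_terms(d, 4))
```

(For degree 4, the count rises from 15 terms at d = 2 to 10626 terms
at d = 20, which is why pruning of high-order cross products matters
in higher dimensions.)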
Dr. Sarle, since you are evidently interested in the history of 
statistical techniques, I would like to ask you a favor.  Can you
tell me if this pdf estimation technique has been independently
invented either before or after my 1968 report or 1971 paper?  If
so, I will reference them in the future.  I am sending you a copy
of the paper by regular mail.
