Newsgroups: comp.ai.neural-nets,sci.stat.math
Path: cantaloupe.srv.cs.cmu.edu!bb3.andrew.cmu.edu!newsfeed.pitt.edu!gatech!newsfeed.internetmci.com!in1.uu.net!news.interpath.net!sas!mozart.unx.sas.com!saswss
From: saswss@hotellng.unx.sas.com (Warren Sarle)
Subject: Re: Cumulative prob. dist
Originator: saswss@hotellng.unx.sas.com
Sender: news@unx.sas.com (Noter of Newsworthy Events)
Message-ID: <Dp7nI1.EF7@unx.sas.com>
Date: Tue, 2 Apr 1996 01:15:37 GMT
X-Nntp-Posting-Host: hotellng.unx.sas.com
References: <315AB235.167EB0E7@cs.rpi.edu> <4jguoa$6ft@thetimes.pixel.kodak.com> <4jlclb$ek3@dfw-ixnews3.ix.netcom.com>
Organization: SAS Institute Inc.
Lines: 38
Xref: glinda.oz.cs.cmu.edu comp.ai.neural-nets:30839 sci.stat.math:9960


In article <4jlclb$ek3@dfw-ixnews3.ix.netcom.com>, jdadson@ix.netcom.com (Jive Dadson) writes:
|> 
|> How can I generate a cumulative probability distribution using
|> something like a neural network?
|> ...
|> Each training vector consists of a number of predictor variables
|> (on the order of a hundred), and a score. What I want from the
|> model is, given a vector of predictor variables and a hypothetical
|> score, what is the probability that the actual result will at least
|> equal the score? Or it would be okay if the model produced a vector
|> of about 50 outputs, each representing the cpdf value at a fixed
|> score. I could then interpolate if necessary.
|> 
|> I have figured out a rather involved way to get a neural net to
|> yield a probability density function, so I could do that and then
|> integrate. All the ways I've figured out to get it to yield up a
|> cpdf directly leave something to be desired.

Bishop, C.M. (1995). Neural Networks for Pattern Recognition,
Oxford: Oxford University Press. ISBN 0-19-853849-9 (hardback) or
0-19-853864-2 (paperback), discusses ways to get a neural net to
estimate a conditional probability density function. Integrating
a pdf would indeed be the most obvious way of getting a cdf.
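To make that concrete, here is a minimal sketch (not from Bishop's book; the function names and the standard-normal example density are my own illustration) of turning an estimated conditional pdf into a cdf by cumulative trapezoid integration over a grid of scores. In practice `normal_pdf` would be replaced by the network's density estimate at a given input vector.

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    # stand-in for a network's estimated conditional density p(y | x)
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def cdf_from_pdf(pdf, lo, hi, n=2000):
    # cumulative trapezoid-rule integration of pdf on [lo, hi]
    xs = [lo + i * (hi - lo) / n for i in range(n + 1)]
    cdf = [0.0]
    for i in range(1, n + 1):
        step = (xs[i] - xs[i - 1]) * (pdf(xs[i]) + pdf(xs[i - 1])) / 2.0
        cdf.append(cdf[-1] + step)
    return xs, cdf

xs, F = cdf_from_pdf(normal_pdf, -6.0, 6.0)
```

The probability the poster asks for, P(actual >= score), is then just 1 - F(score), with F interpolated between grid points. Because the integrand is a nonnegative pdf, the resulting cdf is automatically monotone and bounded, which sidesteps the constraint problem mentioned below.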

I can't think of any good way to estimate a conditional cdf directly
with so many inputs. But you _would_ need to impose range and
monotonicity constraints on the outputs, which are rather hard to
enforce with MLPs. Perhaps ALNs would be worth looking into. If Bill Armstrong
doesn't respond to this, look for "Adaptive Logic Network" in parts
5 and 6 of the FAQ.
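One way to see the constraint problem in the poster's own 50-output formulation (this is my illustration, not something from the post or the FAQ): give each output a fixed score threshold and train it with a sigmoid against a binary target, 1 if the actual score reached that threshold. The sigmoid handles the [0,1] range constraint per output, but nothing forces the 50 outputs to be monotone across thresholds, which is exactly the difficulty with MLPs noted above. The threshold grid on [0,1] below is an arbitrary assumption.

```python
# assumed: 50 fixed score thresholds on [0, 1]
thresholds = [k / 49.0 for k in range(50)]

def cdf_targets(actual_score):
    # target for output k is 1 if the actual score is at least threshold k;
    # the trained output k then estimates P(actual >= threshold_k | inputs)
    return [1.0 if actual_score >= t else 0.0 for t in thresholds]

targets = cdf_targets(0.5)
```

The training targets themselves are always monotone non-increasing in the threshold, but the fitted network outputs need not be; enforcing that across outputs is the hard part.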


-- 

Warren S. Sarle       SAS Institute Inc.   The opinions expressed here
saswss@unx.sas.com    SAS Campus Drive     are mine and not necessarily
(919) 677-8000        Cary, NC 27513, USA  those of SAS Institute.
