A density ratio class consists of all probability densities p(\cdot) bounded by measures l(\cdot) and u(\cdot), with l(A) \le u(A) for every event A [DeRobertis & Hartigan 1981]:

    \Gamma^R_{l,u} = \{ p(\cdot) : l(A) \le \alpha\, p(A) \le u(A) \text{ for some } \alpha > 0 \text{ and every event } A \}.
A global neighborhood can be constructed with a sub-class of the density ratio class. Take a base Bayesian network with joint density p(x) and a positive constant k > 1, and consider the set of all joint distributions r(x) such that, for some \alpha > 0:

    \Gamma^R_k(p(x)) = \{ r(x) : p(x)/k \le \alpha\, r(x) \le k\, p(x) \}.

Any r(x) in this class satisfies, for all events A and B:

    \frac{r(A)}{r(B)} \le k^2\, \frac{p(A)}{p(B)}.
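The k^2 factor follows by combining the upper constraint on A with the lower constraint on B:

    \frac{r(A)}{r(B)} = \frac{\alpha\, r(A)}{\alpha\, r(B)} \le \frac{k\, p(A)}{p(B)/k} = k^2\, \frac{p(A)}{p(B)}.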
Since the class is invariant under marginalization and conditionalization, the first step is to obtain p(x|e), the posterior marginal for the base distribution of the class. We can then set up a linear programming problem with constraints:
    r(x|e) \le k^2\, \frac{p(x|e)}{p(y|e)}\, r(y|e),
where x and y are arbitrary values in the domain of x (if the domain has n elements, there are n(n-1) inequalities). Maximizing the expectation of interest subject to these constraints and to the normalization \sum_x r(x|e) = 1 produces its upper bound; the lower bound is obtained by minimization.
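As an illustration, here is a minimal sketch of this linear program using scipy.optimize.linprog; the function name expectation_bounds and the array layout are ours, and the routine assumes the base posterior marginal p(x|e) has already been computed and is strictly positive:

    import numpy as np
    from scipy.optimize import linprog

    def expectation_bounds(p, u, k):
        """Bounds on E[u] over the density ratio class Gamma_k,
        given the base posterior marginal p = p(x|e)."""
        n = len(p)
        # One constraint  r_x - k^2 (p_x / p_y) r_y <= 0  per ordered pair (x, y).
        A_ub, b_ub = [], []
        for x in range(n):
            for y in range(n):
                if x != y:
                    row = np.zeros(n)
                    row[x] = 1.0
                    row[y] = -k**2 * p[x] / p[y]
                    A_ub.append(row)
                    b_ub.append(0.0)
        A_eq, b_eq = [np.ones(n)], [1.0]   # r(.|e) must sum to one
        c = np.asarray(u, dtype=float)
        lower = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                        bounds=(0, 1)).fun
        upper = -linprog(-c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                         bounds=(0, 1)).fun
        return lower, upper

The base marginal r = p(x|e) is always feasible, so the program is never empty; its feasible set is a compact subset of the probability simplex.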
For expected-value calculations, maximization/minimization of the expectation of u(x) = x_q can be easily performed. For calculation of posterior marginals, take u(x) = \delta_a(x_q), where \delta_a(x_q) is one if x_q = a and zero otherwise. In this case E[u] = p(x_q = a | e), the posterior probability for x_q. The linear programming problem can then be solved in closed form [Seidenfeld & Wasserman 1993]:
    \overline{r}(x_q = a | e) = \frac{k^2\, p(x_q = a | e)}{k^2\, p(x_q = a | e) + p(x_q = a^c | e)},

    \underline{r}(x_q = a | e) = \frac{p(x_q = a | e)}{p(x_q = a | e) + k^2\, p(x_q = a^c | e)}.
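In code, the closed-form bounds reduce to a couple of lines; this sketch uses our own function name marginal_bounds and writes the complement probability p(x_q = a^c | e) as 1 - p_a:

    def marginal_bounds(p_a, k):
        """Closed-form bounds on r(x_q = a | e) for the density
        ratio class Gamma_k; p_a is the base posterior p(x_q = a | e)."""
        upper = k**2 * p_a / (k**2 * p_a + (1.0 - p_a))
        lower = p_a / (p_a + k**2 * (1.0 - p_a))
        return lower, upper

For example, with p_a = 0.8 and k = 1.5 (so k^2 = 2.25), the posterior probability is bounded by the interval [0.64, 0.9].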
The linear programming problem above can be intractable if x has too many variables. In this case a Gibbs sampling procedure can be applied to the problem. Consider a sample of the posterior distribution p(x|e) with N elements X^j, which can be produced through Gibbs sampling techniques [York 1992]. The following expression converges to the upper expectation of a function u(\cdot) [Wasserman & Kadane 1992]:
    \overline{E}_N[u] = \max_{1 \le j \le N} \frac{(Z_0 - Z_j) + k^2\, Z_j}{(j - 1) + k^2\,(N - j + 1)},
    \quad \text{where} \quad
    Z_0 = \sum_j u(X^j), \qquad Z_j = \sum_{l \ge j} u^{(l)}.
The value u^{(l)}, used here and in the next sections, is the l-th value of u(X^j) when the N values are ordered from smallest to largest.
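A minimal sketch of this computation, assuming the posterior samples are already available (the function name upper_expectation and the cumulative-sum arrangement are ours):

    import numpy as np

    def upper_expectation(u_samples, k):
        """Estimate the upper expectation of u over Gamma_k from N
        posterior samples u(X^j), e.g. produced by Gibbs sampling."""
        u = np.sort(np.asarray(u_samples, dtype=float))   # u^(1) <= ... <= u^(N)
        N = len(u)
        Z0 = u.sum()                                      # Z_0 = sum_j u(X^j)
        # Z_j = sum_{l >= j} u^(l), for j = 1, ..., N.
        Zj = Z0 - np.concatenate(([0.0], np.cumsum(u[:-1])))
        j = np.arange(1, N + 1)
        # Samples at or above the threshold u^(j) receive weight k^2.
        candidates = ((Z0 - Zj) + k**2 * Zj) / ((j - 1) + k**2 * (N - j + 1))
        return candidates.max()

The lower expectation follows from the same routine, since the minimum of E[u] over the class equals -upper_expectation(-u_samples, k).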