A density bounded class is the set of all distributions that lie between two bounding functions l() and u() [Lavine1991]:

\Gamma_{l,u} = \{ p(x) : l(x) \leq p(x) \leq u(x) \text{ for all } x \}.
A global neighborhood can be constructed with a sub-class of the density bounded class. Take a base Bayesian network with joint distribution p(x) and a constant k > 1, and consider the set of all joint distributions r(x) such that:

\Gamma_k(p(x)) = \{ r(x) : (1/k)\, p(x) \leq r(x) \leq k\, p(x) \text{ for all } x \}.

This is the constant density bounded class.
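As a concrete illustration (a hypothetical sketch, not part of the original text), the Python function below checks whether a candidate joint distribution lies in \Gamma_k(p(x)); both distributions are represented as arrays over the joint states of x, and the function name is my own.

import numpy as np

def in_constant_density_bounded_class(r, p, k, tol=1e-12):
    # True if (1/k) p(x) <= r(x) <= k p(x) for every joint state x.
    r = np.asarray(r, dtype=float)
    p = np.asarray(p, dtype=float)
    return bool(np.all(r >= p / k - tol) and np.all(r <= k * p + tol))

# Base distribution over three joint states, k = 2.
p = np.array([0.5, 0.3, 0.2])
print(in_constant_density_bounded_class([0.60, 0.25, 0.15], p, k=2.0))  # True
print(in_constant_density_bounded_class([0.05, 0.55, 0.40], p, k=2.0))  # False: 0.05 < 0.5/2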
The constant density bounded class is invariant under marginalization, but not under conditionalization [Wasserman & Kadane1992]. To obtain posterior bounds, we must resort to Lavine's algorithm [Cozman1996]. The algorithm brackets the value of the posterior upper expectation of a function u() by successive calculations of a prior upper expectation

\overline{E}_k[u] = \max_{r \in \Gamma_k(p(x))} \sum_x u(x)\, r(x).

The bracketing works because the posterior upper expectation given evidence e is the unique root \mu of the decreasing function \mu \mapsto \overline{E}_k[(u - \mu)\,\delta_e], so it can be located by bisection.
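The sketch below shows the bracketing loop only, under the assumption that a helper upper_exp(g), returning the prior upper expectation of an arbitrary function g over the class, is available; the names lavine_posterior_upper, upper_exp, and evidence_indicator are hypothetical, not from the source.

def lavine_posterior_upper(u, evidence_indicator, upper_exp, lo, hi, tol=1e-9):
    # Bracket the posterior upper expectation of u given evidence e by
    # bisection on mu.
    # upper_exp(g): prior upper expectation of g over the credal set.
    # evidence_indicator(x): 1.0 if x agrees with the evidence e, else 0.0.
    # [lo, hi]: initial bracket, e.g. the minimum and maximum of u.
    def g(mu):
        # Prior upper expectation of (u(x) - mu) * delta_e(x); this is
        # positive exactly when the posterior upper expectation exceeds mu.
        return upper_exp(lambda x: (u(x) - mu) * evidence_indicator(x))
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)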
To obtain \overline{E}_k[u], we use the marginalization-invariance property of the constant density bounded class. First marginalize p(x) to p_e(x_q) = p(x_q, e) with any standard algorithm for Bayesian networks. Now we can set up a linear programming problem with the constraints

(1/k)\, p_e(x_q) \leq r_e(x_q) \leq k\, p_e(x_q),

which must hold for every value of the query variable x_q.
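Because the constraints are simple box bounds, this linear program has an immediate solution: push r_e to its upper bound wherever the objective coefficient is nonnegative and to its lower bound elsewhere. A minimal sketch, assuming p_e is available as an array over the values of x_q (the function name is mine):

import numpy as np

def upper_expectation_box(g, pe, k):
    # Maximize sum over x of g(x) * r(x), subject to
    # (1/k) pe(x) <= r(x) <= k pe(x).
    # The maximizer takes the upper bound where g >= 0, the lower bound elsewhere.
    g = np.asarray(g, dtype=float)
    pe = np.asarray(pe, dtype=float)
    r = np.where(g >= 0.0, k * pe, pe / k)
    return float(np.dot(g, r))

The corresponding minimum is -upper_expectation_box(-g, pe, k), and with g tabulated as (u - \mu)\,\delta_e this function can serve as the prior upper expectation oracle in the bisection sketch above.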
For expected value calculations, maximization/minimization of the expectation of u(x) = x_q is easily performed. For calculation of a posterior probability, take u(x) = \delta_a(x_q), where \delta_a(x_q) is one if x_q = a and zero otherwise. In this case E[u] = p(x_q = a \mid e), the posterior probability of x_q = a. The linear programming problem can be solved in closed form [Wasserman1990]: the upper posterior probability is attained at

r_e(x_q = a) = k\, p_e(x_q = a), \qquad r_e(x_q = b) = (1/k)\, p_e(x_q = b) \text{ for } b \neq a,

and the lower posterior probability at

r_e(x_q = a) = (1/k)\, p_e(x_q = a), \qquad r_e(x_q = b) = k\, p_e(x_q = b) \text{ for } b \neq a.

The posterior bounds follow by normalizing these solutions.
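In code, the normalized bounds read as follows (a sketch under the assumptions above: pe is the array p_e(x_q) = p(x_q, e) produced by a standard Bayesian network algorithm, a is the index of the queried value, and posterior_bounds is a hypothetical name):

import numpy as np

def posterior_bounds(pe, a, k):
    # Closed-form lower/upper bounds on p(x_q = a | e) under Gamma_k,
    # obtained by normalizing the two optimizing solutions r_e above.
    pe = np.asarray(pe, dtype=float)
    rest = pe.sum() - pe[a]          # sum of p_e(x_q = b) over b != a
    upper = k**2 * pe[a] / (k**2 * pe[a] + rest)
    lower = pe[a] / (pe[a] + k**2 * rest)
    return lower, upper

# Example with p_e over three values of x_q and k = 1.5.
print(posterior_bounds([0.12, 0.05, 0.03], a=0, k=1.5))  # ~ (0.40, 0.77)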
The linear programming problem above can be intractable if x has too many variables. In this case a Monte Carlo sampling procedure can be applied to the problem [Wasserman & Kadane1992]. Consider a sample of p(x) with N elements X_j, and let u_{(1)} \leq u_{(2)} \leq \cdots \leq u_{(N)} denote the values u(X_j) sorted in increasing order. The following expression converges to the upper expectation of u():

\frac{1}{k} \frac{Z_1}{N} + k \frac{Z_2}{N}, \qquad \text{where } Z_1 = \sum_{l=1}^{Nk/(k+1)} u_{(l)} \text{ and } Z_2 = \sum_{l = Nk/(k+1)+1}^{N} u_{(l)}

(in practice the split index Nk/(k+1) is rounded to an integer).
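A sketch of the estimator (hypothetical names; the split index is rounded down): sort the sampled values of u, give the smallest Nk/(k+1) of them weight (1/k)/N, and give the remaining values weight k/N.

import numpy as np

def mc_upper_expectation(u_values, k):
    # Monte Carlo estimate of the upper expectation of u over Gamma_k,
    # from values u(X_j) at N samples X_j drawn from the base density p(x).
    u_sorted = np.sort(np.asarray(u_values, dtype=float))
    N = u_sorted.size
    m = int(N * k / (k + 1.0))    # split index N k / (k + 1), rounded down
    Z1 = u_sorted[:m].sum()       # smallest values, each weighted by 1/(k N)
    Z2 = u_sorted[m:].sum()       # largest values, each weighted by k/N
    return (Z1 / k + k * Z2) / N

# Illustrative use: u(x) = x with a standard normal base density and k = 2.
rng = np.random.default_rng(0)
print(mc_upper_expectation(rng.standard_normal(100_000), k=2.0))

The lower expectation is obtained as -mc_upper_expectation(-u_values, k).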
© Fabio Cozman, Thu Jan 23 15:54:13 EST 1997