A density bounded class is the set of all distributions such that [Lavine1991]

Gamma^{B}_{l,u}(p(x)) = { r(x) : l(x) <= r(x) <= u(x) },

where l(x) and u(x) are lower and upper bounding functions.

A global neighborhood can be constructed with a sub-class of the density bounded class. Take a base Bayesian network and a positive constant k>1. Consider the set of all joint distributions such that:

Gamma^{B}_{k}(p(x)) = { r(x) : (1/k) prod_{i} p_{i} <= r(x) <= k prod_{i} p_{i} },

where p_{i} = p(x_{i} | pa(x_{i})) are the conditional distributions of the base network.
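To make the bounds concrete, here is a minimal sketch that enumerates the bounds of the constant density bounded class for a small network. The two-node network A -> B, its CPT values, and k = 2 are all illustrative assumptions, not taken from the text.

```python
import itertools

# Hypothetical two-node network A -> B with binary variables;
# the CPT values below are illustrative.
p_a = {0: 0.7, 1: 0.3}                      # p(A)
p_b_given_a = {(0, 0): 0.9, (1, 0): 0.1,    # p(B | A): keys are (b, a)
               (0, 1): 0.2, (1, 1): 0.8}

k = 2.0  # the constant of the density bounded class, k > 1

# For every joint configuration x = (a, b), the class Gamma^B_k admits
# any r(x) with (1/k) prod_i p_i <= r(x) <= k prod_i p_i.
bounds = {}
for a, b in itertools.product([0, 1], repeat=2):
    joint = p_a[a] * p_b_given_a[(b, a)]    # base joint prod_i p_i
    bounds[(a, b)] = (joint / k, k * joint)

for x, (lo, hi) in sorted(bounds.items()):
    print(x, lo, hi)
```

Note that the bounds are per-configuration boxes around the base joint; a member of the class must also be a normalized distribution, which is what the linear programming formulation later enforces.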

The constant density bounded class is
invariant to marginalization, but not to conditionalization [Wasserman & Kadane1992].
To obtain posterior bounds, we must resort to Lavine's algorithm
[Cozman1996]. The algorithm brackets the
value of the posterior upper expectation of u() by successive
calculations of a *prior* upper expectation:

~~E~~_{k}[(u - mu) delta_{e}] = max_{r} sum_{x is in e} (u(x) - mu) r(x),

where delta_{e} is the indicator function of the evidence e. The posterior upper expectation ~~E~~_{k}[u | e] is the value of mu at which this prior upper expectation is zero.
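The bracketing can be sketched as a bisection on the constant mu: the prior upper expectation of (u - mu) over evidence-compatible configurations is decreasing in mu, and its root is the posterior upper expectation. In the sketch below, the base joint p, the function u, and the evidence set E are all made up, and the box-bounded maximization assumes the normalization slack can be absorbed by mass outside E (true here for moderate k).

```python
# Illustrative setup: base joint p over 4 configurations, a function u,
# and the set E of evidence-compatible configurations.
k = 2.0
p = {0: 0.1, 1: 0.2, 2: 0.3, 3: 0.4}
u = {0: 1.0, 1: 0.0, 2: 5.0, 3: 2.0}
E = {0, 2}

def prior_upper(mu):
    """Prior upper expectation of (u - mu) * indicator(E): put weight
    k*p(x) on positive terms, p(x)/k on negative terms (greedy solution
    of the box-bounded problem; normalization slack assumed feasible)."""
    total = 0.0
    for x in E:
        term = u[x] - mu
        total += term * (k * p[x] if term > 0 else p[x] / k)
    return total

# prior_upper is decreasing in mu; its root is the posterior upper
# expectation of u given the evidence.  Bracket it by bisection.
lo, hi = min(u.values()), max(u.values())
for _ in range(60):
    mid = (lo + hi) / 2
    if prior_upper(mid) > 0:
        lo = mid
    else:
        hi = mid
posterior_upper = (lo + hi) / 2
print(posterior_upper)
```

For this toy example the bisection agrees with direct enumeration of the posterior expectation over the box bounds.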

To obtain this prior upper expectation, we use the marginalization invariance of the constant density bounded class. First marginalize p(x) to p^{e}(x) with any standard algorithm for Bayesian networks.
Now we can set up a linear programming problem:

maximize sum_{x} (u(x) - mu) r^{e}(x)

subject to

(1/k) p^{e}(x) <= r^{e}(x) <= k p^{e}(x),    sum_{x} r^{e}(x) = 1,

where mu is the current bracketing constant in Lavine's algorithm.
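This linear program can be handed to any off-the-shelf solver. A minimal sketch with SciPy's `linprog`, assuming an illustrative four-valued marginal p^e, a made-up utility u, and a fixed bracketing constant mu:

```python
import numpy as np
from scipy.optimize import linprog  # assumes SciPy is available

# Illustrative marginal p^e over four configurations and a made-up
# utility u; mu is the current bracketing constant.
p_e = np.array([0.1, 0.2, 0.3, 0.4])
u = np.array([1.0, 0.0, 5.0, 2.0])
k, mu = 2.0, 2.5

# maximize sum_x (u(x) - mu) r^e(x)  ==  minimize the negated objective,
# subject to (1/k) p^e(x) <= r^e(x) <= k p^e(x) and sum_x r^e(x) = 1.
res = linprog(
    c=-(u - mu),
    A_eq=np.ones((1, len(p_e))), b_eq=[1.0],
    bounds=list(zip(p_e / k, k * p_e)),
    method="highs",
)
upper_value = -res.fun
print(res.x, upper_value)
```

Because the feasible region is a box intersected with the probability simplex, the optimum is attained at a vertex: one coordinate absorbs the slack while the rest sit at their bounds.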

For expected value calculations, maximization/minimization of the expectation of u(x) = x_{q} can be performed directly. For calculation of posterior probabilities, take u(x) = delta_{a}(x_{q}), where delta_{a}(x_{q}) is one if x_{q} = a and zero otherwise. In this case ~~E~~[u] = ~~p~~(x_{q} = a | e), the upper posterior probability for x_{q}. The linear programming problem can then be solved in closed form [Wasserman1990]:

~~r~~^{e}(x_{q} = a) = min( k p^{e}(x_{q} = a), 1 - (1/k) sum_{x_{q} != a} p^{e}(x_{q}) ),

__r__^{e}(x_{q} = a) = max( (1/k) p^{e}(x_{q} = a), 1 - k sum_{x_{q} != a} p^{e}(x_{q}) ).
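The closed-form bounds are straightforward to evaluate. A sketch, assuming a hypothetical three-valued marginal p^e(x_q) and k = 2 (both illustrative):

```python
# Closed-form upper/lower posterior bounds for the constant density
# bounded class, applied to an illustrative marginal p^e(x_q).
k = 2.0
p_e = {'a': 0.1, 'b': 0.3, 'c': 0.6}  # hypothetical p^e(x_q)

def upper(a):
    """min( k p^e(a), 1 - (1/k) sum_{x_q != a} p^e(x_q) )"""
    rest = sum(v for q, v in p_e.items() if q != a)
    return min(k * p_e[a], 1 - rest / k)

def lower(a):
    """max( (1/k) p^e(a), 1 - k sum_{x_q != a} p^e(x_q) )"""
    rest = sum(v for q, v in p_e.items() if q != a)
    return max(p_e[a] / k, 1 - k * rest)

for a in p_e:
    print(a, lower(a), upper(a))
```

The two arguments of each min/max correspond to the two ways the optimal solution can be clipped: either the bound on r^e(x_q = a) itself binds, or the normalization constraint forces the remaining values to their opposite bounds.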

The linear programming problem above can be intractable if x
has too many variables. In this case a Monte Carlo sampling procedure can
be applied to the problem [Wasserman & Kadane1992]. Consider a sample of
p(x) with N elements X_{j}. The following expression converges
to the upper expectation of a function u():

(1/k) Z_{1}/N + k Z_{2}/N,

Z_{1} = sum_{l=1}^{Nk/(k+1)} u_{(l)},

Z_{2} = sum_{l=Nk/(k+1)+1}^{N} u_{(l)},

where u_{(l)} denotes the l-th smallest of the sampled values u(X_{j}), and the split point Nk/(k+1) is rounded to an integer.
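The estimator amounts to sorting the sampled values of u and down-weighting the smallest Nk/(k+1) of them by 1/k while up-weighting the rest by k. A minimal sketch, with an illustrative base distribution (uniform over four configurations) and a made-up u:

```python
import random

# Monte Carlo sketch of the order-statistic estimator: the smallest
# Nk/(k+1) sampled values of u get weight 1/k, the rest get weight k.
random.seed(0)
k, N = 2.0, 30000

# Hypothetical base distribution p(x): uniform over {0, 1, 2, 3}.
samples = [random.randrange(4) for _ in range(N)]
u = {0: 1.0, 1: 0.0, 2: 5.0, 3: 2.0}

vals = sorted(u[x] for x in samples)   # u_(1) <= ... <= u_(N)
m = int(N * k / (k + 1))               # split point Nk/(k+1)
z1 = sum(vals[:m])                     # Z_1: smallest m values
z2 = sum(vals[m:])                     # Z_2: remaining values
upper_expectation = (1 / k) * z1 / N + k * z2 / N
print(upper_expectation)
```

The choice of split point makes the weights sum to one: (1/k)(m/N) + k(N - m)/N = 1 exactly when m = Nk/(k+1), so the estimator is a valid reweighting of the empirical distribution.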

Thu Jan 23 15:54:13 EST 1997