The total variation class is defined as the set of distributions such that:

Gamma^{T}_{epsi}(p(x)) = { r(x) : | r(A) - p(A) | <= epsi for all events A }.

Almost all of the discussion for the constant density bounded class
carries over to the total variation class, since both classes are invariant to
marginalization but not to conditionalization.
We compute the upper bound through Lavine's algorithm. First, marginalize
p(x) to p^{e}(x) with any standard algorithm for Bayesian networks.
Now we can set up a linear programming problem:

max_{r^{e}} sum_{x in X} (u(x) - k) r^{e}(x)

subject to p^{e}(A) - epsi <= r^{e}(A) <= p^{e}(A) + epsi, for all events A.
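The linear programming step can be sketched in a few lines. This is a minimal illustration, assuming SciPy and a hypothetical three-state marginal p^{e}(x); note that when r^{e} is constrained to be a proper distribution, the constant k only shifts the objective by k, so the value of k that zeroes Lavine's objective is simply the maximum of sum_x u(x) r^{e}(x), which the sketch maximizes directly.

```python
# Sketch of the LP over the total variation class (hypothetical example).
import itertools

import numpy as np
from scipy.optimize import linprog

def upper_expectation_tv(p, u, eps):
    """Maximize sum_x u(x) r(x) over {r : |r(A) - p(A)| <= eps for all A}."""
    n = len(p)
    A_ub, b_ub = [], []
    for mask in itertools.product([0, 1], repeat=n):  # every event A
        ind = np.array(mask, dtype=float)
        A_ub.append(ind)
        b_ub.append(ind @ p + eps)    # r(A) <= p(A) + eps
        A_ub.append(-ind)
        b_ub.append(eps - ind @ p)    # r(A) >= p(A) - eps
    res = linprog(-np.asarray(u), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=np.ones((1, n)), b_eq=[1.0], bounds=[(0, 1)] * n)
    return -res.fun  # maximum of sum_x u(x) r(x)

p_e = np.array([0.5, 0.3, 0.2])   # hypothetical marginal p^{e}(x)
u = np.array([1.0, 0.0, 0.0])     # indicator of x_q = a (first state)
upper = upper_expectation_tv(p_e, u, 0.1)
```

Since there is one pair of constraints per event A, the constraint set grows as 2^n; this is exactly why the Monte Carlo procedure discussed below becomes necessary for large state spaces.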

For expected value calculations, maximization/minimization of the expectation
for u(x) = x_{q} can be performed directly. For calculation
of posterior probabilities, take u(x) = delta_{a}(x_{q}), where delta_{a}(x_{q}) is
one if x_{q} = a and zero otherwise. In this case
\overline{E}[u] = \overline{p}(x_{q} = a | e), the upper posterior
probability for x_{q}. The linear programming problem can be
solved in closed form [Wasserman1990]:

\overline{r}^{e}(x_{q} = a) = p^{e}(x_{q} = a) + epsi,

\underline{r}^{e}(x_{q} = a) = p^{e}(x_{q} = a) - epsi.
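A quick sanity check of the closed-form upper bound, on hypothetical numbers: the distribution that piles epsi extra mass onto x_{q} = a (taken from another value) stays inside the total variation class, so the bound p^{e}(x_{q} = a) + epsi is attained.

```python
# Verify that the closed-form upper bound is attained by a feasible r^{e}.
import itertools

import numpy as np

p_e = np.array([0.7, 0.2, 0.1])   # hypothetical marginal over values of x_q
eps = 0.1
a = 0                             # index of the value x_q = a

r = p_e.copy()
r[a] += eps                       # add epsi mass at x_q = a ...
r[1] -= eps                       # ... removed from another value

# check |r(A) - p^{e}(A)| <= epsi for every event A (every subset of values)
in_class = all(abs(np.array(m) @ (r - p_e)) <= eps + 1e-12
               for m in itertools.product([0, 1], repeat=len(p_e)))
```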

The linear programming problem above can become intractable if x
has too many variables. In this case a Monte Carlo sampling procedure can
be applied to the problem [Wasserman & Kadane1992]. Consider a sample of
N elements X_{j} drawn from p(x). The following expression converges
to the upper expectation of a function u(x):

epsi max_{x} u(x) + { Z_{0} }/{ N },

Z_{0} = sum_{l = epsi N}^{N} u_{(l)},

where u_{(l)} denotes the l-th smallest of the values u(X_{j}) (the order statistics of the sample).
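The estimator can be sketched on a hypothetical discrete marginal: sort the sampled values u(X_{j}), discard the lowest epsi*N of them in Z_{0}, and add epsi times the maximum of u. The states, probabilities, and seed below are illustrative assumptions.

```python
# Monte Carlo sketch of the estimator above [Wasserman & Kadane1992].
import numpy as np

rng = np.random.default_rng(0)
eps, N = 0.1, 100_000
states = np.array([0.0, 1.0, 2.0])   # hypothetical values of x_q
p_e = np.array([0.5, 0.3, 0.2])      # hypothetical marginal p^{e}(x)
u = lambda x: x                      # expected-value case: u(x) = x_q

sample = rng.choice(states, size=N, p=p_e)
vals = np.sort(u(sample))                 # order statistics u_(l)
z0 = vals[int(eps * N):].sum()            # Z_0 = sum_{l = epsi N}^{N} u_(l)
upper = eps * u(states).max() + z0 / N    # converges to the upper expectation
# exact upper expectation here: shift epsi mass from the lowest u to the
# highest u, giving 0.7 - 0.1*0.0 + 0.1*2.0 = 0.9
```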

© Fabio Cozman

Thu Jan 23 15:54:13 EST 1997