The total variation class is defined as the set of distributions such that:
$$\Gamma_{TV}(p(x)) = \left\{ r(x) : \; |r(A) - p(A)| \leq \epsilon \mbox{ for all events } A \right\}.$$
Almost all the discussion for the constant density bounded class carries over to the total variation class, since both classes are invariant to marginalization but not to conditionalization. The upper bound is computed through Lavine's algorithm. First marginalize p(x) to p_e(x) with any standard inference algorithm for Bayesian networks. Now we can set up a linear programming problem:
$$p_e(A) - \epsilon \leq r_e(A) \leq p_e(A) + \epsilon,$$

where A ranges over arbitrary events (sets of values of x).
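As an illustration, here is a minimal sketch of this linear program for a single discrete query variable, maximizing the expectation of a function u over the class. The marginal p_e, the function u, the value of \epsilon and the use of scipy are all assumptions for the example, and enumerating every event is only feasible for variables with few values:

```python
# Sketch only: p_e, u and epsilon below are made-up example inputs.
from itertools import combinations
import numpy as np
from scipy.optimize import linprog

def upper_expectation_lp(p_e, u, epsilon):
    """Maximize E_r[u] subject to |r(A) - p_e(A)| <= epsilon for all events A."""
    n = len(p_e)
    A_ub, b_ub = [], []
    # One pair of inequalities per event A (every nonempty subset of values).
    for size in range(1, n + 1):
        for A in combinations(range(n), size):
            ind = np.zeros(n)
            ind[list(A)] = 1.0
            pA = ind @ p_e
            A_ub.append(ind);  b_ub.append(pA + epsilon)   # r(A) <= p_e(A) + eps
            A_ub.append(-ind); b_ub.append(epsilon - pA)   # r(A) >= p_e(A) - eps
    # linprog minimizes, so negate u to maximize the expectation.
    res = linprog(-np.asarray(u), A_ub=np.asarray(A_ub), b_ub=b_ub,
                  bounds=[(0, None)] * n)
    return -res.fun

p_e = np.array([0.2, 0.5, 0.3])    # marginal p_e(x_q) over three values
print(upper_expectation_lp(p_e, u=[0.0, 1.0, 2.0], epsilon=0.1))
```

Note that the event containing all values of the variable keeps the total mass of r_e within \epsilon of that of p_e, so no separate normalization constraint is needed.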
For expected value calculations, maximization/minimization of the expectation of u(x) = x_q can then be easily performed. For calculation of posteriors, take u(x) = \delta_a(x_q), where \delta_a(x_q) is one if x_q = a and zero otherwise; in this case E[u] = p(x_q = a | e), the posterior probability for x_q. The linear programming problem can be solved in closed form [Wasserman1990]:
$$\overline{r}_e(x_q = a) = \frac{p_e(x_q = a) + \epsilon}{p_e(x_q = a) + \epsilon + \max(0, \, p_e(x_q \neq a) - \epsilon)},$$

$$\underline{r}_e(x_q = a) = \frac{\max(0, \, p_e(x_q = a) - \epsilon)}{\max(0, \, p_e(x_q = a) - \epsilon) + p_e(x_q \neq a) + \epsilon}.$$

The upper bound is attained by raising r_e(x_q = a) as much as the constraints allow while lowering the mass on all other values; the lower bound is attained by the symmetric choice.
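A small numerical sketch of these closed-form bounds, assuming p_e is stored as a vector indexed by the values of x_q (the vector and \epsilon below are made up for illustration):

```python
# Sketch only: p_e holds the marginalized values p_e(x_q = a).
import numpy as np

def posterior_bounds(p_e, a, epsilon):
    """Closed-form lower/upper posterior probability of x_q = a
    under the total variation class."""
    p_a = p_e[a]                  # p_e(x_q = a)
    p_rest = p_e.sum() - p_a      # p_e(x_q != a)
    hi = (p_a + epsilon) / (p_a + epsilon + max(0.0, p_rest - epsilon))
    num = max(0.0, p_a - epsilon)
    lo = num / (num + p_rest + epsilon)
    return lo, hi

p_e = np.array([0.06, 0.10, 0.04])   # e.g. p(x_q, e) for three values of x_q
print(posterior_bounds(p_e, a=1, epsilon=0.02))
```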
The linear programming problem above can be intractable if x has too many variables. In this case a Monte Carlo sampling procedure can be applied to the problem [Wasserman & Kadane1992]. Consider a sample from p(x) with N elements x^j. The following expression converges to the upper expectation of a function u():
$$\overline{E}_N[u] = \epsilon \, u_{(N)} + \frac{1}{N} \sum_{l = \lceil \epsilon N \rceil + 1}^{N} u_{(l)},$$

where u_{(1)} \leq \cdots \leq u_{(N)} are the sampled values u(x^j) sorted in increasing order: the lowest \epsilon N values are discarded and their mass is moved to the largest sampled value.
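The estimator takes a few lines to implement; the sampler and the choice u(x) = x below are placeholders for whatever sampling algorithm and function of interest are at hand:

```python
# Sketch only: `samples` stands for N draws x^j from p(x).
import numpy as np

def upper_expectation_mc(u_values, epsilon):
    """Estimate the upper expectation under the total variation class
    from sampled values u(x^j): drop the lowest epsilon*N values and
    move their mass to the sample maximum."""
    u_sorted = np.sort(u_values)       # u_(1) <= ... <= u_(N)
    N = len(u_sorted)
    k = int(np.ceil(epsilon * N))      # number of low values to discard
    return epsilon * u_sorted[-1] + u_sorted[k:].sum() / N

rng = np.random.default_rng(0)
samples = rng.normal(size=10_000)      # stand-in for draws from p(x)
print(upper_expectation_mc(samples, epsilon=0.05))  # here u(x) = x
```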