
Total variation class

 

The total variation class is defined as the set of distributions such that:

\Gamma_T(p(x)) = \{\, r(x) : \forall A, \; |p(A) - r(A)| \le \epsilon \,\}.
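For a finite domain, the supremum over events A equals half the L1 distance between the probability vectors, so membership in the class can be checked directly. Below is a minimal Python sketch of such a check; the function name and example values are illustrative and not part of the original text.

import numpy as np

def in_total_variation_class(p, r, eps):
    # sup_A |p(A) - r(A)| equals half the L1 distance for finite distributions.
    p = np.asarray(p, dtype=float)
    r = np.asarray(r, dtype=float)
    return 0.5 * np.abs(p - r).sum() <= eps

# Example: a small perturbation of a three-point distribution.
print(in_total_variation_class([0.5, 0.3, 0.2], [0.45, 0.35, 0.2], eps=0.1))  # True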

Almost all the discussion for the constant density bounded class carries over to the total variation class, since both classes are invariant under marginalization but not under conditionalization. We compute the upper bound through Lavine's algorithm. First, marginalize p(x) to p_e(x) with any standard algorithm for Bayesian networks. Now we can set up a linear programming problem:

\max \sum_{x \in X} (u(x) - k)\, r_e(x)

subject to

p_e(A) - \epsilon \le r_e(A) \le p_e(A) + \epsilon,

where A ranges over arbitrary subsets of the domain of X.
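As an illustration, for a small discrete domain the linear program can be set up explicitly by enumerating all nonempty subsets A. The sketch below uses scipy.optimize.linprog; the names p_e, u, k, and eps are placeholders matching the symbols above, and the function itself is ours, not from the original text.

from itertools import combinations

import numpy as np
from scipy.optimize import linprog

def upper_bound_lp(p_e, u, k, eps):
    # Maximize sum_x (u(x) - k) r_e(x) subject to
    # p_e(A) - eps <= r_e(A) <= p_e(A) + eps for every subset A.
    n = len(p_e)
    p_e = np.asarray(p_e, dtype=float)
    A_ub, b_ub = [], []
    for size in range(1, n + 1):
        for subset in combinations(range(n), size):
            ind = np.zeros(n)
            ind[list(subset)] = 1.0
            pA = float(ind @ p_e)
            A_ub.append(ind)
            b_ub.append(pA + eps)    # r_e(A) <= p_e(A) + eps
            A_ub.append(-ind)
            b_ub.append(eps - pA)    # -r_e(A) <= eps - p_e(A)
    c = -(np.asarray(u, dtype=float) - k)  # linprog minimizes, so negate
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0.0, 1.0)] * n)
    return -res.fun

The number of constraints grows as 2^n, which is exactly the intractability discussed below; the sketch is only meant for small domains.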

For expected value calculations, maximization/minimization of the expectation of u(x) = x_q can be performed directly. For calculation of posteriors, take u(x) = \delta_a(x_q), where \delta_a(x_q) is one if x_q = a and zero otherwise; in this case E[u] = p(x_q = a | e), the posterior probability of x_q. The linear programming problem can then be solved in closed form [Wasserman 1990]:

\overline{r}_e(x_q = a) = \min\{ 1,\; p_e(x_q = a) + \epsilon \},

\underline{r}_e(x_q = a) = \max\{ 0,\; p_e(x_q = a) - \epsilon \}.
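In code, the closed-form solution amounts to clipping the marginal value. A trivial sketch follows, where p_e_value stands for the marginal p_e(x_q = a) computed by a standard Bayesian network algorithm:

def posterior_bounds(p_e_value, eps):
    # Closed-form lower/upper values of r_e(x_q = a) in the total variation class.
    return max(0.0, p_e_value - eps), min(1.0, p_e_value + eps)

# Example: p_e(x_q = a) = 0.7 with eps = 0.1 gives bounds (0.6, 0.8).
print(posterior_bounds(0.7, 0.1))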

The linear programming problem above can be intractable if X has too many variables. In this case a Monte Carlo sampling procedure can be applied [Wasserman & Kadane 1992]. Consider a sample of p(x) with N elements X_j. The following expression converges to the upper expectation of a function u(·):

\epsilon \max_x u(x) + \frac{Z_0}{N},

where

Z_0 = \sum_{l = \epsilon N}^{N} u_{(l)},

and u_{(l)} denotes the l-th smallest value among the sampled utilities u(X_1), ..., u(X_N).
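A sketch of this estimator follows; sample_from_p and u are hypothetical placeholders for a Bayesian network sampler and a utility function, and the sample maximum is used as a stand-in for max_x u(x).

import numpy as np

def upper_expectation_mc(sample_from_p, u, eps, N=10000, seed=None):
    rng = np.random.default_rng(seed)
    values = np.sort(np.array([u(sample_from_p(rng)) for _ in range(N)]))
    cutoff = int(np.ceil(eps * N))   # discard the lowest eps*N sampled utilities
    Z0 = values[cutoff:].sum()       # Z_0: sum of the remaining order statistics u_(l)
    return eps * values[-1] + Z0 / N

The discarded eps*N mass is what the total variation class allows to move to the maximum of u, which is why eps * max u appears as the first term.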


