
Constant density bounded global neighborhoods

 

A density bounded class is the set of all distributions such that [Lavine1991]

\Gamma_{B_{l,u}}(p(x)) = \{ r(x) : l(x) \le r(x) \le u(x) \},

where l(\cdot) and u(\cdot) are arbitrary non-negative functions such that \sum_x l(x) \le 1 and \sum_x u(x) \ge 1.

A global neighborhood can be constructed with a sub-class of the density bounded class. Take a base Bayesian network and a positive constant k>1. Consider the set of all joint distributions such that:

\Gamma_{B_k}(p(x)) = \{ r(x) : (1/k) \prod_i p_i \le r(x) \le k \prod_i p_i \}.

Call this class a constant density bounded class. We consider calculation of the upper bound; the lower bound can be obtained through similar methods.
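As a concrete illustration, here is a minimal Python sketch that enumerates the joint distribution \prod_i p_i of a small base network and prints the pointwise bounds defining \Gamma_{B_k}. The two-node network A -> B and its probability tables are hypothetical, chosen only to make the bounds tangible.

from itertools import product

# Hypothetical base Bayesian network A -> B with binary variables;
# the numbers below are illustrative only.
p_A = {0: 0.7, 1: 0.3}                        # p(A)
p_B_given_A = {0: {0: 0.9, 1: 0.1},           # p(B | A=0)
               1: {0: 0.2, 1: 0.8}}           # p(B | A=1)

k = 2.0  # class constant, k > 1

# Enumerate the joint p(a, b) = p(a) p(b | a) and the bounds that
# define the constant density bounded class Gamma_{B_k}.
for a, b in product([0, 1], [0, 1]):
    joint = p_A[a] * p_B_given_A[a][b]
    lower = joint / k          # (1/k) * prod_i p_i
    upper = k * joint          # k * prod_i p_i
    print(f"x=(A={a},B={b}): {lower:.4f} <= r(x) <= {upper:.4f}")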

The constant density bounded class is invariant under marginalization, but not under conditionalization [Wasserman & Kadane1992]. To obtain posterior bounds, we must resort to Lavine's algorithm [Cozman1996]. The algorithm brackets the value of the posterior upper expectation of u(\cdot) by successive calculations of a prior upper expectation:

\overline{E}_{\mu}[u] = \max_{r \in \Gamma_{B_k}} \left[ \sum_{\{x : x \in e\}} (u(x) - \mu)\, r(x) \right],

where \mu is a given real number (distinct from the class constant k) and the maximization is over all r(x) satisfying (1/k) \prod_i p_i \le r(x) \le k \prod_i p_i. The key observation is that \overline{E}_{\mu}[u] is zero only when \mu equals the posterior upper expectation. The algorithm starts at an arbitrary \mu and increases or decreases it until the expression above is zero.
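A minimal Python sketch of the bracketing step: it assumes a routine upper_prior_expectation(mu) (hypothetical, not part of the original text) that returns \overline{E}_{\mu}[u], and assumes u takes values in [0, 1], as in the posterior probability case, so the zero can be located by bisection.

def lavine_bracket(upper_prior_expectation, lo=0.0, hi=1.0, tol=1e-8):
    """Sketch of Lavine's bracketing scheme via bisection.

    Assumes u takes values in [lo, hi] (e.g. an indicator function),
    so the posterior upper expectation lies in [lo, hi].
    upper_prior_expectation(mu) is a hypothetical routine returning
    the prior upper expectation of (u - mu) restricted to the
    evidence; it is decreasing in mu and vanishes exactly at the
    posterior upper expectation.
    """
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        if upper_prior_expectation(mu) > 0:
            lo = mu   # the zero lies above mu
        else:
            hi = mu   # the zero lies at or below mu
    return 0.5 * (lo + hi)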

To obtain \overline{E}_{\mu}[u], we use the marginalization-invariance property of the constant density bounded class. First marginalize p(x) to p_e(x) with any standard algorithm for Bayesian networks. Now we can set up a linear programming problem:

\max \left[ \sum_{x} (u(x) - \mu)\, r_e(x) \right]

subject to

(1/k)\, p_e(x) \le r_e(x) \le k\, p_e(x),

where the constraints hold for every value x in the domain of the marginal.
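Since the constraints above are simple box constraints on each r_e(x), the maximizing r_e can be written down pointwise: take the upper bound k p_e(x) wherever the coefficient (u(x) - \mu) is positive and the lower bound (1/k) p_e(x) otherwise. A minimal Python sketch, with p_e and u represented as dictionaries keyed by the values of x (an illustrative representation, not from the original text):

def upper_prior_expectation(p_e, u, k, mu):
    """Solve the box-constrained linear program directly (a sketch).

    p_e : dict mapping each value x to the marginal p_e(x)
    u   : dict mapping each value x to u(x)
    k   : class constant (k > 1)
    mu  : current bracketing value
    """
    total = 0.0
    for x, p in p_e.items():
        coeff = u[x] - mu
        # Pick the bound that maximizes coeff * r_e(x).
        r = k * p if coeff > 0 else p / k
        total += coeff * r
    return total

Bound to fixed p_e, u, and k (for instance with functools.partial), this routine can serve as the upper_prior_expectation argument of the bracketing sketch given earlier.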

For expected value calculations, maximization/minimization of the expectation of u(x) = x_q can be performed directly. For calculation of posterior probabilities, take u(x) = \delta_a(x_q), where \delta_a(x_q) is one if x_q = a and zero otherwise; in this case E[u] = p(x_q = a \mid e), the posterior probability for x_q. The linear programming problem can then be solved in closed form, yielding the upper and lower bounds [Wasserman1990]:

\overline{r}_e(x_q = a) = \min\left\{ k\, p_e(x_q = a),\; 1 - (1/k) \sum_{x_q \neq a} p_e(x_q) \right\}

\underline{r}_e(x_q = a) = \max\left\{ (1/k)\, p_e(x_q = a),\; 1 - k \sum_{x_q \neq a} p_e(x_q) \right\}.
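The closed-form bounds translate directly into code. A minimal Python sketch, with the marginal p_e represented as a dictionary over the values of x_q (an assumed representation):

def posterior_probability_bounds(p_e, a, k):
    """Closed-form bounds on p(x_q = a | e) for the constant density
    bounded class (a sketch).

    p_e : dict mapping each value of x_q to the marginal p_e(x_q)
    a   : the value of interest
    k   : class constant (k > 1)
    """
    rest = sum(p for x, p in p_e.items() if x != a)  # sum over x_q != a
    upper = min(k * p_e[a], 1.0 - rest / k)
    lower = max(p_e[a] / k, 1.0 - k * rest)
    return lower, upper

# Example with an illustrative marginal over a three-valued x_q:
# posterior_probability_bounds({"a": 0.5, "b": 0.3, "c": 0.2}, "a", 2.0)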

The linear programming problem above can be intractable if x contains too many variables. In this case a Monte Carlo sampling procedure can be applied to the problem [Wasserman & Kadane1992]. Consider a sample of N elements X_j drawn from p(x), and let u_{(l)} denote the l-th smallest of the values u(X_j). The following expression converges to the upper expectation of u(\cdot):

(1/k)\, \frac{Z_1}{N} + k\, \frac{Z_2}{N},

where

Z_1 = \sum_{l=1}^{Nk/(k+1)} u_{(l)}

Z_2 = \sum_{l=Nk/(k+1)+1}^{N} u_{(l)}.
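A minimal Python sketch of this estimator: it sorts the sampled values u(X_j), gives the smaller Nk/(k+1) of them weight 1/(kN) and the remaining ones weight k/N, and returns the weighted average; the split index is rounded to an integer, which the text leaves implicit.

def monte_carlo_upper_expectation(u_values, k):
    """Monte Carlo estimate of the upper expectation (a sketch).

    u_values : list of u(X_j) evaluated on N samples X_j drawn from p(x)
    k        : class constant (k > 1)
    """
    n = len(u_values)
    ordered = sorted(u_values)                # u_(1) <= ... <= u_(N)
    split = int(round(n * k / (k + 1.0)))     # index N k / (k+1), rounded
    z1 = sum(ordered[:split])                 # Z_1: the smaller values
    z2 = sum(ordered[split:])                 # Z_2: the larger values
    return (z1 / k + k * z2) / n              # (1/k) Z_1/N + k Z_2/N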

