
Constant density ratio global neighborhoods

 

A density ratio class consists of all probability densities r such that, for every event A [DeRobertis & Hartigan 1981]:

\Gamma^R_{l,u} = \{\, r : l(A) \le \alpha\, r(A) \le u(A) \,\},

where l and u are arbitrary positive measures such that l(\cdot) \le u(\cdot), and \alpha is some positive real number.

A global neighborhood can be constructed with a subclass of the density ratio class. Take a base Bayesian network with joint density \prod_i p_i and a constant k > 1. Consider the set of all joint distributions r(x) such that, for some \alpha:

\Gamma^R_k(p) = \left\{\, r(x) : \frac{1}{k} \prod_i p_i \;\le\; \alpha\, r(x) \;\le\; k \prod_i p_i \,\right\} .

Call this class a constant density ratio class. This class is invariant to marginalization and to the application of Bayes rule; in fact, it is the only class with both properties [Wasserman 1992a]. Another way to characterize this class is as the set of all distributions r(\cdot) that obey the following inequalities, valid for all events A and B:

\frac{r(A)}{r(B)} \;\le\; k^2\, \frac{p(A)}{p(B)} .
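The factor k^2 follows directly from the defining inequalities (a one-line derivation, not spelled out in the original text): since \alpha\, r(A) \le k\, p(A) and \alpha\, r(B) \ge (1/k)\, p(B),

\frac{r(A)}{r(B)} \;\le\; \frac{k\, p(A)/\alpha}{p(B)/(k\alpha)} \;=\; k^2\, \frac{p(A)}{p(B)} .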

Since the class is invariant to marginalization and conditionalization, the first step is to obtain p(x|e), the posterior marginal for the base distribution of the class. We can then set up a linear programming problem:

\max_r \; \sum_{x} u(x)\, r(x|e)

subject to

r(x|e) \;\le\; k^2\, \frac{p(x|e)}{p(y|e)}\, r(y|e),

where x and y are arbitrary elements of the domain of x (if the domain has n elements, there are n(n-1) inequalities). This procedure produces the upper bound; the lower bound is obtained by minimization.
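A minimal sketch of this linear program in Python, using scipy.optimize.linprog; the function name is mine, and the normalization constraint \sum_x r(x|e) = 1 is an assumption the text leaves implicit:

import numpy as np
from scipy.optimize import linprog

def density_ratio_expectation_bounds(p_post, u, k):
    """Bounds on E[u] over the constant density ratio class.

    p_post -- posterior marginal p(x|e) of the base network (length n)
    u      -- values u(x) of the function of interest (length n)
    k      -- class constant, k > 1
    """
    n = len(p_post)
    # One inequality per ordered pair (x, y), x != y:
    #   r(x|e) - k^2 * p(x|e)/p(y|e) * r(y|e) <= 0
    A_ub, b_ub = [], []
    for x in range(n):
        for y in range(n):
            if x != y:
                row = np.zeros(n)
                row[x] = 1.0
                row[y] = -(k ** 2) * p_post[x] / p_post[y]
                A_ub.append(row)
                b_ub.append(0.0)
    # r(.|e) must be a probability distribution (assumed normalization).
    A_eq, b_eq = [np.ones(n)], [1.0]
    bounds = [(0.0, 1.0)] * n
    c = np.asarray(u, dtype=float)
    lower = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                    bounds=bounds).fun
    # linprog minimizes, so negate u to obtain the maximum.
    upper = -linprog(-c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                     bounds=bounds).fun
    return lower, upper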

For expected value calculations, maximization/minimization of the expectation of u(x) = x_q can be performed directly. For calculation of posterior marginals, take u(x) = \delta_a(x_q), where \delta_a(x_q) is one if x_q = a and zero otherwise; then E[u] = p(x_q = a | e), the posterior probability for x_q. In this case the linear programming problem can be solved in closed form [Seidenfeld & Wasserman 1993]:

\overline{r}(x_q = a \,|\, e) \;=\; \frac{k\, p(x_q = a \,|\, e)}{k\, p(x_q = a \,|\, e) + p(x_q = a^c \,|\, e)} ,

\underline{r}(x_q = a \,|\, e) \;=\; \frac{p(x_q = a \,|\, e)}{p(x_q = a \,|\, e) + k\, p(x_q = a^c \,|\, e)} .
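These bounds are simple enough to compute directly; a small helper as a sketch (the names are mine):

def posterior_marginal_bounds(p, k):
    """Closed-form bounds for p = p(x_q = a | e) under constant k > 1."""
    upper = k * p / (k * p + (1.0 - p))
    lower = p / (p + k * (1.0 - p))
    return lower, upper

# Example: p = 0.8 and k = 1.5 give bounds of roughly (0.727, 0.857).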

The linear programming problem above can become intractable if x has too many variables. In this case a Gibbs sampling procedure can be applied to the problem. Consider a sample of N elements X^j from the posterior distribution p(x|e), which can be produced through Gibbs sampling techniques [York 1992]. The following expression converges to the upper expectation of a function u(\cdot) [Wasserman & Kadane 1992]:

\max_j \left( \frac{1}{1 + (1 - j/N)(k-1)} \left( \frac{(k-1)\, Z_j}{N} + \frac{Z_0}{N} \right) \right),

where

Z_0 = \sum_j u(X^j) , \qquad Z_j = \sum_{l \ge j} u^{(l)} .

The value u^{(l)}, used here and in the next sections, is the l-th value of u(X^j) when the N values are ordered from smallest to largest.
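A sketch of this estimator in Python; the function name is mine, and the samples X^j are assumed to come from a separate Gibbs sampler:

import numpy as np

def upper_expectation(u_values, k):
    """Estimate the upper expectation of u over the constant density
    ratio class from N posterior samples, via the max-over-j formula."""
    u_sorted = np.sort(np.asarray(u_values, dtype=float))  # u^{(1)} <= ... <= u^{(N)}
    N = len(u_sorted)
    Z0 = u_sorted.sum()
    best = -np.inf
    for j in range(1, N + 1):
        Zj = u_sorted[j - 1:].sum()  # Z_j = sum over l >= j of u^{(l)}
        val = ((k - 1.0) * Zj / N + Z0 / N) / (1.0 + (1.0 - j / N) * (k - 1.0))
        best = max(best, val)
    return best

# The lower expectation follows from the standard duality between lower
# and upper expectations over a fixed class:
# lower = -upper_expectation(-u_values, k)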

