
# Background

We consider a set x of discrete variables. Each variable x_i has a finite set of possible values. A Bayesian network defines a probability distribution through the expression [Pearl1988]:

p(x) = ∏_i p(x_i | pa(x_i)),     (1)

where pa(x_i) is a set of variables, the parents of variable x_i. Throughout, p() refers to a single probability distribution; q() refers to an arbitrary distribution; and r() refers to an arbitrary member of a set of distributions.

For any function f(), we use the abbreviation f_i for f(x_i | pa(x_i)) and f_i(a) for f(x_i = a | pa(x_i)). In this notation, expression (1) can be rewritten as p(x) = ∏_i p_i.
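The factorization in expression (1) can be sketched concretely. The two-variable network A → B below, with its conditional probability tables, is a hypothetical example; the joint is just the product of the local factors p(A) and p(B | A):

```python
# Minimal sketch of expression (1): p(x) = product of p(x_i | pa(x_i)).
# The network A -> B and its tables are hypothetical illustrations.

p_A = {0: 0.6, 1: 0.4}
# p(B = b | A = a), keyed by (b, a).
p_B_given_A = {(0, 0): 0.9, (1, 0): 0.1,
               (0, 1): 0.3, (1, 1): 0.7}

def joint(a, b):
    """p(A=a, B=b) = p(A=a) * p(B=b | A=a)."""
    return p_A[a] * p_B_given_A[(b, a)]

# Sanity check: the product of local factors sums to 1 over all configurations.
total = sum(joint(a, b) for a in (0, 1) for b in (0, 1))
```

The same pattern extends to any number of variables: one factor per variable, conditioned on its parents.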

Suppose a set of variables is associated with evidence e = { x_1 = a, x_3 = b, ..., x_n = k }. Write f_e() for the function f() with the variables in e fixed to their observed values. For example, p_e(x) = p(x_1 = a, x_2, x_3 = b, x_4, ..., x_n = k) for the evidence above. Note that p_e(x) is not the conditional p(x|e), but the function p(x) with some variables fixed at e.
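The distinction between p_e(x) and p(x|e) can be made concrete. In this sketch the joint over three binary variables is a hypothetical uniform table; clamping the evidence variables yields an unnormalized function of the remaining free variable, and summing it out gives p(e) rather than 1:

```python
import itertools

# Hypothetical uniform joint over three binary variables (x1, x2, x3).
joint = {x: 1.0 / 8 for x in itertools.product((0, 1), repeat=3)}

# Evidence e fixes x1 = 1 and x3 = 0; x2 remains free.
evidence = {"x1": 1, "x3": 0}

def p_e(x2):
    """p_e(x): the joint with x1 and x3 clamped to the evidence values."""
    return joint[(evidence["x1"], x2, evidence["x3"])]

# Summing p_e over the free variable gives p(e), not 1 -- p_e is unnormalized,
# unlike the conditional p(x | e) = p_e(x) / p(e).
prob_e = sum(p_e(x2) for x2 in (0, 1))  # = p(x1 = 1, x3 = 0)
```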

Our algorithms assume efficient solutions for two standard problems in standard Bayesian networks: computation of posterior marginals and determination of the maximum a posteriori hypothesis. The posterior marginal for a queried variable can be computed by a technique like variable elimination [Zhang & Poole1996] or bucket elimination [Dechter1996], or by a more sophisticated clustering method such as peeling [Cannings & Thompson1981] or graph triangulation [Jensen1996].
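A minimal variable-elimination sketch, on a hypothetical chain A → B → C with made-up tables: the marginal p(C) is obtained by pushing sums inside the product of local factors, eliminating A first and then B.

```python
# CPTs for the hypothetical chain A -> B -> C, keyed (child_value, parent_value).
p_A = {0: 0.6, 1: 0.4}
p_B_A = {(0, 0): 0.9, (1, 0): 0.1, (0, 1): 0.3, (1, 1): 0.7}
p_C_B = {(0, 0): 0.8, (1, 0): 0.2, (0, 1): 0.5, (1, 1): 0.5}

# Eliminate A: phi(b) = sum_a p(A=a) * p(B=b | A=a)
phi_B = {b: sum(p_A[a] * p_B_A[(b, a)] for a in (0, 1)) for b in (0, 1)}

# Eliminate B: p(C=c) = sum_b phi(b) * p(C=c | B=b)
p_C = {c: sum(phi_B[b] * p_C_B[(c, b)] for b in (0, 1)) for c in (0, 1)}
```

Each elimination step produces an intermediate factor (here phi) over the variables still connected to the eliminated one; the elimination ordering determines the size of these intermediate factors.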

Another standard problem is the determination of the maximum a posteriori hypothesis (called MAP) [Dechter1996]. Suppose we select a set of variables d as decision variables, and the objective is to find values for the decision variables such that we obtain the maximum value of the marginal probability p(d). The notation p^d_i indicates the function p_i with the decision variables fixed. With this notation, the objective of the MAP algorithm is to obtain:

max_d Σ_{x∖d} ( ∏_i p^d_i ).

There are algorithms that attack this problem in its most general form [Dechter1996]. A class of algorithms handles the case where the decision problem can be represented as an influence diagram [Jensen & Jensen1994].
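For small networks the MAP objective can be evaluated by brute force. The sketch below reuses a hypothetical network A → B with B as the single decision variable: for each candidate value of B, the product of factors is summed over the remaining variable A, and the maximizing value is kept.

```python
# Brute-force sketch of the MAP objective max_d sum_{x\d} prod_i p^d_i,
# for a hypothetical network A -> B with decision variable d = {B}.
p_A = {0: 0.6, 1: 0.4}
p_B_A = {(0, 0): 0.9, (1, 0): 0.1, (0, 1): 0.3, (1, 1): 0.7}

def marginal(b):
    """p(B=b): fix the decision B=b, then sum the product of factors over A."""
    return sum(p_A[a] * p_B_A[(b, a)] for a in (0, 1))

best = max((0, 1), key=marginal)  # argmax_d p(d)
```

Exhaustive enumeration is exponential in the number of decision variables; the algorithms cited above exploit the network structure to avoid it.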
