We consider a set \(x\) of discrete variables. Each variable \(x_i\) has a finite set of values \(\Omega_{x_i}\). A Bayesian network defines a probability distribution through the expression [Pearl1988]:

\[ p(x) = \prod_i p(x_i \mid \mathrm{pa}(x_i)), \]

where \(\mathrm{pa}(x_i)\) denotes the set of parents of \(x_i\) in the network's directed acyclic graph. For any function \(f(\cdot)\), we use the abbreviation \(f_i\) for \(f(x_i \mid \mathrm{pa}(x_i))\).
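As a concrete illustration of this factorization, the following sketch evaluates the joint distribution of a hypothetical two-variable network A → B; the network, the tables, and all names are made up for illustration and do not come from the text:

    # A minimal sketch, assuming a hypothetical network A -> B with
    # made-up conditional probability tables.
    p_A = {0: 0.6, 1: 0.4}                     # p(A)
    p_B_given_A = {(0, 0): 0.9, (1, 0): 0.1,   # p(B=b | A=a), keyed by (b, a)
                   (0, 1): 0.3, (1, 1): 0.7}

    def joint(a, b):
        """p(A=a, B=b) as the product of the local conditionals."""
        return p_A[a] * p_B_given_A[(b, a)]

    # The products over all configurations sum to 1, as any distribution must.
    assert abs(sum(joint(a, b) for a in (0, 1) for b in (0, 1)) - 1.0) < 1e-12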
Suppose a set of variables is associated with evidence \(e = \{x_1 = a, x_3 = b, \ldots, x_n = k\}\). Write \(f_e(\cdot)\) for the function \(f(\cdot)\) with the values in \(e\) fixed. For example, \(p_e(x) = p(x_1 = a, x_2, x_3 = b, x_4, \ldots, x_n = k)\) for the evidence above. Notice that \(p_e(x)\) is not the conditional \(p(x \mid e)\), but the function \(p(x)\) with some variables fixed by \(e\).
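The restriction operation \(f_e\) can be expressed directly in code; the sketch below is an illustrative rendering under the assumption that functions take a dictionary mapping variable names to values, and all names are hypothetical:

    # A minimal sketch of the restriction f_e: fix the evidence values in
    # a function of all variables.
    def restrict(f, evidence):
        """Return f_e: the function f with the variables in `evidence` fixed."""
        def f_e(assignment):
            full = dict(assignment)
            full.update(evidence)   # evidence overrides any free value
            return f(full)
        return f_e

    # Usage: if p takes a dict over {x1, ..., xn}, then
    #     p_e = restrict(p, {'x1': 'a', 'x3': 'b'})
    # still takes x2, x4, ... as free arguments. Summing p_e over the free
    # variables yields p(e), which is why p_e(x) differs from p(x | e).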
Our algorithms assume efficient solutions for two standard problems in standard Bayesian networks: computation of posterior marginals and of the maximum a posteriori hypothesis. The calculation of posterior marginals for a queried variable can be done by a technique like variable elimination [Zhang & Poole1996] or bucket elimination [Dechter1996], or by a more sophisticated clustering method such as peeling [Cannings & Thompson1981] or graph triangulation [Jensen1996].
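To make the marginalization machinery concrete, the following sketch implements the two primitive operations these algorithms build on, factor multiplication and summing a variable out, in generic textbook form; the factor representation (a tuple of variable names plus a table keyed by value tuples) is our own illustrative choice, not the data structures of the cited methods:

    from itertools import product

    # A factor is (vars_tuple, {value_tuple: number}).
    def multiply(f, g, domains):
        """Pointwise product of two factors over the union of their variables."""
        vars_ = tuple(dict.fromkeys(f[0] + g[0]))   # ordered union of variables
        table = {}
        for key in product(*(domains[v] for v in vars_)):
            asg = dict(zip(vars_, key))
            table[key] = (f[1][tuple(asg[v] for v in f[0])]
                          * g[1][tuple(asg[v] for v in g[0])])
        return (vars_, table)

    def sum_out(factor, var):
        """Marginalize `var` out of a factor by summing over its values."""
        vars_, table = factor
        i = vars_.index(var)
        new_vars = vars_[:i] + vars_[i + 1:]
        new_table = {}
        for key, val in table.items():
            new_key = key[:i] + key[i + 1:]
            new_table[new_key] = new_table.get(new_key, 0.0) + val
        return (new_vars, new_table)

A posterior marginal is then obtained by repeatedly multiplying the factors that mention a non-query variable, summing that variable out, and normalizing the final factor.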
Another standard problem is the determination of the maximum a posteriori hypothesis (called MAP) [Dechter1996]. Suppose we select a set of variables \(d\) as decision variables, and the objective is to find values for the decision variables that maximize the marginal probability \(p(d)\). The notation \(p_{d,i}\) indicates the function \(p_i\) with the decision variables fixed. With this notation, the objective of the MAP algorithm is to obtain:

\[ \max_d p(d) = \max_d \sum_{x \setminus d} \prod_i p_{d,i}. \]
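A brute-force rendering of this objective may help fix the notation; the sketch below simply enumerates the decision assignments and marginalizes the remaining variables, whereas practical MAP algorithms interleave maximization and summation during elimination. All names (p_joint, domains, decision_vars) are illustrative assumptions:

    from itertools import product

    # A minimal sketch of MAP by exhaustive enumeration; feasible only for
    # small networks, and meant purely to illustrate the objective above.
    def map_by_enumeration(p_joint, domains, decision_vars):
        """Return argmax_d p(d), where p(d) sums the joint over the rest."""
        free = [v for v in domains if v not in decision_vars]
        best_d, best_val = None, float('-inf')
        for d in product(*(domains[v] for v in decision_vars)):
            fixed = dict(zip(decision_vars, d))
            # p(d): sum the restricted joint over the non-decision variables.
            p_d = sum(p_joint({**fixed, **dict(zip(free, rest))})
                      for rest in product(*(domains[v] for v in free)))
            if p_d > best_val:
                best_d, best_val = fixed, p_d
        return best_d, best_val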