
In this subsection we recast the robust inference problem as a parameter estimation problem. Consider a transformed Bayesian network with transparent variables z'_i. Each transparent variable z'_i takes values 1, 2, ..., |z'_i|. Suppose z'_i is a random variable with distribution theta_ij = p(z'_i = j). Call Theta the vector of all theta_ij.

Suppose x_q is queried; the objective is to find:

p(x_q = a | e) = max_Theta [ p(x_q = a, e) / p(e) ].

Notice that the optimization procedure must be repeated for each value of the queried variable.
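To make the objective concrete, here is a minimal sketch on a hypothetical two-node network with a single transparent variable (all names and numbers below are illustrative assumptions, not taken from the source). With one transparent variable, p(x_q = a, e)/p(e) is linear-fractional in theta, so its maximum over the simplex is attained at a vertex, i.e., at a degenerate theta:

```python
# Hypothetical toy network: a transparent variable z' with two values selects
# one of two distributions for a binary variable x; the evidence e is an
# observation y = 1 of a child of x.  All numbers are illustrative.
P_X_GIVEN_Z = [[0.3, 0.7], [0.6, 0.4]]   # P_X_GIVEN_Z[j][a] = p(x = a | z' = j)
P_E_GIVEN_X = [0.2, 0.9]                 # p(y = 1 | x = a)

def joint(theta, a):
    """p(x = a, e) under the mixing distribution theta over z'."""
    p_x = sum(t * P_X_GIVEN_Z[j][a] for j, t in enumerate(theta))
    return p_x * P_E_GIVEN_X[a]

def evidence(theta):
    """p(e) = sum over a of p(x = a, e)."""
    return sum(joint(theta, a) for a in range(2))

def upper_posterior(a):
    """max_Theta p(x = a, e) / p(e); with one transparent variable the
    ratio is linear-fractional in theta, so the maximum sits at a vertex."""
    vertices = [(1.0, 0.0), (0.0, 1.0)]
    return max(joint(v, a) / evidence(v) for v in vertices)
```

Calling upper_posterior once per value a of x_q mirrors the repetition of the optimization procedure noted above.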

To solve the robust inference problem, we must maximize the posterior log-likelihood for Theta:

L(Theta) = log [ p(x_q = a, e) / p(e) ] = log p(x_q = a, e) - log p(e).

This problem is similar to the problem of learning Bayesian network parameters Theta given data e. We are thus led to propose algorithms for robust inference that are based on the literature on learning Bayesian networks. Several interior-point algorithms exist in this category; here we present a few techniques properly adapted for robust inferences.
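As a sketch of this estimation-style view, the posterior log-likelihood L(Theta) can be climbed by simple gradient ascent under a softmax parameterization that keeps Theta inside the simplex. The network, step size, and finite-difference gradients below are illustrative assumptions, not the specific techniques presented in the following sections:

```python
import math

# Hypothetical toy network: a transparent variable z' with two values selects
# one of two distributions for binary x; the evidence e is y = 1.
P_X = [[0.3, 0.7], [0.6, 0.4]]   # P_X[j][a] = p(x = a | z' = j)
P_E = [0.2, 0.9]                 # p(e | x = a)

def softmax(w):
    """Map unconstrained weights w to a point theta in the simplex."""
    z = [math.exp(v) for v in w]
    s = sum(z)
    return [v / s for v in z]

def log_likelihood(theta, a):
    """L(Theta) = log p(x = a, e) - log p(e) for the toy network."""
    p_ae = [sum(t * P_X[j][b] for j, t in enumerate(theta)) * P_E[b]
            for b in range(2)]
    return math.log(p_ae[a]) - math.log(sum(p_ae))

def maximize_L(a, steps=500, eta=1.0, eps=1e-6):
    """Gradient ascent on L via finite differences in the softmax weights
    (a sketch; a real solver would use exact gradients)."""
    w = [0.0, 0.0]
    for _ in range(steps):
        base = log_likelihood(softmax(w), a)
        grad = []
        for j in range(len(w)):
            wp = list(w)
            wp[j] += eps
            grad.append((log_likelihood(softmax(wp), a) - base) / eps)
        w = [wj + eta * g for wj, g in zip(w, grad)]
    theta = softmax(w)
    return log_likelihood(theta, a), theta
```

In this toy case the ascent drives theta toward a vertex of the simplex, matching the observation that the maximizing distributions over transparent variables are degenerate.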

© Fabio Cozman

Fri May 30 15:55:18 EDT 1997