JOINT CREDAL SETS BY NATURAL EXTENSION

Robust inference in a Quasi-Bayesian network involves the solution of the following minimization for a queried variable xq:

  p(x_q = a | e) = \min \, \frac{\sum_{x \notin \{x_q, e\}} p_e(x)}{\sum_{x \notin e} p_e(x)} .
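As an illustration (the chain network and evidence here are hypothetical, used only to make the notation concrete), consider a chain x1 -> x2 -> x3 with query variable x1 and evidence e = {x3 = b}; the minimization then reads

  p(x_1 = a | x_3 = b) = \min \, \frac{\sum_{x_2} p(x_1 = a, x_2, x_3 = b)}{\sum_{x_1, x_2} p(x_1, x_2, x_3 = b)} ,

where the minimum is taken over the joint credal set produced by natural extension.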

The constraints imposed on this minimization are all linear since the local credal sets are polytopes. There are two types of constraints. First, there are linear constraints on the priors:

A p(xi) <= b,

where A is a matrix and b a vector of suitable dimensions. Second, there are linear constraints on the conditionals:

C p(xi|pa(xi)) <= d,   which can be rewritten as   C [ p(xi, pa(xi))/p(pa(xi)) ] <= d,

and, since p(pa(xi)) is nonnegative, multiplying through by it turns each constraint on a conditional into a linear constraint on the joint terms:

C p(xi, pa(xi)) - d p(pa(xi)) <= 0.
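For instance (a hypothetical bound, used only for illustration), an interval constraint such as p(xi = 1 | pa(xi) = π) >= 0.4 corresponds to a row of C with -1 in the entry for xi = 1 (and zeros elsewhere) and an entry -0.4 in d, and becomes the linear constraint

  -p(xi = 1, pa(xi) = π) + 0.4 p(pa(xi) = π) <= 0

on the joint terms; bounds on the priors of root nodes are already in linear form.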

Take the problem above to be embedded in the space of all possible terms of the joint distribution. In this space, the minimization is a linear fractional program: the minimization of a ratio of two linear functions subject to linear constraints [32]. To write down the program, we must run a clustering-type Bayesian inference algorithm in the network [21], leaving all nodes associated with credal sets in a single cluster. The probability values in this cluster are the coefficients of the linear fractional program.

The advantage of approaching natural extension from this point of view is that any linear fractional program can be converted into a linear program through a simple transformation [32], and standard algorithms exist for solving linear programs. This approach therefore produces exact robust inferences.
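One standard transformation of this kind is the Charnes-Cooper change of variables y = x/(d . x), t = 1/(d . x). The sketch below (Python with scipy.optimize.linprog; the function name and interface are illustrative, not the implementation used here) applies it to a generic problem of the form min (c . x)/(d . x) subject to A_ub x <= b_ub, optional A_eq x = b_eq and x >= 0, assuming d . x > 0 over the feasible set:

    import numpy as np
    from scipy.optimize import linprog

    def minimize_linear_fractional(c, d, A_ub, b_ub, A_eq=None, b_eq=None):
        # Minimize (c @ x) / (d @ x) over {x >= 0 : A_ub @ x <= b_ub, A_eq @ x == b_eq},
        # assuming d @ x > 0 everywhere on the feasible set (Charnes-Cooper transformation).
        c, d = np.asarray(c, float), np.asarray(d, float)
        A_ub, b_ub = np.asarray(A_ub, float), np.asarray(b_ub, float)
        n = c.size
        # Transformed variables: z = [y, t], with y = x / (d @ x) and t = 1 / (d @ x).
        obj = np.concatenate([c, [0.0]])                 # c @ y equals the original ratio
        Aub = np.hstack([A_ub, -b_ub.reshape(-1, 1)])    # A_ub @ y - b_ub * t <= 0
        bub = np.zeros(A_ub.shape[0])
        eq_rows = [np.concatenate([d, [0.0]])]           # normalization: d @ y = 1
        eq_rhs = [1.0]
        if A_eq is not None:
            A_eq, b_eq = np.asarray(A_eq, float), np.asarray(b_eq, float)
            eq_rows += list(np.hstack([A_eq, -b_eq.reshape(-1, 1)]))  # A_eq @ y - b_eq * t = 0
            eq_rhs += [0.0] * A_eq.shape[0]
        res = linprog(obj, A_ub=Aub, b_ub=bub,
                      A_eq=np.vstack(eq_rows), b_eq=np.asarray(eq_rhs),
                      bounds=(0, None))                  # y >= 0 and t >= 0
        y, t = res.x[:n], res.x[n]
        return res.fun, y / t                            # minimum of the ratio and a minimizing x

In the setting above, c and d would carry the cluster coefficients of the numerator and denominator sums, the constraint matrices would collect the linearized credal-set constraints, and the minimizing joint distribution is recovered as x = y/t.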

The disadvantage of this approach is that generating the coefficients in the cluster-based propagation step may be impractical for Quasi-Bayesian networks with a large number of credal sets, since the number of terms in the joint distribution is exponential in the number of variables. For a network with s credal sets, each associated with a node with m values, there are m^s variables in the linear program. On the other hand, the number of constraints may be much smaller: if there are M constraints for each node, there are only M·s constraints in the linear program. This asymmetry suggests resorting to the dual linear program, which has M·s variables and m^s constraints [1]. Even though the complexity of exact solutions remains the same, we can now use recent results in linear programming [9], which indicate that problems with a large number of constraints (compared to the number of variables) can be solved efficiently by discarding many redundant constraints. These techniques provide the solution for robust inferences with natural extension.
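To give a (hypothetical) sense of the asymmetry: with s = 20 binary nodes (m = 2) and M = 8 constraints per node, the primal program has 2^20 (about a million) variables but only 160 constraints, while its dual has 160 variables and about a million constraints, exactly the regime addressed by the constraint-discarding techniques just cited.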

