Robust inference in a Quasi-Bayesian network involves the solution of
the following minimization for a queried variable x_{q}:

__p__(x_{q} = a|e) =
min_{p} [ sum_{x not in {x_{q}, e}} p^{e}(x) /
sum_{x not in e} p^{e}(x) ].

The constraints imposed on this minimization are all linear, since the local credal sets are polytopes. There are two types of constraints. First, there are linear constraints on the prior distributions:

A p(x_{i}) <= b.

Second, there are linear constraints on the conditional distributions:

C p(x_{i}|pa(x_{i})) <= d,

where pa(x_{i}) denotes the parents of x_{i}. Writing the conditional probability as a ratio,

C [ p(x_{i}, pa(x_{i})) / p(pa(x_{i})) ] <= d,

and multiplying through by p(pa(x_{i})) yields the linear form

C p(x_{i}, pa(x_{i})) - d p(pa(x_{i})) <= 0.
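A conditional credal constraint becomes linear once it is multiplied through by the probability of the parent configuration. The following sketch (toy numbers of my own, for a single parent configuration) checks this equivalence numerically:

```python
import numpy as np

# Toy check that  C p(x_i | parents) <= d  holds exactly when the
# linearized form  C p(x_i, parents) - d p(parents) <= 0  holds,
# given p(parents) > 0.  All numbers here are illustrative.
rng = np.random.default_rng(0)
p_pa = 0.4                                 # p(parents), one configuration
p_cond = np.array([0.2, 0.3, 0.5])         # p(x_i | parents), 3 values
p_joint = p_cond * p_pa                    # p(x_i, parents)

C = rng.normal(size=(2, 3))                # arbitrary constraint matrix
d = C @ p_cond + np.array([0.05, -0.02])   # one row satisfied, one violated

conditional_form = C @ p_cond - d          # C p(x_i | parents) - d
linear_form = C @ p_joint - d * p_pa       # linearized version
# Both forms agree on which constraints are satisfied:
assert np.array_equal(conditional_form <= 0, linear_form <= 0)
```

The linear form is just the conditional form scaled by p(parents), so the sign of each constraint (and hence feasibility) is unchanged.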

Take the problem above to be embedded in the space of
all possible terms in the joint distribution. In this space, the
minimization problem is a *linear fractional program*: the minimization
of the ratio of two linear functions subject to linear constraints
[32]. To write down the program, we must run a
cluster-based Bayesian inference in the network [21],
keeping all nodes associated with credal sets in a single cluster.
The probability values in this cluster will be the coefficients
in the linear fractional program.

The advantage of approaching natural extension from this point of view is that any linear fractional program can be converted to a linear program through a simple transformation [32], for which standard algorithms exist. We are able to produce an exact robust inference through this approach.
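The transformation in question is the Charnes-Cooper substitution y = t p with t = 1/(denominator), which turns the ratio objective into a linear one. The sketch below is a minimal illustration with assumed numbers of my own: three joint terms, box-shaped credal constraints, and `scipy.optimize.linprog` as the LP solver.

```python
import numpy as np
from scipy.optimize import linprog

# Toy linear fractional program (illustrative, not from the paper):
# minimize (c.p)/(d.p) over joints p with sum(p) = 1 and l <= p <= u.
c = np.array([1.0, 0.0, 0.0])   # numerator coefficients, e.g. p(x_q = a, e)
d = np.array([1.0, 1.0, 0.0])   # denominator coefficients, e.g. p(e)
l = np.array([0.1, 0.2, 0.1])   # credal-set lower bounds on p
u = np.array([0.6, 0.7, 0.6])   # credal-set upper bounds on p
n = len(c)

# Charnes-Cooper: substitute y = t*p, t = 1/(d.p) > 0.  The objective
# becomes linear (c.y) and each constraint is multiplied through by t.
obj = np.concatenate([c, [0.0]])                  # LP variables are (y, t)
A_eq = np.vstack([
    np.concatenate([d, [0.0]]),                   # d.y = 1 (normalization)
    np.concatenate([np.ones(n), [-1.0]]),         # sum(p) = 1  ->  sum(y) = t
])
b_eq = np.array([1.0, 0.0])
A_ub = np.vstack([
    np.hstack([-np.eye(n), l[:, None]]),          # y >= l*t
    np.hstack([np.eye(n), -u[:, None]]),          # y <= u*t
])
b_ub = np.zeros(2 * n)
res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * (n + 1))
t = res.x[-1]
p_star = res.x[:n] / t    # recover the minimizing joint distribution
lower_prob = res.fun      # the minimum of the original ratio
```

For these toy numbers the minimum is attained at p = (0.1, 0.7, 0.2), giving a lower probability of 0.1/0.8 = 0.125.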

The disadvantage of this approach is that the generation of coefficients
in the cluster-based network propagation step may be impractical for
Quasi-Bayesian networks with a large number of credal sets. The number
of terms in the joint distribution is exponential in the number of
variables: for a network with s credal sets, each associated with
a node with m values, there are m^{s} variables in the linear program.
On the other hand, the number of constraints may be much smaller.
Suppose there are M constraints for each node; then there are only M s
constraints in the linear program.
This asymmetry suggests resorting to the dual linear program, in which
there are M s variables and m^{s} constraints [1].
Even though the complexity of exact solutions is unchanged, we can now
use recent results in the field of linear programming [9],
which indicate that problems with large numbers of constraints (compared
to the number of variables) can be efficiently solved by discarding
many redundant constraints. These techniques provide the solution
for robust inferences with natural extension.
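The primal/dual size asymmetry is easy to quantify; the counts below use assumed toy values of my own for s, m, and M:

```python
# Assumed toy sizes: s credal nodes, m values per node, M constraints each.
s, m, M = 20, 2, 5
primal_vars = m ** s          # one variable per joint term: 2**20 = 1048576
primal_cons = M * s           # only 100 constraints
dual_vars, dual_cons = primal_cons, primal_vars   # the dual swaps the roles
```

Even for 20 binary credal nodes, the primal already has over a million variables against a hundred constraints, which is why the dual formulation is attractive.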

Fri May 30 15:55:18 EDT 1997