For simplicity, this paper has concentrated on inferences. Decision making involves the calculation of expected utilities rather than inferences: given a utility function u(), a set of decision variables d is selected and the objective is to obtain bounds on expected utility for fixed values of d. When u(xq) = xq and there are no decision variables, the expected utility calculation reduces to the expected value of xq.
The framework presented in this paper extends to utility functions. The exact algorithms carry over directly: instead of calculating posterior marginals, calculate the solution of the MEU problem. Gradient-based search can also be adapted to utilities, although the QEM convergence proof may not apply when u() is negative. Lavine's algorithm, given as expression (6), is valid for any utility function u(xq).
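As a minimal sketch of the bounding step, consider a credal set given by finitely many extreme points: bounds on expected utility are obtained by enumerating the extreme points. The distributions, support, and utility below are illustrative assumptions, not taken from the paper.

```python
# Sketch: bounds on expected utility over a finite set of candidate
# distributions (toy stand-in for a credal set with finitely many
# extreme points; all numbers are illustrative assumptions).

def expected_utility(p, u):
    """E_p[u(x_q)] = sum_x p(x) u(x) for a discrete distribution p."""
    return sum(px * ux for px, ux in zip(p, u))

def utility_bounds(distributions, u):
    """Lower and upper expected utility over the candidate distributions."""
    values = [expected_utility(p, u) for p in distributions]
    return min(values), max(values)

# Toy example: x_q takes values 0, 1, 2; with u(x_q) = x_q the bounds
# are simply bounds on the expected value of x_q.
dists = [[0.5, 0.3, 0.2], [0.2, 0.5, 0.3], [0.1, 0.1, 0.8]]
u = [0.0, 1.0, 2.0]
lo, hi = utility_bounds(dists, u)
```

Because expected utility is linear in p, the extrema over the convex hull of these distributions are attained at the extreme points, so enumeration suffices here.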
Another useful measure for probabilistic inference is the variance of a variable xq, defined as V_p[xq] = E_p[xq^2] - (E_p[xq])^2 for a given probability distribution p(xq). Define the lower and upper variance respectively as the infimum and supremum of V_p[xq] as p() ranges over the candidate distributions.
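For a discrete variable and an explicit finite set of candidate distributions, these definitions can be evaluated by direct enumeration; the toy numbers below are illustrative assumptions.

```python
# Sketch: variance under a single distribution, and naive variance
# bounds by enumerating an explicit finite set of candidate
# distributions. All numbers below are illustrative assumptions.

def variance(p, values):
    """V_p[x_q] = E_p[x_q^2] - (E_p[x_q])^2 for a discrete distribution p."""
    mean = sum(px * v for px, v in zip(p, values))
    second = sum(px * v * v for px, v in zip(p, values))
    return second - mean * mean

def variance_bounds(distributions, values):
    """Min/max of V_p over an explicit finite set of distributions."""
    vs = [variance(p, values) for p in distributions]
    return min(vs), max(vs)

# x_q takes values 0, 1, 2 under three candidate distributions.
dists = [[0.5, 0.3, 0.2], [0.2, 0.5, 0.3], [0.1, 0.1, 0.8]]
xs = [0.0, 1.0, 2.0]
```

Note that this enumeration bounds the variance over the finite set itself; since V_p is not linear in p, extrema over a convex set of distributions need not occur at extreme points, which is why a dedicated algorithm is needed.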
To produce a convergent algorithm for the calculation of lower and upper variances, we can use Walley's variance envelope theorem [Walley 1991, Theorem G2], which demonstrates that

V[xq] = min_mu E[(xq - mu)^2].

The calculation of lower and upper variances then becomes a unidimensional optimization problem, which can be solved by discretizing mu (note that the minimizing mu must lie between the smallest and the largest value of xq).
The computational burden of this procedure is substantial, since for each value of mu it is necessary to obtain bounds on the expected value of u(xq) = (xq - mu)^2.
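The unidimensional optimization can be sketched as follows for a discrete toy example (the distributions and support below are illustrative assumptions). For a credal set given by finitely many extreme points, the lower and upper expectations of (xq - mu)^2 at each mu are the minimum and maximum over the extreme points, since that expectation is linear in p.

```python
# Sketch of the envelope optimization: discretize mu and, at each grid
# point, bound the expected value of (x_q - mu)^2 over the candidate
# distributions; then minimize over mu (Walley's envelope theorem).
# The toy distributions and support below are illustrative assumptions.

def expectation(p, values):
    """E_p[f] for a discrete distribution p over the given values of f."""
    return sum(px * v for px, v in zip(p, values))

def variance_bounds_envelope(distributions, values, n_grid=1000):
    """Approximate lower/upper variance via V[x_q] = min_mu E[(x_q - mu)^2].

    The lower (upper) variance is the minimum over mu of the lower
    (upper) expectation of (x_q - mu)^2; mu is discretized over the
    range of x_q, which contains the minimizing mean."""
    lo_x, hi_x = min(values), max(values)
    lower = upper = float("inf")
    for i in range(n_grid + 1):
        mu = lo_x + (hi_x - lo_x) * i / n_grid
        sq = [(v - mu) ** 2 for v in values]
        exps = [expectation(p, sq) for p in distributions]
        lower = min(lower, min(exps))  # lower envelope at this mu
        upper = min(upper, max(exps))  # upper envelope at this mu
    return lower, upper

# x_q in {0, 1, 2}; three extreme points of a toy credal set.
dists = [[0.5, 0.3, 0.2], [0.2, 0.5, 0.3], [0.1, 0.1, 0.8]]
xs = [0.0, 1.0, 2.0]
```

The bounds returned hold over the convex hull of the extreme points: since V_p is concave in p, the upper variance can exceed the largest variance among the extreme points themselves.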