For simplicity, this paper has concentrated on inferences.
Decision making involves the calculation of expected utilities rather
than inferences, given a *utility* function u().
A set of *decision* variables d is selected and the objective is to
obtain bounds on the expected utility for fixed values of d.
If u(x_{q}) = x_{q} and there are no decision variables, the expected
utility calculation reduces to the expected value of x_{q}.

We can extend the framework presented in this paper to utility functions. The exact algorithms extend readily: instead of calculating posterior marginals, they calculate the solution of the MEU problem. Gradient-based search can also be adapted to utilities, although the QEM convergence proof may not apply if u() is negative. Lavine's algorithm, given as expression (6), is valid for any utility function u(x).
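As a concrete illustration of bounding expected utility, consider a finitely generated credal set described by its extreme points: since expected utility is linear in the distribution, its extrema over the set are attained at extreme points. The distributions, values, and function names below are illustrative assumptions, not taken from the paper.

```python
# Sketch: bounds on expected utility over a finitely generated credal set.
# The extreme points, values of x_q, and utility function are hypothetical.

def expected_utility(p, u, values):
    """Expected utility of u under a discrete distribution p on values."""
    return sum(p_i * u(x) for p_i, x in zip(p, values))

def utility_bounds(extreme_points, u, values):
    """Lower/upper expected utility; extrema occur at extreme points."""
    eus = [expected_utility(p, u, values) for p in extreme_points]
    return min(eus), max(eus)

# Hypothetical credal set over x_q in {0, 1, 2} with two extreme points.
values = [0, 1, 2]
extreme_points = [[0.5, 0.3, 0.2], [0.2, 0.3, 0.5]]
u = lambda x: x  # with u(x_q) = x_q this yields bounds on E[x_q]

lo, hi = utility_bounds(extreme_points, u, values)
```

With u(x_{q}) = x_{q}, the bounds returned are exactly the lower and upper expected values of x_{q}, matching the special case noted above.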

Another useful measure for probabilistic inference is the variance
of a variable x_{q}, defined as V_{p}[x_{q}] = E[x_{q}^{2}] - (E[x_{q}])^{2} for
a given probability distribution p(x_{q}).
Define the lower and upper variance respectively as:

__V__[x_{q}] = min_{p} V_{p}[x_{q}]

~~V~~[x_{q}] = max_{p} V_{p}[x_{q}]
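For concreteness, the quantity being bounded, V_{p}[x_{q}] for a single distribution p(x_{q}), can be computed directly from the definition above; the example distribution is an illustrative assumption.

```python
def variance(p, values):
    """V_p[x_q] = E[x_q^2] - (E[x_q])^2 for a discrete distribution p."""
    e1 = sum(p_i * x for p_i, x in zip(p, values))       # E[x_q]
    e2 = sum(p_i * x * x for p_i, x in zip(p, values))   # E[x_q^2]
    return e2 - e1 * e1

# Hypothetical distribution: uniform on {0, 2}, so V_p[x_q] = 1.
v = variance([0.5, 0.5], [0, 2])
```

The lower and upper variances are then the minimum and maximum of this quantity as p ranges over the credal set; note that, unlike expectation, V_{p}[x_{q}] is not linear in p, which is why the envelope theorem below is useful.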

To produce a convergent algorithm for the calculation of lower and upper
variances, we can use Walley's variance envelope theorem
[Walley1991, Theorem G2], which demonstrates that
__V__[x_{q}] = min_{mu} ( __E__[(x_{q} - mu)^{2}] )
and ~~V~~[x_{q}] = min_{mu} ( ~~E~~[(x_{q} - mu)^{2}] ) .
The calculation of lower and upper variances becomes a unidimensional
optimization problem, which can be solved by discretizing mu
(note that mu must be larger than zero and smaller than the square
of the largest value of x_{q}).
The computational burden of this procedure is heavy,
since for each value of mu it is necessary to obtain the bounds
on the expected value of u(x_{q}) = (x_{q} - mu)^{2}.
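A minimal sketch of this discretization, assuming a finitely generated credal set given by its extreme points; the helper names and the grid over [min x_{q}, max x_{q}] are illustrative choices (the minimizing mu is an expectation of x_{q} and so lies in that interval), not the paper's implementation.

```python
# Sketch: lower/upper variance via the envelope theorem, minimizing the
# lower/upper expectation of (x_q - mu)^2 over a discretized grid of mu.
# The credal set (list of extreme points) is a hypothetical input.

def expectation(p, values, f):
    """E_p[f(x_q)] for a discrete distribution p on values."""
    return sum(p_i * f(x) for p_i, x in zip(p, values))

def variance_bounds(extreme_points, values, n_grid=1000):
    lo_x, hi_x = min(values), max(values)
    grid = [lo_x + (hi_x - lo_x) * k / n_grid for k in range(n_grid + 1)]
    # Lower variance: min over mu of the lower expectation of (x_q - mu)^2.
    lower_v = min(
        min(expectation(p, values, lambda x: (x - mu) ** 2)
            for p in extreme_points)
        for mu in grid
    )
    # Upper variance: min over mu of the upper expectation of (x_q - mu)^2.
    upper_v = min(
        max(expectation(p, values, lambda x: (x - mu) ** 2)
            for p in extreme_points)
        for mu in grid
    )
    return lower_v, upper_v
```

Each inner bound on the expectation of (x_{q} - mu)^{2} is computed here by enumerating extreme points, which is where the heavy per-mu cost mentioned above arises; in a graphical model that step would instead invoke the inference algorithms of the paper.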

Tue Jan 21 15:59:56 EST 1997