Expected utility and variance

 

For simplicity, this paper has concentrated on inferences. Decision making, given a utility function u(), involves the calculation of expected utilities rather than posterior inferences. A set of decision variables d is selected, and the objective is to obtain bounds on the expected utility for fixed values of d. If u(x_q) = x_q and there are no decision variables, the expected utility calculation reduces to the expected value of x_q.
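As a minimal sketch (not the paper's implementation), the following Python fragment computes bounds on expected utility when the credal set over x_q is finitely generated and represented by its extreme points; the function names and example numbers are purely illustrative. Because the expected utility is linear in p, its extrema over the convex set are attained at extreme points.

def expected_utility(p, values, u):
    """Expected utility E_p[u(x_q)] for a single distribution p."""
    return sum(pi * u(x) for pi, x in zip(p, values))

def expected_utility_bounds(vertices, values, u):
    """Lower and upper expected utility over the convex hull of `vertices`.

    Since E_p[u(x_q)] is linear in p, it suffices to evaluate it at the
    extreme points of the credal set.
    """
    eus = [expected_utility(p, values, u) for p in vertices]
    return min(eus), max(eus)

# Illustrative example: x_q takes values 0, 1, 2 and u(x_q) = x_q, so the
# bounds are the lower and upper expected value of x_q.
vertices = [[0.5, 0.3, 0.2], [0.2, 0.5, 0.3], [0.3, 0.2, 0.5]]
values = [0.0, 1.0, 2.0]
print(expected_utility_bounds(vertices, values, lambda x: x))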

The framework presented in this paper extends to utility functions. The exact algorithms carry over directly: instead of calculating posterior marginals, calculate the solution of the MEU problem. Gradient-based search can also be readily adapted to utilities; however, the QEM convergence proof may not apply if u() is negative. Lavine's algorithm, given as expression (6), is valid for any utility function u(x).

Another useful measure for probabilistic inference is the variance of a variable x_q, defined as V_p[x_q] = E_p[x_q^2] - (E_p[x_q])^2 for a given probability distribution p(x_q). Define the lower and upper variance respectively as:

\underline{V}[x_q] = \min_p V_p[x_q]

\overline{V}[x_q] = \max_p V_p[x_q]
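For concreteness, the sketch below (illustrative, not from the paper) computes V_p[x_q] directly from the definition and evaluates it over a finite collection of candidate distributions, such as the vertices of a credal set. Because V_p[x_q] is quadratic in p, this only gives an inner approximation of the interval between the lower and upper variance.

def variance(p, values):
    """V_p[x_q] = E_p[x_q^2] - (E_p[x_q])^2 for a single distribution p."""
    mean = sum(pi * x for pi, x in zip(p, values))
    second_moment = sum(pi * x * x for pi, x in zip(p, values))
    return second_moment - mean * mean

def variance_inner_bounds(candidates, values):
    """Naive inner approximation: evaluate V_p at a finite set of distributions."""
    vs = [variance(p, values) for p in candidates]
    return min(vs), max(vs)

candidates = [[0.5, 0.3, 0.2], [0.2, 0.5, 0.3], [0.3, 0.2, 0.5]]
print(variance_inner_bounds(candidates, [0.0, 1.0, 2.0]))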

Calculating bounds for variances in a Quasi-Bayesian network is challenging because the expression for V_p[x_q] is quadratic in p(x_q). The lower and upper variances can be approximated with generic optimization algorithms, such as gradient descent or simulated annealing, but convergence guarantees are lost.

To produce a convergent algorithm for the calculation of lower and upper variances, we can use Walley's variance envelope theorem [Walley 1991, Theorem G2], which shows that \underline{V}[x_q] = \min_\mu \underline{E}[(x_q - \mu)^2] and \overline{V}[x_q] = \min_\mu \overline{E}[(x_q - \mu)^2]. The calculation of lower and upper variances then becomes a unidimensional optimization problem, which can be solved by discretizing mu (note that mu must be larger than zero and smaller than the square of the largest value of x_q). The computational burden of this procedure is heavy, since for each value of mu it is necessary to obtain the bounds on the expected value of u(x_q) = (x_q - mu)^2.
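The following sketch illustrates this discretization procedure under the same assumption as above, namely a finitely generated credal set represented by its extreme points; all names and the grid choice are illustrative. For each mu on a grid, the bounds on the expected value of (x_q - mu)^2 are obtained by vertex enumeration (valid because the expectation is linear in p), and the minima over the grid approximate the lower and upper variance.

def expectation_bounds(vertices, values, u):
    """Lower/upper expectation of u(x_q); linear in p, so vertices suffice."""
    es = [sum(pi * u(x) for pi, x in zip(p, values)) for p in vertices]
    return min(es), max(es)

def variance_bounds(vertices, values, grid):
    """Approximate lower/upper variance via Walley's envelope theorem:
    minimize the lower (resp. upper) expectation of (x_q - mu)^2 over mu."""
    lowers, uppers = [], []
    for mu in grid:
        lo, up = expectation_bounds(vertices, values, lambda x: (x - mu) ** 2)
        lowers.append(lo)
        uppers.append(up)
    return min(lowers), min(uppers)

vertices = [[0.5, 0.3, 0.2], [0.2, 0.5, 0.3], [0.3, 0.2, 0.5]]
values = [0.0, 1.0, 2.0]
grid = [i * 0.01 for i in range(201)]   # mu discretized over [0, 2]
print(variance_bounds(vertices, values, grid))

The cost grows with the resolution of the grid, since each grid point requires a full pair of expectation bounds, which is the computational burden noted above.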

