An $\epsilon$-contaminated class is characterized by a distribution $p(x)$ and a real number $\epsilon \in (0,1)$ [Berger1985]:
$$\Gamma_{C,\epsilon}(p(x)) = \left\{ (1-\epsilon)\, p(x) + \epsilon\, q(x) \;:\; q(x) \text{ an arbitrary distribution} \right\}.$$
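As a concrete illustration, here is a minimal Python sketch (the dict representation and the name `contaminate` are mine, not the paper's) that builds one member of this class by mixing the base distribution with an arbitrary contaminating distribution:

```python
def contaminate(p, q, eps):
    """One member of the class: (1 - eps) * p + eps * q.

    Distributions are dicts mapping network configurations (tuples of
    variable values) to probabilities; q may be any distribution.
    """
    assert 0.0 < eps < 1.0
    return {x: (1.0 - eps) * p[x] + eps * q.get(x, 0.0) for x in p}

# Example over two binary variables (x1, x2); the point mass q is one
# vertex of the contaminated class.
p = {(0, 0): 0.4, (0, 1): 0.3, (1, 0): 0.2, (1, 1): 0.1}
q = {(1, 1): 1.0}
p_tilde = contaminate(p, q, 0.1)
```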
The posterior expected value of $u(x)$ is:
$$E[u] = \frac{U(e)}{p(e)},$$
where
$$U(e) = \sum_{x \in e} u_e(x)\, p_e(x), \qquad p(e) = \sum_{x \in e} p_e(x),$$
and the sums run over the configurations $x$ that are consistent with the evidence $e$.
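Under the same representation as the sketch above (a hypothetical example; `posterior_expectation` and the evidence encoding are my choices), this posterior expected value can be computed by direct enumeration:

```python
def consistent(x, e):
    """True if configuration x agrees with the evidence e (index -> value)."""
    return all(x[i] == v for i, v in e.items())

def posterior_expectation(u, p, e):
    """E[u | e] = U(e) / p(e), summing over configurations consistent with e."""
    U_e = sum(u(x) * p[x] for x in p if consistent(x, e))
    p_e = sum(p[x] for x in p if consistent(x, e))
    return U_e / p_e

# Example: evidence x2 = 0 and utility u(x) = x1.
p = {(0, 0): 0.4, (0, 1): 0.3, (1, 0): 0.2, (1, 1): 0.1}
print(posterior_expectation(lambda x: x[0], p, {1: 0}))  # 0.2 / 0.6 = 1/3
```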
An $\epsilon$-contaminated class is a finitely generated convex set of distributions [Cozman1996]. The vertices of this set are the distributions obtained by concentrating the contaminating mass $\epsilon$ as a unit point mass on each one of the possible configurations of the network. The maximum and minimum posterior expected values of $u(x)$ occur at these vertices: the posterior expectation is a ratio of two linear functions of the distribution, and such a linear-fractional function attains its extrema at the vertices of a convex polytope.
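This vertex argument can be checked numerically. Below is a brute-force sketch under the same conventions as the sketches above (configurations as tuples, evidence as an index-to-value dict; the name `vertex_bounds` is mine) that sweeps every point-mass contamination and records the extreme posterior expectations:

```python
def vertex_bounds(u, p, e, eps):
    """Extrema of E[u | e] over the vertices (1 - eps) p + eps delta_{x'}.

    p maps configurations (tuples) to probabilities; e maps variable
    indices to observed values; u is a function of a configuration.
    """
    ok = lambda x: all(x[i] == v for i, v in e.items())
    vals = []
    for xp in p:  # one vertex per possible configuration x'
        pt = {x: (1 - eps) * p[x] + (eps if x == xp else 0.0) for x in p}
        num = sum(u(x) * pt[x] for x in pt if ok(x))
        den = sum(pt[x] for x in pt if ok(x))
        vals.append(num / den)
    return min(vals), max(vals)
```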
The upper expectation is:
$$\overline{E}[u] = \frac{(1-\epsilon)\, U(e) + \epsilon\, \overline{u}_e}{(1-\epsilon)\, p(e) + \epsilon},$$
where $\overline{u}_e = \max_{x \in e} u(x)$.
The same reasoning leads to the lower expectation:
$$\underline{E}[u] = \frac{(1-\epsilon)\, U(e) + \epsilon\, \underline{u}_e}{(1-\epsilon)\, p(e) + \epsilon},$$
where $\underline{u}_e = \min_{x \in e} u(x)$.
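These closed forms translate directly into code; a minimal sketch under the same conventions (the name `expectation_bounds` is mine, not the paper's):

```python
def expectation_bounds(u, p, e, eps):
    """Closed-form lower and upper posterior expectations for the
    epsilon-contaminated class, following the two formulas above."""
    ok = lambda x: all(x[i] == v for i, v in e.items())
    U_e = sum(u(x) * p[x] for x in p if ok(x))
    p_e = sum(p[x] for x in p if ok(x))
    u_vals = [u(x) for x in p if ok(x)]  # u over configurations in e
    den = (1 - eps) * p_e + eps
    lower = ((1 - eps) * U_e + eps * min(u_vals)) / den
    upper = ((1 - eps) * U_e + eps * max(u_vals)) / den
    return lower, upper

p = {(0, 0): 0.4, (0, 1): 0.3, (1, 0): 0.2, (1, 1): 0.1}
print(expectation_bounds(lambda x: x[0], p, {1: 0}, 0.1))  # (0.28125, 0.4375)
```

On small examples like this one, the closed forms agree with the brute-force vertex sweep above, which is a useful sanity check.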
Some special cases are important. When $u(x) = x_q$, then $\overline{E}[u] = \overline{E}[x_q]$, the upper expected value of the variable $x_q$. The most important special case is $u(x) = \delta_a(x_q)$, where $\delta_a(x_q)$ is one if $x_q = a$ and zero otherwise. In this case $\overline{E}[u] = \overline{p}(x_q = a \mid e)$, the upper posterior probability that $x_q = a$.
The posterior probability bounds are obtained from the previous expressions through simple substitutions:
$$\overline{p}(x_q = a \mid e) = \frac{(1-\epsilon)\, p(x_q = a, e) + \epsilon}{(1-\epsilon)\, p(e) + \epsilon},$$
$$\underline{p}(x_q = a \mid e) = \frac{(1-\epsilon)\, p(x_q = a, e)}{(1-\epsilon)\, p(e) + \epsilon},$$
where
$$p(x_q = a, e) = \sum_{x \in \{x_q = a,\, e\}} p_{\{x_q = a,\, e\}}(x).$$
Notice that standard Bayesian network algorithms can be used to produce the values of $p(x_q = a, e)$ and $p(e)$ required in these expressions [Cannings & Thompson 1981, Dechter 1996, Zhang & Poole 1996].
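As one concrete illustration (not code from the paper), the substituted bounds reduce to a short routine once $p(x_q = a, e)$ and $p(e)$ are available from any standard inference algorithm; all names here are my own:

```python
def probability_bounds(p_joint, p_evidence, eps):
    """Posterior bounds on p(x_q = a | e) under epsilon-contamination.

    p_joint = p(x_q = a, e) and p_evidence = p(e), both obtainable from
    a standard Bayesian network inference algorithm.
    """
    den = (1 - eps) * p_evidence + eps
    return ((1 - eps) * p_joint / den,          # lower bound
            ((1 - eps) * p_joint + eps) / den)  # upper bound

# Example with p(x_q = a, e) = 0.2, p(e) = 0.6, eps = 0.1; matches the
# expectation bounds computed earlier for u(x) = delta_1(x1):
print(probability_bounds(0.2, 0.6, 0.1))  # (0.28125, 0.4375)
```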