An epsi-contaminated class is characterized by a distribution p(x) and a real number epsi in (0,1) [Berger1985]:

Gamma^{C}_{epsi}(p(x)) = { (1-epsi) p(x) + epsi q(x) : q(x) an arbitrary distribution }.
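As a concrete illustration, members of the class are mixtures of p(x) with arbitrary contaminating distributions. A minimal Python sketch, assuming a finite configuration space with distributions stored as probability lists (all names are illustrative, not from the source):

```python
# Members of the epsi-contaminated class Gamma^C_epsi(p):
# every mixture (1 - epsi) p(x) + epsi q(x), with q an arbitrary distribution.

def contaminate(p, q, epsi):
    """Return the member (1 - epsi) p + epsi q of the class."""
    return [(1 - epsi) * pi + epsi * qi for pi, qi in zip(p, q)]

p = [0.5, 0.3, 0.2]   # base distribution p(x)
q = [0.0, 1.0, 0.0]   # contaminating distribution q(x) (here a point mass)
pc = contaminate(p, q, 0.1)
print(pc)             # still a valid distribution: entries sum to 1
```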

The posterior expected value for u(x) is:

E[u] = { U(e) }/{ p(e) }

where:
U(e) = sum_{x is in e} u^{e}(x) p^{e}(x)

p(e) = sum_{x is in e} p^{e}(x)
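The two sums above translate directly into code. A hedged sketch, assuming the joint distribution is given as a dict from configurations to probabilities and the evidence e is represented by a compatibility predicate (this representation is illustrative, not from the source):

```python
# Posterior expectation E[u] = U(e) / p(e):
#   U(e) = sum of u(x) p(x) over configurations x consistent with e,
#   p(e) = sum of p(x) over those same configurations.

def posterior_expectation(joint, u, consistent):
    U_e = sum(u(x) * px for x, px in joint.items() if consistent(x))
    p_e = sum(px for x, px in joint.items() if consistent(x))
    return U_e / p_e

# Toy joint over pairs (x_q, x_o); the evidence e fixes x_o = 1.
joint = {(0, 0): 0.2, (0, 1): 0.3, (1, 0): 0.1, (1, 1): 0.4}
e = lambda x: x[1] == 1
print(posterior_expectation(joint, lambda x: x[0], e))  # 0.4 / 0.7
```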

An epsi-contaminated class is a finitely generated convex set of distributions [Cozman1996]. The vertices of this set are obtained by concentrating the contaminating distribution q(x) as a point mass on each one of the possible configurations of the network. The maximum and minimum expected values for u(x) occur at these vertices, since we are optimizing a ratio of linear functions over a convex set, and such linear-fractional functions attain their extrema at vertices.

The upper expectation is:

~~E~~[u] =
{ (1-epsi) U(e) + epsi ~~u~~^{e} }/{ (1-epsi) p(e) + epsi },

where ~~u~~^{e} = max_{x is in e} u(x).

The same reasoning leads to the lower expectation:

__E__[u] =
{ (1-epsi) U(e) + epsi __u__^{e} }/{ (1-epsi) p(e) + epsi },

where __u__^{e} = min_{x is in e} u(x).
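These closed forms can be cross-checked against the vertex argument: enumerating point-mass contaminations and taking the extreme posterior expectations reproduces the same interval. A Python sketch under an illustrative representation (dict-based joint, evidence as a predicate; names are not from the source):

```python
# Upper/lower posterior expectations under epsi-contamination.
# Closed form: ((1-epsi) U(e) + epsi u_ext) / ((1-epsi) p(e) + epsi),
# with u_ext the max (upper) or min (lower) of u over configurations in e.

def expectation_bounds(joint, u, consistent, epsi):
    U_e = sum(u(x) * px for x, px in joint.items() if consistent(x))
    p_e = sum(px for x, px in joint.items() if consistent(x))
    u_max = max(u(x) for x in joint if consistent(x))
    u_min = min(u(x) for x in joint if consistent(x))
    denom = (1 - epsi) * p_e + epsi
    return (((1 - epsi) * U_e + epsi * u_min) / denom,
            ((1 - epsi) * U_e + epsi * u_max) / denom)

def bounds_by_vertices(joint, u, consistent, epsi):
    """Brute force: posterior expectation at every vertex (1-epsi) p + epsi delta_x*."""
    vals = []
    for xstar in joint:
        pc = {x: (1 - epsi) * px + (epsi if x == xstar else 0.0)
              for x, px in joint.items()}
        U_e = sum(u(x) * px for x, px in pc.items() if consistent(x))
        p_e = sum(px for x, px in pc.items() if consistent(x))
        vals.append(U_e / p_e)
    return min(vals), max(vals)

joint = {(0, 0): 0.2, (0, 1): 0.3, (1, 0): 0.1, (1, 1): 0.4}
e = lambda x: x[1] == 1
print(expectation_bounds(joint, lambda x: x[0], e, 0.1))
print(bounds_by_vertices(joint, lambda x: x[0], e, 0.1))  # same interval, up to rounding
```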

Some special cases are important. When u(x) = x_{q}, then
~~E~~[u] = ~~E~~[x_{q}], the upper expected
value of variable x_{q}. The most important special case is
u(x) = delta_{a}(x_{q}), where
delta_{a}(x_{q}) is one if x_{q} = a and zero otherwise.
In this case
~~E~~[u] = ~~p~~(x_{q} = a | e), the upper posterior
probability that x_{q} = a given the evidence e.

The posterior probability bounds are obtained from the previous expressions through simple substitutions:

~~p~~(x_{q} = a | e) =
{ (1-epsi) p(x_{q} = a, e) + epsi }/{ (1-epsi) p(e) + epsi } ,

__p__(x_{q} = a | e) =
{ (1-epsi) p(x_{q} = a, e) }/{ (1-epsi) p(e) + epsi } ,

p(x_{q} = a, e) =
sum_{x is in {x_{q} = a, e}} p^{{x_{q} = a, e}}(x).
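Plugging u(x) = delta_{a}(x_{q}) into the expectation bounds yields the probability interval directly. A small sketch of the two substituted formulas (the input numbers are illustrative):

```python
# Posterior probability bounds under epsi-contamination:
#   upper p(x_q = a | e) = ((1-epsi) p(x_q = a, e) + epsi) / ((1-epsi) p(e) + epsi)
#   lower p(x_q = a | e) =  (1-epsi) p(x_q = a, e)         / ((1-epsi) p(e) + epsi)

def probability_bounds(p_ae, p_e, epsi):
    denom = (1 - epsi) * p_e + epsi
    return ((1 - epsi) * p_ae / denom,
            ((1 - epsi) * p_ae + epsi) / denom)

lo, hi = probability_bounds(p_ae=0.4, p_e=0.7, epsi=0.1)
print(lo, hi)  # an interval containing the precise posterior 0.4 / 0.7
```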

Thu Jan 23 15:54:13 EST 1997