
A Simple Example

Imagine a single neuron, $s_0$, that is separated from the rest of the network by two other neurons, $s_1$ and $s_2$. We want to compute bounds on the marginal $p\left(s_0\!=\!1\right)$. Since we do not know the distribution over the separator nodes, we have to consider all possible distributions, as in Equation [*]; therefore we introduce the free parameters $q\left(s_1s_2\right)$. The goal is now to compute the minimum and the maximum of the function $\sum_{s_1s_2}p\left(s_0\!=\!1 \vert s_1s_2\right)q\left(s_1s_2\right)$. Figure [*]a shows, in three dimensions, the space (the large pyramid) in which the distribution $q\left(s_1s_2\right)$ lies; $q\left(00\right)$ is implicitly given by one minus the sum of the other three values.
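Written out, the pyramid is simply the probability simplex: after eliminating $q\left(00\right)$, the three remaining values must satisfy

\begin{displaymath}
q\left(01\right),\;q\left(10\right),\;q\left(11\right)\;\ge\;0,
\qquad
q\left(01\right)+q\left(10\right)+q\left(11\right)\;\le\;1,
\end{displaymath}

where the last inequality expresses $q\left(00\right)\ge 0$.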

We can, however, add the previously computed (single-node) bounds on $p\left(s_1\right)$ and $p\left(s_2\right)$ to the problem. These restrict the space in Figure [*]a further, since, for instance (see also Equation [*]),

\begin{displaymath}
q\left(s_1\!=\!1\right)=\sum_{s_2}q\left(s_1\!=\!1,s_2\right)
=q\left(10\right)+q\left(11\right)\;\le\;p_+\left(s_1\!=\!1\right).
\qquad\textrm{(9)}
\end{displaymath}

In total this gives four independent constraints (a lower and an upper bound on each of $q\left(s_1\!=\!1\right)$ and $q\left(s_2\!=\!1\right)$), which are shown in Figure [*]a as planes in the pyramid.
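Written out, with $p_-$ denoting the lower bounds analogous to $p_+$ in Equation (9), the four constraints are

\begin{displaymath}
p_-\left(s_1\!=\!1\right)\;\le\;q\left(10\right)+q\left(11\right)\;\le\;p_+\left(s_1\!=\!1\right),
\qquad
p_-\left(s_2\!=\!1\right)\;\le\;q\left(01\right)+q\left(11\right)\;\le\;p_+\left(s_2\!=\!1\right).
\end{displaymath}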

By adding this information, the space in which $q\left(s_1s_2\right)$ may lie is restricted to the region shown in Figure [*]b. In the same figure we have added black lines, which correspond to the planes on which the objective function is constant. A standard linear programming tool will immediately return the maximum and the minimum of this function, thus bounding the marginal $p\left(s_0\!=\!1\right)$.
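As a concrete illustration, the following sketch sets up and solves this small linear program with scipy.optimize.linprog. All numerical values (the conditionals $p\left(s_0\!=\!1\vert s_1s_2\right)$ and the single-node bounds) are hypothetical placeholders; in the actual algorithm they would come from the network parameters and the previously computed bounds.

import numpy as np
from scipy.optimize import linprog

# A minimal sketch, not the author's implementation. All numbers are
# hypothetical: the conditionals p(s_0=1 | s_1 s_2) and the single-node
# bounds would come from the network and the earlier bounding stage.
# Free variables: x = (q(01), q(10), q(11)); q(00) = 1 - sum(x).
p_cond = np.array([0.1, 0.4, 0.3, 0.8])  # p(s_0=1 | 00), ..., p(s_0=1 | 11)
lo1, hi1 = 0.2, 0.6                      # p_-(s_1=1), p_+(s_1=1)
lo2, hi2 = 0.3, 0.7                      # p_-(s_2=1), p_+(s_2=1)

# Objective: sum_{s1 s2} p(s_0=1 | s1 s2) q(s1 s2).
# Eliminating q(00) turns it into the affine function p_cond[0] + c @ x.
c = p_cond[1:] - p_cond[0]

# The four bound constraints plus q(00) >= 0, as rows of A_ub @ x <= b_ub:
#   q(10)+q(11) <= hi1   and  -(q(10)+q(11)) <= -lo1
#   q(01)+q(11) <= hi2   and  -(q(01)+q(11)) <= -lo2
#   q(01)+q(10)+q(11) <= 1
A_ub = np.array([[ 0.0,  1.0,  1.0],
                 [ 0.0, -1.0, -1.0],
                 [ 1.0,  0.0,  1.0],
                 [-1.0,  0.0, -1.0],
                 [ 1.0,  1.0,  1.0]])
b_ub = np.array([hi1, -lo1, hi2, -lo2, 1.0])
bounds = [(0.0, 1.0)] * 3                # each q(.) is a probability

low = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)    # minimize
high = linprog(-c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)  # maximize
print("p(s_0=1) lies in [%.3f, %.3f]"
      % (p_cond[0] + low.fun, p_cond[0] - high.fun))

The two calls minimize $c\cdot x$ and $-c\cdot x$ respectively; adding back the constant term $p\left(s_0\!=\!1\vert 00\right)$ then gives the lower and upper bound on the marginal.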

