
## A Simple Example

Imagine a single binary neuron, $n$, that is separated from the rest of the network by two other binary neurons, $s_1$ and $s_2$. We want to compute bounds on the marginal $p(n)$. Since we do not know the distribution over the separator nodes, we have to consider all possible distributions, as discussed earlier. Therefore we introduce the free parameters $q(s_1, s_2)$, one for each of the four joint states of the separators. The goal is now to compute the minimum and maximum of the function $p(n)$, which is linear in these parameters. In Figure a we show in three dimensions the space (the large pyramid) in which the distribution $q$ lies; the fourth value is implicitly given by one minus the sum of the other three.
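To see why this is a linear program, write the separator states as $s_1, s_2 \in \{0,1\}$ and the free joint distribution as $q(s_1, s_2)$ (notation chosen here for illustration). The conditional probabilities of the neuron given its separators are fixed by the network, so the marginal is linear in the free parameters:

```latex
p(n{=}1) \;=\; \sum_{s_1, s_2 \in \{0,1\}} p(n{=}1 \mid s_1, s_2)\, q(s_1, s_2) .
```

Since the coefficients $p(n{=}1 \mid s_1, s_2)$ are constants, minimizing or maximizing this expression over the simplex of valid $q$ is a standard linear programming problem.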

We can, however, add the previously computed (single-node) bounds on $p(s_1)$ and $p(s_2)$ to the problem. These restrict the space in Figure a further, since for instance

$$p^-(s_1) \;\le\; q(s_1{=}1, s_2{=}0) + q(s_1{=}1, s_2{=}1) \;\le\; p^+(s_1). \tag{9}$$

Together with the analogous bounds on $p(s_2)$, this gives four independent constraints, which are shown in Figure a as planes cutting the pyramid.
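Spelled out (with $p^-$ and $p^+$ denoting the earlier lower and upper single-node bounds, and $q(a,b)$ abbreviating $q(s_1{=}a, s_2{=}b)$; this notation is assumed here for illustration), the four inequalities form two pairs:

```latex
p^-(s_1) \;\le\; q(1,0) + q(1,1) \;\le\; p^+(s_1),
\qquad
p^-(s_2) \;\le\; q(0,1) + q(1,1) \;\le\; p^+(s_2).
```

Each pair contributes one lower and one upper half-space, so all four constraints appear as planes slicing the simplex.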

By adding this information, the space in which $q$ may lie is restricted to that shown in Figure b. In the same figure we have added black lines, which correspond to the planes on which the objective function $p(n)$ is constant. A standard linear programming tool will immediately return the maximum and the minimum of this function, thus bounding the marginal $p(n)$.
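The procedure can be sketched with an off-the-shelf LP solver. The following uses SciPy's `linprog`; the conditional probabilities $p(n{=}1 \mid s_1, s_2)$ and the single-node bounds are made-up illustrative numbers, not values from the text:

```python
# Sketch of the LP described above: bound p(n=1) over all separator
# distributions q(s1, s2) consistent with the single-node bounds.
# All numeric values below are hypothetical, for illustration only.
import numpy as np
from scipy.optimize import linprog

# Objective coefficients: p(n=1 | s1, s2) for (s1, s2) in
# (0,0), (0,1), (1,0), (1,1) -- assumed known from the network.
c = np.array([0.9, 0.4, 0.3, 0.1])

# Equality constraint: the four parameters q(s1, s2) sum to one.
A_eq = np.ones((1, 4))
b_eq = np.array([1.0])

# Single-node bounds (hypothetical): l1 <= p(s1=1) <= u1, etc.
# p(s1=1) = q(1,0) + q(1,1);  p(s2=1) = q(0,1) + q(1,1).
l1, u1 = 0.2, 0.6
l2, u2 = 0.3, 0.7
A_ub = np.array([
    [0,  0,  1,  1],   #  p(s1=1) <= u1
    [0,  0, -1, -1],   # -p(s1=1) <= -l1
    [0,  1,  0,  1],   #  p(s2=1) <= u2
    [0, -1,  0, -1],   # -p(s2=1) <= -l2
])
b_ub = np.array([u1, -l1, u2, -l2])

bounds = [(0.0, 1.0)] * 4  # each q(s1, s2) lies in [0, 1]

# Minimize c.q for the lower bound; minimize -c.q for the upper bound.
lower = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
upper = linprog(-c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(f"{lower.fun:.3f} <= p(n=1) <= {-upper.fun:.3f}")
```

The two solver calls return the minimum and maximum of the objective over the restricted simplex, exactly the pair of bounds on the marginal described above.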

Martijn Leisink 2003-08-18