
Fixed Points and Convergence

Figure: Results for a Boltzmann machine ring with an arbitrary number of nodes, where all thresholds and all weights take the same value. The left panel shows the gap between the upper and lower bound on the mean of the nodes. Once the algorithm is close enough to the final bounding values, it converges exponentially: the distance to its asymptotic value is proportional to $\alpha^n$, where $n$ is the number of iterations. The parameter $\alpha$ is shown in the right panel.

It is possible to derive the fixed point of the bound propagation algorithm analytically for a Boltzmann ring if we take a single value for all weights ($w$) and a single value for all thresholds ($\theta$). Due to this symmetry all single-node marginals in such a network are equal. Moreover, all upper and lower bounds on the single nodes must be identical. This implies that at the fixed point the following holds for any $i$:

\begin{displaymath}
p_+\left(s_i\right)=\max_q\sum_{s_{i-1},s_{i+1}}p\left(s_i\vert s_{i-1}s_{i+1}\right)q\left(s_{i-1}s_{i+1}\right)
\end{displaymath} (12)

under the constraints
\begin{eqnarray*}
q\left(s_{i-1}\right)&\le&p_+\left(s_{i-1}\right)=p_+\left(s_i\right)\\
q\left(s_{i+1}\right)&\le&p_+\left(s_{i+1}\right)=p_+\left(s_i\right)\\
q\left(s_{i-1}\right)&\ge&p_-\left(s_{i-1}\right)=p_-\left(s_i\right)\\
q\left(s_{i+1}\right)&\ge&p_-\left(s_{i+1}\right)=p_-\left(s_i\right)
\end{eqnarray*}
and similarly for the lower bound. The conditional probability in Equation [*] is given by
\begin{displaymath}
p\left(s_i\vert s_{i-1}s_{i+1}\right)\propto\exp\left(\theta s_i+ws_{i-1}s_i+ws_is_{i+1}\right)
\end{displaymath} (13)
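These fixed-point equations can also be iterated numerically. The following sketch is my own illustration, not the author's code: it implements bound propagation for the symmetric ring with the smallest clusters, where the upper (lower) bound update maximizes (minimizes) the expression of equation (12) over joint distributions $q$ of the two neighbours whose marginals respect the current bounds. The resulting tiny linear program is solved exactly by enumerating the vertices of its constraint polytope.

```python
# Illustrative sketch, not the author's original code: the bound
# propagation fixed point for a symmetric Boltzmann ring with the
# smallest clusters (|S_mar| = 1, |S_sep| = 2).  The symbols w and
# theta follow the text; every function name here is my own.
import math
from itertools import combinations

def cond_p1(a, b, w, theta):
    """p(s_i=1 | s_{i-1}=a, s_{i+1}=b) from equation (13), s in {-1,+1}."""
    return 1.0 / (1.0 + math.exp(-2.0 * (theta + w * a + w * b)))

def det3(A):
    """Determinant of a 3x3 matrix given as nested lists."""
    return (A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
          - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
          + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]))

def extremal(c, lo, hi, maximize):
    """Optimize sum_ab c[a,b] q(a,b) over joint distributions q of the two
    neighbours whose marginals q(.=1) lie in [lo, hi].  The feasible set is
    a small polytope in (m1, m2, j) = (q(s_{i-1}=1), q(s_{i+1}=1), q(1,1)),
    so the linear program is solved exactly by vertex enumeration."""
    ineqs = [((1, 0, 0), lo), ((-1, 0, 0), -hi),      # lo <= m1 <= hi
             ((0, 1, 0), lo), ((0, -1, 0), -hi),      # lo <= m2 <= hi
             ((0, 0, 1), 0.0), ((1, 0, -1), 0.0),     # q(1,1), q(1,-1) >= 0
             ((0, 1, -1), 0.0), ((-1, -1, 1), -1.0)]  # q(-1,1), q(-1,-1) >= 0
    best = None
    for trio in combinations(ineqs, 3):
        A = [list(g) for g, _ in trio]
        h = [v for _, v in trio]
        d = det3(A)
        if abs(d) < 1e-12:
            continue                      # the three planes do not intersect
        sol = []
        for k in range(3):                # Cramer's rule
            Ak = [row[:] for row in A]
            for r in range(3):
                Ak[r][k] = h[r]
            sol.append(det3(Ak) / d)
        m1, m2, j = sol
        if any(g[0]*m1 + g[1]*m2 + g[2]*j < v - 1e-9 for g, v in ineqs):
            continue                      # intersection lies outside polytope
        q = {(1, 1): j, (1, -1): m1 - j, (-1, 1): m2 - j,
             (-1, -1): 1.0 - m1 - m2 + j}
        val = sum(c[ab] * q[ab] for ab in q)
        if best is None or (val > best if maximize else val < best):
            best = val
    return best

def ring_bounds(w, theta, iters=200):
    """Iterate the fixed-point equation for the upper and lower bound on
    p(s_i=1); by symmetry one pair of numbers describes every node."""
    c = {(a, b): cond_p1(a, b, w, theta) for a in (1, -1) for b in (1, -1)}
    lo, hi = 0.0, 1.0                     # start from the trivial bounds
    for _ in range(iters):
        hi = extremal(c, lo, hi, maximize=True)
        lo = extremal(c, lo, hi, maximize=False)
    return lo, hi
```

For $w=0$ the bounds collapse onto the exact marginal $\sigma(2\theta)$; for $w\ne0$ a finite gap remains, in line with the behaviour described for the left panel of the figure.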

From these equations one can derive the fixed point for $p_+\left(s_i\right)$ and $p_-\left(s_i\right)$. It turns out, however, to be easier to determine the fixed point of the upper and lower bound on the mean, i.e. $p_+\left(s_i=1\right)-p_-\left(s_i=-1\right)$ and $p_-\left(s_i=1\right)-p_+\left(s_i=-1\right)$. This has no effect on the principal results such as tightness and convergence speed.
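For $\pm1$ units this reformulation is immediate (a standard identity, spelled out here for completeness): since $\left\langle s_i\right\rangle=p\left(s_i=1\right)-p\left(s_i=-1\right)$, valid single-node bounds combine into bounds on the mean,
\begin{displaymath}
p_-\left(s_i=1\right)-p_+\left(s_i=-1\right)\;\le\;\left\langle s_i\right\rangle\;\le\;p_+\left(s_i=1\right)-p_-\left(s_i=-1\right).
\end{displaymath}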

In Figure [*]a the difference between the upper and lower bound on the means is shown for various choices of the weight and threshold. As we saw before, tight bounds are obtained for small weights and for somewhat larger, positive weights, whereas negative weights result in rather poor, but still non-trivial, bounds. It should be mentioned, however, that these results correspond to the smallest possible clusters ( $\left\vert S_\mathrm{\scriptscriptstyle mar}\right\vert=1$ and $\left\vert S_\mathrm{\scriptscriptstyle sep}\right\vert=2$) for the bound propagation algorithm. As Figure [*] showed, the bounds can easily be improved by enlarging the clusters.

Close to the fixed point the bound propagation algorithm converges exponentially: the distance to the asymptotic value can be written as $\alpha^n$, where $n$ is the number of iterations and $\alpha$ is a number between zero and one indicating the convergence speed. The closer $\alpha$ is to one, the slower the algorithm converges. In Figure [*]b $\alpha$ is shown for the same weights and thresholds. Clearly, larger weights induce slower convergence. This is what we see in general: probabilities that are close to deterministic slow down the convergence.
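This exponential regime is easy to verify numerically for any smooth fixed-point iteration: near the fixed point the error shrinks by a factor $\alpha=\left\vert g'\left(x^*\right)\right\vert$ per step, so $\alpha$ can be read off from the ratio of successive differences. A minimal standalone sketch (the contraction $g=\cos$ is a generic stand-in, not the bound propagation update):

```python
# Estimate the linear convergence rate alpha of a fixed-point iteration
# x_{n+1} = g(x_n): near the fixed point x*, the differences satisfy
# |x_{n+1} - x_n| ~ C * alpha^n with alpha = |g'(x*)|, so the ratio of
# consecutive differences tends to alpha.
import math

def convergence_rate(g, x0, n=60):
    xs = [x0]
    for _ in range(n):
        xs.append(g(xs[-1]))
    diffs = [abs(b - a) for a, b in zip(xs, xs[1:])]
    return diffs[-1] / diffs[-2]

# Example contraction: g = cos, with fixed point x* ~ 0.739085, so the
# estimated rate should approach alpha = |g'(x*)| = sin(x*) ~ 0.6736.
alpha = convergence_rate(math.cos, 0.5)
```

The closer this ratio is to one, the more iterations are needed per decimal digit of accuracy, which is exactly the behaviour shown in the right panel of the figure.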

Martijn Leisink 2003-08-18