Network pruning and computation caching have proven very influential in practical implementations of belief-network inference. In fact, our own experience has shown that these optimizations typically make the difference between a usable and an unusable belief-network system.

One problem with these optimizations, however, is that their implementations are algorithm-specific, even though they are based on general principles (e.g., taking advantage of network topology). Another problem is that they can make elegant algorithms complicated and hard to understand. Moreover, these optimizations are often hard to define succinctly, and hence are not well documented within the community.

In contrast, belief-network inference can be optimized by generating Q-DAGs
using unoptimized inference algorithms, and then optimizing the generated
Q-DAG through reduction techniques. We have shown some examples of this
earlier with respect to pruning and caching optimizations.
However, whether this alternative approach to optimization is always feasible
remains an open question. A positive answer would clearly provide *an
algorithm-independent approach to optimizing belief-network inference*,
which is practically important for at least two reasons. First, Q-DAG
reduction techniques seem to be much simpler to understand and implement
since they deal with graphically represented arithmetic expressions, without
having to invoke probability or belief network theory. Second, reduction
operations are applicable to Q-DAGs generated by any belief-network algorithm.
Therefore, an optimization approach based on Q-DAG reduction would be more
systematic and accessible to a broader class of developers.
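To illustrate how such reductions operate on graphically represented arithmetic expressions, here is a minimal sketch in Python. The tuple encoding of Q-DAG nodes and the function name are assumptions for illustration, not the representation used in our implementation; the reductions shown are standard ones: folding constant subexpressions, dropping additive zeros and multiplicative ones, collapsing any product containing a zero, and memoizing so identical subexpressions are shared.

```python
def reduce_qdag(node, cache=None):
    """Reduce a Q-DAG node bottom-up.

    A node is a tuple (hypothetical encoding):
      ('const', c)        -- a numeric constant
      ('var', name)       -- an evidence-dependent leaf
      ('+', children) or ('*', children), children a tuple of nodes.
    """
    if cache is None:
        cache = {}
    if node in cache:                      # share identical subexpressions
        return cache[node]
    op = node[0]
    if op in ('const', 'var'):
        return node
    children = tuple(reduce_qdag(c, cache) for c in node[1])
    const_vals = [c[1] for c in children if c[0] == 'const']
    rest = tuple(c for c in children if c[0] != 'const')
    if op == '+':
        folded = sum(const_vals)           # fold constant addends
        identity = 0
    else:  # op == '*'
        folded = 1
        for v in const_vals:               # fold constant factors
            folded *= v
        identity = 1
        if folded == 0:                    # zero annihilates a product
            cache[node] = ('const', 0)
            return cache[node]
    if not rest:
        result = ('const', folded)         # fully constant subexpression
    elif folded == identity:
        # identity element contributes nothing; drop it
        result = rest[0] if len(rest) == 1 else (op, rest)
    else:
        result = (op, rest + (('const', folded),))
    cache[node] = result
    return result
```

For example, the expression `1 * p + 0 * q` reduces to the single node for `p`, since the zero collapses the second product and the multiplicative identity drops from the first.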

Darwiche & Provan