In this paper, we have presented a re-evaluation of windowing for separate-and-conquer rule learning algorithms. For this type of algorithm, windowing can yield significant gains in computational efficiency without a loss in predictive accuracy. We attribute this to the fact that separate-and-conquer rule learning algorithms learn each rule independently, whereas an attribute chosen by a divide-and-conquer decision tree learning algorithm becomes part of all rules that are represented by the subtree below this attribute. Based on this finding, we have further demonstrated a more flexible technique, integrative windowing, for incorporating windowing into rule learning algorithms. Good rules are immediately added to the final theory, and the examples they cover are removed from the window. This avoids re-learning these rules in all subsequent iterations of the windowing process, thus reducing the complexity of the learning problem. While most of our results have been obtained in noise-free domains, we believe that the idea of integrative windowing can be generalized to attack the problem of noise in windowing. To that end, we have outlined three basic problems that must be solved. A first implementation of straightforward solutions to these problems has achieved promising results in a simple domain with artificial noise.
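The core loop described above, in which good rules are committed to the final theory at once and their covered examples are removed from the window, might be sketched as follows. This is a much-simplified illustration, not the paper's implementation: the function and parameter names (`candidate_rules`, `covers`, `init_size`, `max_inc`) are hypothetical, and the "goodness" test below is a plain consistency check standing in for whatever criterion the actual learner uses.

```python
import random

def integrative_windowing(examples, candidate_rules, covers,
                          init_size=4, max_inc=4, seed=0):
    """Simplified sketch of integrative windowing.

    examples: list of (attributes, label) pairs with label 1 = positive.
    candidate_rules(window): proposes rules learned from the window.
    covers(rule, example): True if the rule covers the example.
    All names and the goodness test are illustrative assumptions.
    """
    rng = random.Random(seed)
    pool = examples[:]
    rng.shuffle(pool)
    window, outside = pool[:init_size], pool[init_size:]
    theory = []
    while True:
        for rule in candidate_rules(window):
            # "Good" here means the rule covers no negative example at all;
            # this global check is a simplification for the sketch.
            if all(lbl == 1 for ex, lbl in examples if covers(rule, (ex, lbl))):
                theory.append(rule)
                # Key idea: committed rules are final, so the examples they
                # cover leave the window and are never re-learned.
                window = [e for e in window if not covers(rule, e)]
                outside = [e for e in outside if not covers(rule, e)]
        # Positives still outside the window are errors of the current
        # theory: move a batch of them into the window and iterate.
        errors = [e for e in outside if e[1] == 1]
        if not errors:
            return theory
        moved = errors[:max_inc]
        window += moved
        for e in moved:
            outside.remove(e)

# Toy usage: examples are (value, label) pairs; a rule is a single value,
# covering exactly the examples carrying that value.
exs = ([(v, 1) for v in (1, 2, 3) for _ in range(3)]
       + [(v, 0) for v in (4, 5, 6) for _ in range(3)])
theory = integrative_windowing(
    exs,
    candidate_rules=lambda w: {ex for ex, lbl in w if lbl == 1},
    covers=lambda rule, e: e[0] == rule,
)
```

In this toy run the loop commits one rule per positive value, so the final theory covers all positives while the negatives never force a rule to be re-learned.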