Pessimistic pruning builds a sequence of DCs from the initial one. At each step, we remove the rule whose removal yields the lowest error among all possible single-rule removals from the current DC. Whenever the error of the current DC does not exceed the lowest error found so far, Criterion(.) returns true for all rules already tested for removal. The pruning returns the smallest DC in the sequence achieving the lowest error. This pruning is rather natural (and simple), and motivated by the fact that the induction of the large DC before pruning does not correspond to a conventional error minimization. Such a property is rather rare in ``top-down and prune'' induction algorithms. For example, common decision tree induction algorithms in this scheme incorporate very sophisticated pruning criteria (CART [Breiman et al.1984], C4.5 [Quinlan1994]).
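The greedy backward elimination described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the DC is represented as a plain list of rules and that a hypothetical `error(.)` function evaluates a candidate DC on the pruning data; both names are ours.

```python
def pessimistic_prune(rules, error):
    """Greedy backward elimination over a DC (a list of rules).

    At each step, remove the rule whose removal yields the lowest
    error among all single-rule removals; return the smallest DC in
    the resulting sequence that achieves the lowest error seen.
    """
    current = list(rules)
    best, best_err = list(current), error(current)
    while len(current) > 1:
        # Evaluate every single-rule removal from the current DC.
        candidates = [(error(current[:i] + current[i + 1:]), i)
                      for i in range(len(current))]
        err, i = min(candidates)  # lowest-error removal
        del current[i]
        # "Not greater than the lowest error found already": ties
        # favour the later (hence smaller) DC in the sequence.
        if err <= best_err:
            best, best_err = list(current), err
    return best


# Toy usage with an artificial error function (illustrative only):
# rules are integers and the "error" is the distance of their sum to 5.
pruned = pessimistic_prune([1, 2, 3, 4], lambda rs: abs(sum(rs) - 5))
```

Because the test uses `<=` rather than `<`, a tie in error always favours the smaller DC, which is exactly the "smallest DC having the lowest error" returned by the procedure.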