Recent advances in the study of voting classification algorithms have produced empirical and theoretical results that clearly demonstrate the discriminative power of ensemble classifiers. This paper addresses, from both a theoretical and an empirical standpoint, the question of whether one must give up interpretability in order to retain classification strength. To tackle this problem, we study a class of concept representations resembling multilinear discriminant polynomials, well suited to data-mining settings involving voting procedures, which we define as Decision Committees (DC). Our theoretical results show that striving for simplicity is, as for many other classes of concept representations, a computationally hard problem for DC and other complex voting procedures, and they establish the heuristic nature of previous attempts to prune adaptive boosting. We propose to adapt a scheme for building weak learners, previously successful for the induction of decision trees and decision lists, to the case of DC. This approach is original with respect to state-of-the-art algorithms for building complex voting procedures, which usually work in the strong learning framework. Our algorithm, WIDC, relies on recent or new results on partition boosting, ranking loss boosting, and pruning. It comes in two flavors, one with optimistic pruning and one with pessimistic pruning. Both achieve good experimental results on the simplicity-accuracy tradeoff; whereas optimistic pruning clearly outperforms other algorithms in terms of the size of the formulas obtained, pessimistic pruning tends to achieve a more reasonable tradeoff, with high accuracies obtained on small formulas. This is all the more interesting as pessimistic pruning relies on a natural and simple pruning procedure.
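To fix intuition about the representation studied here, the following is a minimal illustrative sketch, not the paper's implementation: it assumes a Decision Committee is a list of (rule, weight-vector) pairs, where each rule is a monomial (a conjunction over boolean features) and each weight vector carries one vote weight per class; an example is assigned the class with the largest summed weight over the rules it satisfies. All names and the example committee below are hypothetical.

```python
def satisfies(example, rule):
    """rule: dict feature_index -> required boolean value (a monomial)."""
    return all(example[i] == v for i, v in rule.items())

def dc_classify(example, committee, n_classes):
    """Sum the weight vectors of all firing rules; predict the argmax class."""
    scores = [0.0] * n_classes
    for rule, weights in committee:
        if satisfies(example, rule):
            for c in range(n_classes):
                scores[c] += weights[c]
    return max(range(n_classes), key=lambda c: scores[c])

# Hypothetical two-class committee over three boolean features:
committee = [
    ({0: True},           (0.9, 0.1)),   # if x0 holds, vote mostly for class 0
    ({1: True, 2: False}, (0.2, 0.8)),   # if x1 and not x2, vote for class 1
]
print(dc_classify([True, True, False], committee, 2))
```

Under this reading, interpretability amounts to keeping the committee small: few rules, each a short monomial, so that each vote can be inspected directly.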