Journal of Artificial Intelligence Research 11 (1999), pp. 169-198. Submitted 1/99; published 8/99.
© 1999 AI Access Foundation and Morgan Kaufmann Publishers. All rights reserved.

Popular Ensemble Methods: An Empirical Study

David Opitz
Department of Computer Science
University of Montana
Missoula, MT 59812 USA
opitz@cs.umt.edu

Richard Maclin
Computer Science Department
University of Minnesota
Duluth, MN 55812 USA
rmaclin@d.umn.edu

Abstract:

An ensemble consists of a set of individually trained classifiers (such as neural networks or decision trees) whose predictions are combined when classifying novel instances. Previous research has shown that an ensemble is often more accurate than any of the single classifiers in the ensemble. Bagging [Breiman, 1996a] and Boosting [Freund & Schapire, 1996; Schapire, 1990] are two relatively new but popular methods for producing ensembles. In this paper we evaluate these methods on 23 data sets using both neural networks and decision trees as our classification algorithms. Our results clearly support a number of conclusions. First, while Bagging is almost always more accurate than a single classifier, it is sometimes much less accurate than Boosting. On the other hand, Boosting can create ensembles that are less accurate than a single classifier, especially when using neural networks. Analysis indicates that the performance of the Boosting methods depends on the characteristics of the data set being examined. In fact, further results show that Boosting ensembles may overfit noisy data sets, thus decreasing their performance. Finally, consistent with previous studies, our work suggests that most of the gain in an ensemble's performance comes from the first few classifiers combined; however, relatively large gains can still be seen with up to 25 classifiers when Boosting decision trees.
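To make the resampling idea behind these methods concrete, the following is a minimal sketch of Bagging, not the authors' experimental code: the choice of scikit-learn's DecisionTreeClassifier as the base learner, the function names, and integer class labels are illustrative assumptions. Each classifier is trained on a bootstrap resample of the training set, and predictions are combined by majority vote. Boosting (AdaBoost) differs in that each successive classifier is trained on a reweighted version of the training set that emphasizes examples the current ensemble misclassifies.

    # Minimal Bagging sketch (illustrative assumptions, see note above).
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def bagging_fit(X, y, n_classifiers=25, seed=0):
        """Train each classifier on a bootstrap resample of the training set."""
        rng = np.random.default_rng(seed)
        n = len(X)
        ensemble = []
        for _ in range(n_classifiers):
            idx = rng.integers(0, n, size=n)      # draw n examples with replacement
            tree = DecisionTreeClassifier().fit(X[idx], y[idx])
            ensemble.append(tree)
        return ensemble

    def bagging_predict(ensemble, X):
        """Combine the individual predictions by simple majority vote."""
        votes = np.array([clf.predict(X) for clf in ensemble])  # shape: (n_classifiers, n_samples)
        # assumes class labels are non-negative integers
        return np.apply_along_axis(lambda v: np.bincount(v).argmax(), axis=0, arr=votes)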



 