We consider the question of how unlabeled data can be used to estimate the true accuracy of learned classifiers. This question is important for any autonomous learning system that must estimate its accuracy without supervision, and also when classifiers trained on one data distribution must be applied to a new distribution (e.g., document classifiers trained on one text corpus are applied to a second corpus). We show how to accurately estimate error rates from unlabeled data when given a collection of competing classifiers that make independent errors, based on the agreement rates between subsets of these classifiers. We further show that even when the classifiers do not make independent errors, both their accuracies and their error dependencies can be estimated in a multitask learning setting under practical assumptions. Experiments on two data sets demonstrate accurate estimates of accuracy from unlabeled data. These results are of practical significance in situations where labeled data is scarce, and they shed light on the more general question of how the consistency among multiple functions is related to their true accuracies.
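To make the core idea concrete, here is a minimal sketch (not the paper's full method) of agreement-based error estimation for the simplest case: three binary classifiers with independent errors, each better than chance. Writing d_i = 1 - 2*e_i for classifier i with error rate e_i, independence gives the pairwise agreement rate a_ij = (1 + d_i*d_j)/2, a system of three equations in three unknowns that can be solved in closed form. The function names and the simulation setup below are illustrative assumptions, not from the paper.

```python
import random


def simulate_predictions(n, error_rates, seed=0):
    """Draw n binary labels and predictions from classifiers
    that flip each label independently with their own error rate."""
    rng = random.Random(seed)
    y = [rng.randint(0, 1) for _ in range(n)]
    preds = [[yi if rng.random() > e else 1 - yi for yi in y]
             for e in error_rates]
    return y, preds


def estimate_error_rates(preds):
    """Recover per-classifier error rates from pairwise agreement
    rates alone (no true labels), assuming independent errors and
    all error rates below 0.5."""
    n = len(preds[0])

    def agreement(a, b):
        return sum(x == z for x, z in zip(a, b)) / n

    a12 = agreement(preds[0], preds[1])
    a13 = agreement(preds[0], preds[2])
    a23 = agreement(preds[1], preds[2])
    # a_ij = (1 + d_i * d_j) / 2  =>  d_i * d_j = 2 * a_ij - 1
    c12, c13, c23 = 2 * a12 - 1, 2 * a13 - 1, 2 * a23 - 1
    d1 = (c12 * c13 / c23) ** 0.5
    d2 = (c12 * c23 / c13) ** 0.5
    d3 = (c13 * c23 / c12) ** 0.5
    # invert d_i = 1 - 2 * e_i
    return [(1 - d) / 2 for d in (d1, d2, d3)]
```

With enough unlabeled examples, the empirical agreement rates converge to their expectations and the recovered error rates approach the true ones; when errors are correlated, this closed-form solution is biased, which is what motivates modeling error dependencies explicitly.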
diane@cs.cmu.edu