
Conclusion

This paper advances a frequentist framework based on convex sets of probability distributions. From a sequence of outcomes generated by repeated experiments, we can learn meaningful convex sets of probability distributions. This learning is accomplished by estimators that examine relative frequencies over a finite collection of subsequences of the data. The estimators are guaranteed in a strong sense (i.e., with asymptotic certainty) to dominate the convex set of distributions that generated the data. Our theorems also demonstrate that any estimator based on a finite collection of subsequences can always be improved.
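To make the idea concrete, the following is a minimal sketch, not the paper's actual estimator: it forms an interval estimate for an event by taking the extreme relative frequencies observed over a finite collection of subsequences of the data. The subsequence selection rules and the event used here are purely hypothetical placeholders.

```python
# Minimal sketch (assumed, not the paper's exact construction): interval
# estimates obtained from relative frequencies over subsequences.

def relative_frequency(outcomes, event):
    """Fraction of outcomes that fall in the given event (a set)."""
    hits = sum(1 for x in outcomes if x in event)
    return hits / len(outcomes)

def interval_estimate(outcomes, event, selectors):
    """Lower/upper envelope of relative frequencies over subsequences.

    `selectors` is a finite collection of functions; each maps the full
    sequence to one of its subsequences (e.g., even-indexed outcomes).
    """
    freqs = [relative_frequency(sel(outcomes), event)
             for sel in selectors
             if len(sel(outcomes)) > 0]
    return min(freqs), max(freqs)

if __name__ == "__main__":
    data = [0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1]
    # Hypothetical subsequence selection rules, for illustration only.
    selectors = [
        lambda s: s,        # the full sequence
        lambda s: s[0::2],  # even positions
        lambda s: s[1::2],  # odd positions
    ]
    low, high = interval_estimate(data, {1}, selectors)
    print(f"event {{1}}: lower = {low:.3f}, upper = {high:.3f}")
```

The spread between the lower and upper envelopes reflects how much the relative frequencies disagree across the chosen subsequences; richer collections of subsequences can only tighten the guarantees, in line with the result that any finite-collection estimator admits improvement.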

The work started by Walley and Fine and extended in this paper opens several important doors for advocates of belief representations based on convex sets of distributions. First, it demonstrates that these representations can actually be learned from observed data. Second, and perhaps most importantly, the connection to observed outcomes addresses what has been a critical weakness of convex-set representations. Bayesians have had the philosophical upper hand primarily because of the connection between probability and observed frequency. Among other things, this connection implies that it is possible to detect when a Bayesian degree of belief is or is not properly calibrated. No such notion has previously been available for convex-set representations of belief. We now know that the connection of subjective probability to observed frequencies is not the exclusive property of the Bayesian interpretation, but can indeed be enjoyed by belief frameworks based on credal sets as well.
