Faster online calibration without randomization: interval forecasts and the power of two choices
November 2, 2022 (GHC 8102)

Abstract: We study the problem of making calibrated probabilistic forecasts for a binary sequence generated sequentially by an adversary. It is well known that deterministic forecasts can be fooled, whereas randomized forecasts can be calibrated at a rate of $O(1/\sqrt{T})$. In our work, we show that this rate is tight.
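For concreteness, here is one standard way to measure calibration (the notation is mine, not from the abstract): the forecaster predicts values $p_t$ from a finite grid in $[0,1]$, nature reveals bits $y_t \in \{0,1\}$, and the $\ell_1$-calibration error after $T$ rounds is
$$\mathrm{Cal}(T) \;=\; \sum_{p} \frac{n_p(T)}{T}\,\bigl|\bar{y}_p(T) - p\bigr|,$$
where $n_p(T) = |\{t \le T : p_t = p\}|$ and $\bar{y}_p(T)$ is the empirical frequency of $y_t = 1$ over those rounds. The $O(1/\sqrt{T})$ claim is then a bound on (the expectation of) this quantity; the talk may use a closely related variant such as $\varepsilon$-calibration.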

Our main result concerns an alternative setup that leads to a surprising improvement in this rate. Namely, we allow the forecaster to make two nearby probabilistic forecasts, or equivalently an interval forecast of small width. This interval forecasting accords the forecaster significant power: a fast calibration rate of $O(1/T)$ can be achieved without deploying any randomization.
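The abstract does not spell out how an interval forecast is scored; one natural formalization (an assumption on my part, not a statement from the speaker) is that the forecaster outputs an interval $[a_t, b_t]$ with $b_t - a_t \le \varepsilon$, and the endpoint closer to the realized outcome is the one entered into the calibration tally:
$$p_t \;=\; \begin{cases} b_t & \text{if } y_t = 1,\\ a_t & \text{if } y_t = 0,\end{cases}$$
with $\mathrm{Cal}(T)$ then computed over these effective forecasts as above.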

Talk overview: This will be a whiteboard talk. I will show that the standard calibration setup reduces to internal regret minimization for a certain problem, and sketch a forecaster based on this reduction. Then I will go over our proposed interval forecasting setup, and describe the algorithm that achieves a deterministic $O(1/T)$ rate in this setup.
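For orientation (this is my own illustration, not the algorithm from the talk): the classical randomized scheme achieving the $O(1/\sqrt{T})$ rate can be sketched in a few lines. It maintains a signed calibration deficit per grid point, finds two adjacent grid points whose deficits straddle zero, and randomizes between them so that the deficit update has conditional mean zero no matter which bit arrives. A minimal sketch in the spirit of Foster (1999), with hypothetical names throughout:

    import random

    def foster_style_forecaster(outcomes, m=100):
        """Randomized calibration sketch in the spirit of Foster (1999).

        Forecasts live on the grid p_i = i/m. deficit[i] tracks
        e_i = sum of (y_t - p_i) over the rounds forecast at p_i.
        """
        grid = [i / m for i in range(m + 1)]
        deficit = [0.0] * (m + 1)
        forecasts = []
        for y in outcomes:
            # deficit[0] >= 0 >= deficit[m] always (bin 0 only accrues
            # y >= 0, bin m only accrues y - 1 <= 0), so an adjacent pair
            # whose deficits straddle zero always exists.
            i = next(j for j in range(m) if deficit[j] >= 0 >= deficit[j + 1])
            lo, hi = deficit[i], deficit[i + 1]
            # Pick bin i with probability q chosen so q*lo + (1-q)*hi = 0:
            # the deficit update then has conditional mean zero, and the
            # squared-deficit potential grows by at most O(1) + O(1/m) per
            # round, giving an O(1/sqrt(T) + 1/m) calibration guarantee.
            q = 1.0 if lo == 0.0 == hi else -hi / (lo - hi)
            k = i if random.random() < q else i + 1
            forecasts.append(grid[k])
            deficit[k] += y - grid[k]
        return forecasts

    # Example: forecast an arbitrary bit stream on an 11-point grid.
    # print(foster_style_forecaster([1, 0, 1, 1, 0, 1], m=10))

The power-of-two-choices connection, as I read the abstract, is that outputting the interval $[\,$grid[i], grid[i+1]$\,]$ in place of the random draw removes the randomization entirely and, with the right bookkeeping, speeds the rate up to $O(1/T)$.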