Faster online calibration without randomization: interval forecasts and the power of two choices. (arXiv:2204.13087v2 [cs.LG] UPDATED)
July 28, 2022, 1:11 a.m. | Chirag Gupta, Aaditya Ramdas
stat.ML updates on arXiv.org
We study the problem of making calibrated probabilistic forecasts for a binary sequence generated by an adversarial nature. Following the seminal paper of Foster and Vohra (1998), nature is often modeled as an adaptive adversary who sees all activity of the forecaster except the randomization that the forecaster may deploy. A number of papers have proposed randomized forecasting strategies that achieve an $\epsilon$-calibration error rate of $O(1/\sqrt{T})$, which we prove is tight in general. On the other hand, it is …
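To make the calibration error in the abstract concrete, here is a minimal sketch of one common formalization: an $\ell_1$-style binned calibration error, where each forecast level is compared against the empirical frequency of 1s on the rounds where it was used, weighted by usage. Note this is an illustrative definition; the paper's precise $\epsilon$-calibration notion may differ.

```python
from collections import defaultdict

def calibration_error(forecasts, outcomes):
    """Weighted l1 calibration error of a sequence of probabilistic
    forecasts for binary outcomes.

    For each distinct forecast value p, compare p to the empirical
    frequency of 1s among the rounds where p was forecast, and weight
    the gap by the fraction of rounds on which p was used.
    """
    T = len(forecasts)
    buckets = defaultdict(list)          # forecast value -> outcomes seen
    for p, y in zip(forecasts, outcomes):
        buckets[p].append(y)
    err = 0.0
    for p, ys in buckets.items():
        freq = sum(ys) / len(ys)         # empirical frequency at level p
        err += (len(ys) / T) * abs(p - freq)
    return err

# A forecaster that always says 0.5 on an alternating sequence is
# perfectly calibrated under this definition:
print(calibration_error([0.5, 0.5, 0.5, 0.5], [1, 0, 1, 0]))  # 0.0
```

A forecaster competing against an adaptive adversary would aim to keep this quantity below $\epsilon$ as $T$ grows; the $O(1/\sqrt{T})$ rate in the abstract bounds how fast randomized strategies can drive it down in the worst case.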