A Simple and Optimal Policy Design with Safety against Heavy-tailed Risk for Multi-armed Bandits. (arXiv:2206.02969v3 [stat.ML] UPDATED)
July 5, 2022, 1:12 a.m. | David Simchi-Levi, Zeyu Zheng, Feng Zhu
stat.ML updates on arXiv.org arxiv.org
We design new policies that ensure both worst-case optimality for expected
regret and light-tailed risk for the regret distribution in the stochastic
multi-armed bandit problem. Recently, arXiv:2109.13595 showed that
information-theoretically optimized bandit algorithms suffer from serious
heavy-tailed risk; that is, the worst-case probability of incurring a linear
regret decays only polynomially, at rate $1/T$, as $T$ (the time horizon)
increases. Inspired by their results, we further show that widely used policies
(e.g., Upper Confidence Bound, Thompson Sampling) also …
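For readers unfamiliar with the setting, the following is a minimal sketch of the Upper Confidence Bound policy (UCB1) mentioned above, run on Bernoulli arms; the arm means, horizon, and function name are illustrative choices, not taken from the paper:

```python
import math
import random

def ucb1(arm_means, T, seed=0):
    """Run UCB1 for T rounds on Bernoulli arms; return cumulative pseudo-regret."""
    rng = random.Random(seed)
    K = len(arm_means)
    counts = [0] * K          # number of pulls per arm
    sums = [0.0] * K          # total observed reward per arm
    best = max(arm_means)     # mean of the optimal arm
    regret = 0.0
    for t in range(1, T + 1):
        if t <= K:
            a = t - 1         # play each arm once to initialize
        else:
            # choose the arm maximizing empirical mean + exploration bonus
            a = max(range(K), key=lambda i: sums[i] / counts[i]
                    + math.sqrt(2 * math.log(t) / counts[i]))
        reward = 1.0 if rng.random() < arm_means[a] else 0.0
        counts[a] += 1
        sums[a] += reward
        regret += best - arm_means[a]
    return regret
```

Such a policy achieves logarithmic expected regret, but, per the result discussed in the abstract, the tail of its regret distribution can still be heavy in the worst case.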