July 5, 2022, 1:11 a.m. | David Simchi-Levi, Zeyu Zheng, Feng Zhu

cs.LG updates on arXiv.org

We design new policies that ensure both worst-case optimality for expected
regret and light-tailed risk for regret distribution in the stochastic
multi-armed bandit problem. Recently, arXiv:2109.13595 showed that
information-theoretically optimized bandit algorithms suffer from serious
heavy-tailed risk; that is, the worst-case probability of incurring a linear
regret slowly decays at a polynomial rate of $1/T$, as $T$ (the time horizon)
increases. Inspired by their results, we further show that widely used policies
(e.g., Upper Confidence Bound, Thompson Sampling) also …

Tags: arxiv, design, ml, multi-armed bandits, policy, risk, safety
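For context on the policies named in the abstract, here is a minimal sketch of the classic UCB1 index rule on Bernoulli arms. This is the kind of widely used baseline the paper critiques, not the new policy it proposes; the arm means, exploration constant, and horizon below are illustrative assumptions.

```python
import math
import random


def ucb1(arm_means, horizon, seed=0):
    """Run UCB1 on Bernoulli arms and return the cumulative pseudo-regret.

    arm_means, the exploration constant (2.0), and the Bernoulli reward
    model are illustrative choices, not taken from the paper above.
    """
    rng = random.Random(seed)
    k = len(arm_means)
    counts = [0] * k      # number of pulls per arm
    sums = [0.0] * k      # total reward collected per arm
    best = max(arm_means)
    regret = 0.0

    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1   # pull each arm once to initialize the indices
        else:
            # UCB1 index: empirical mean plus an exploration bonus that
            # shrinks as an arm accumulates pulls.
            arm = max(
                range(k),
                key=lambda a: sums[a] / counts[a]
                + math.sqrt(2.0 * math.log(t) / counts[a]),
            )
        reward = 1.0 if rng.random() < arm_means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
        regret += best - arm_means[arm]

    return regret


if __name__ == "__main__":
    print(ucb1([0.5, 0.6], horizon=10_000))
```

Index policies of this form are tuned for expected regret; the abstract's point is that such optimization can leave the worst-case probability of a linear-regret trajectory decaying only polynomially in $T$, rather than at a light-tailed (e.g., exponential) rate.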
