Jan. 6, 2022, 2:10 a.m. | Joe Suk, Samory Kpotufe

cs.LG updates on arXiv.org

In bandits with distribution shifts, one aims to automatically detect an
unknown number $L$ of changes in reward distribution, and restart exploration
when necessary. While this problem remained open for many years, a recent
breakthrough of Auer et al. (2018, 2019) provides the first adaptive procedure
to guarantee an optimal (dynamic) regret of $\sqrt{LT}$, over $T$ rounds, with no
knowledge of $L$. However, not all distributional shifts are equally severe;
e.g., if no best-arm switches occur, then we cannot rule …
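To make the setting concrete, below is a minimal, hypothetical Python sketch of a piecewise-stationary bandit where a UCB learner restarts whenever a crude sliding-window test flags a shift in an arm's rewards. This is not the procedure of Auer et al. or the authors' method; the window size, threshold, and phase structure are illustrative assumptions only.

```python
# Illustrative sketch (not the authors' algorithm): UCB1 with a naive
# change-detection restart on a piecewise-stationary Bernoulli bandit.
# Constants (window, thresh, phase means) are assumptions chosen for
# readability, not tuned for the sqrt(LT) guarantee discussed above.
import numpy as np

rng = np.random.default_rng(0)

def run(T=10_000, means_by_phase=((0.3, 0.7), (0.8, 0.2)),
        window=200, thresh=0.25):
    K = len(means_by_phase[0])
    phase_len = T // len(means_by_phase)
    counts = np.zeros(K)
    sums = np.zeros(K)
    recent = [[] for _ in range(K)]   # sliding windows for crude shift detection
    total_reward = 0.0
    restarts = 0

    for t in range(T):
        means = means_by_phase[min(t // phase_len, len(means_by_phase) - 1)]

        # UCB1 index; pull each arm once before trusting the index.
        if counts.min() == 0:
            arm = int(np.argmin(counts))
        else:
            ucb = sums / counts + np.sqrt(2 * np.log(t + 1) / counts)
            arm = int(np.argmax(ucb))

        reward = float(rng.random() < means[arm])
        total_reward += reward
        counts[arm] += 1
        sums[arm] += reward
        recent[arm].append(reward)
        if len(recent[arm]) > window:
            recent[arm].pop(0)

        # Crude shift test: if the recent window mean of this arm drifts far
        # from its long-run mean, forget all statistics and restart exploration.
        if counts[arm] >= 2 * window:
            long_run = sums[arm] / counts[arm]
            if abs(np.mean(recent[arm]) - long_run) > thresh:
                counts[:] = 0
                sums[:] = 0
                recent = [[] for _ in range(K)]
                restarts += 1

    return total_reward, restarts

if __name__ == "__main__":
    reward, restarts = run()
    print(f"total reward: {reward:.0f}, restarts triggered: {restarts}")
```

In this toy setup every distributional change also switches the best arm, so a restart is warranted; the abstract's point is that when shifts do not change the best arm, blanket restarts may be wasteful and better regret than $\sqrt{LT}$ may be possible.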

