Oct. 13, 2022, 1:13 a.m. | Zixin Zhong, Wang Chi Cheung, Vincent Y. F. Tan

cs.LG updates on arXiv.org | arxiv.org

We study the Pareto frontier of two archetypal objectives in multi-armed
bandits, namely, regret minimization (RM) and best arm identification (BAI)
with a fixed horizon. It is folklore that the balance between exploitation and
exploration is crucial for both RM and BAI, but exploration is more critical in
achieving the optimal performance for the latter objective. To this end, we
design and analyze the BoBW-lil'UCB$(\gamma)$ algorithm. Complementarily, by
establishing lower bounds on the regret achievable by any algorithm with a …
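To make the exploration–exploitation trade-off concrete, here is a minimal, illustrative sketch of a lil'UCB-style index policy in which a parameter `gamma` scales the confidence bonus. This is an assumption-laden stand-in, not the paper's BoBW-lil'UCB$(\gamma)$: the arm model (Bernoulli), the bonus form, and the constants `eps` and `delta` are all hypothetical choices made only for illustration.

```python
# Illustrative sketch only: a lil'UCB-style index policy on Bernoulli arms.
# `gamma` here is a hypothetical exploration-scaling knob, standing in for the
# trade-off parameter in BoBW-lil'UCB(gamma); this is NOT the paper's algorithm.
import math
import random


def lil_ucb_style(means, horizon, gamma=1.0, eps=0.01, delta=0.1, seed=0):
    """Pull each arm once, then repeatedly pull the arm maximizing its
    empirical mean plus a log-log ("law of the iterated logarithm") bonus.
    Returns (cumulative regret, recommended arm) at the fixed horizon."""
    rng = random.Random(seed)
    k = len(means)
    pulls = [0] * k
    sums = [0.0] * k

    def pull(i):
        reward = 1.0 if rng.random() < means[i] else 0.0
        pulls[i] += 1
        sums[i] += reward
        return reward

    total = 0.0
    for i in range(k):                      # initialization: one pull per arm
        total += pull(i)

    def index(i):
        n = pulls[i]
        bonus = math.sqrt(
            2 * (1 + eps) * math.log(math.log((1 + eps) * n + 2) / delta) / n
        )
        return sums[i] / n + gamma * bonus   # gamma scales exploration

    for _ in range(horizon - k):
        total += pull(max(range(k), key=index))

    regret = horizon * max(means) - total            # regret minimization view
    recommended = max(range(k), key=lambda i: sums[i] / pulls[i])  # BAI view
    return regret, recommended
```

In this sketch, raising `gamma` inflates the bonus and hence the amount of exploration, which (as the abstract notes) helps identification of the best arm at the horizon while typically worsening cumulative regret; lowering it does the opposite.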

