Feb. 19, 2024, 5:43 a.m. | Yihan Du, R. Srikant, Wei Chen

cs.LG updates on arXiv.org

arXiv:2401.08961v2 Announce Type: replace
Abstract: Cascading bandits have gained popularity in recent years due to their applicability to recommendation systems and online advertising. In the cascading bandit model, at each timestep an agent recommends an ordered subset of items (called an item list) from a pool of items, each associated with an unknown attraction probability. The user then examines the list and clicks the first attractive item (if any), after which the agent receives a reward. The goal of …
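
As a rough illustration of the interaction protocol described in the abstract, the sketch below simulates a single timestep of the cascading feedback model. The item pool, attraction probabilities, and function names are hypothetical placeholders, not taken from the paper.

```python
import random

def cascading_round(attraction_probs, recommended_list):
    """Simulate one timestep of the cascading bandit feedback model.

    The user scans the recommended list in order and clicks the first
    item found attractive (item i is attractive independently with its
    unknown probability attraction_probs[i]). The agent observes the
    click position (or None) and receives reward 1 if any item is clicked.
    """
    for position, item in enumerate(recommended_list):
        if random.random() < attraction_probs[item]:
            return position, 1  # position of the clicked item, reward 1
    return None, 0              # user examined the whole list, no click

# Hypothetical example: 5 items in the pool, agent recommends an ordered list of 3.
true_probs = [0.1, 0.4, 0.05, 0.3, 0.2]
click_pos, reward = cascading_round(true_probs, recommended_list=[1, 3, 4])
print(click_pos, reward)
```

In the bandit setting the agent does not know `true_probs`; it must estimate the attraction probabilities from the observed clicks while choosing which ordered list to recommend at each timestep.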

