Feb. 19, 2024, 5:43 a.m. | Yihan Du, R. Srikant, Wei Chen

cs.LG updates on arXiv.org

arXiv:2401.08961v2 Announce Type: replace
Abstract: Cascading bandits have gained popularity in recent years due to their applicability to recommendation systems and online advertising. In the cascading bandit model, at each timestep, an agent recommends an ordered subset of items (called an item list) from a pool of items, each associated with an unknown attraction probability. The user then examines the list and clicks the first attractive item (if any), after which the agent receives a reward. The goal of …
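
To make the interaction protocol concrete, below is a minimal sketch of a single timestep of a cascading bandit, assuming Bernoulli click feedback and a simple greedy list selection; the function names and the selection rule are illustrative assumptions, not the paper's algorithm.

    # Minimal sketch of one cascading-bandit round (illustrative only; the greedy
    # list selection and all names here are assumptions, not the paper's method).
    import numpy as np

    rng = np.random.default_rng(0)

    n_items, list_len = 10, 3
    true_attraction = rng.uniform(0.05, 0.4, size=n_items)   # unknown to the agent
    estimated_attraction = np.full(n_items, 0.5)              # agent's current estimates

    def recommend(estimates, k):
        """Agent proposes an ordered item list, e.g. the k items with highest estimates."""
        return list(np.argsort(estimates)[::-1][:k])

    def user_feedback(item_list, attraction, rng):
        """User scans the list top-down and clicks the first attractive item, if any."""
        for position, item in enumerate(item_list):
            if rng.random() < attraction[item]:
                return position, item        # click observed at this position
        return None, None                    # no click anywhere in the list

    item_list = recommend(estimated_attraction, list_len)
    click_pos, clicked_item = user_feedback(item_list, true_attraction, rng)
    reward = 1.0 if clicked_item is not None else 0.0
    print(item_list, click_pos, reward)

In a full bandit loop, the agent would use the observed click position to update its attraction estimates (items examined before the click were implicitly not attractive on that round) and then recommend again.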

