Off-Policy Evaluation for Large Action Spaces via Embeddings. (arXiv:2202.06317v2 [cs.LG] UPDATED)
Web: http://arxiv.org/abs/2202.06317
June 17, 2022, 1:11 a.m. | Yuta Saito, Thorsten Joachims
cs.LG updates on arXiv.org
Off-policy evaluation (OPE) in contextual bandits has seen rapid adoption in
real-world systems, since it enables offline evaluation of new policies using
only historical log data. Unfortunately, when the number of actions is large,
existing OPE estimators -- most of which are based on inverse propensity score
weighting -- degrade severely and can suffer from extreme bias and variance.
This foils the use of OPE in many applications from recommender systems to
language models. To overcome this issue, we propose …
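
To make the failure mode concrete, here is a minimal sketch (not the paper's method) of the standard inverse propensity score (IPS) estimator for off-policy evaluation in a contextual bandit. All names, policy choices, and numbers below are illustrative assumptions; the example only shows how importance weights blow up as the action space grows, which is the degradation the abstract describes.

```python
# Vanilla IPS off-policy estimate: V_hat = mean( pi_e(a|x) / pi_0(a|x) * r ).
# Illustrative toy setup, not the estimator proposed in the paper.
import numpy as np

rng = np.random.default_rng(0)
n, n_actions = 10_000, 1_000  # a large action space, as in the abstract

# Logged data: actions drawn from a uniform logging policy pi_0.
pi_0 = np.full(n_actions, 1.0 / n_actions)
actions = rng.integers(n_actions, size=n)
rewards = rng.binomial(1, 0.05, size=n).astype(float)  # hypothetical rewards

# Target policy pi_e: nearly deterministic on action 0 (an assumption).
pi_e = np.full(n_actions, 0.01 / (n_actions - 1))
pi_e[0] = 0.99

# IPS estimate: average of importance-weighted logged rewards.
weights = pi_e[actions] / pi_0[actions]
v_ips = weights.mean() * 0 + np.mean(weights * rewards)
print(f"IPS estimate: {v_ips:.4f}")

# With 1,000 actions, the weight whenever action 0 was logged is
# 0.99 * 1000 = 990, so a handful of samples dominate the estimate,
# producing the extreme variance the abstract refers to.
print(f"max importance weight: {weights.max():.1f}")
```

Because each importance weight scales with the inverse of the logging probability, a uniform logging policy over one thousand actions yields weights near one thousand, so the estimator's variance grows roughly with the size of the action space. This is the motivation for the embedding-based approach the paper goes on to propose.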
Latest AI/ML/Big Data Jobs
Machine Learning Researcher - Saalfeld Lab
@ Howard Hughes Medical Institute - Chevy Chase, MD | Ashburn, Virginia
Project Director, Machine Learning in US Health
@ ideas42.org | Remote, US
Data Science Intern
@ NannyML | Remote
Machine Learning Engineer NLP/Speech
@ Play.ht | Remote
Research Scientist, 3D Reconstruction
@ Yembo | Remote, US
Clinical Assistant or Associate Professor of Management Science and Systems
@ University at Buffalo | Buffalo, NY