Sample Complexity of Preference-Based Nonparametric Off-Policy Evaluation with Deep Networks
Feb. 28, 2024, 5:43 a.m. | Zihao Li, Xiang Ji, Minshuo Chen, Mengdi Wang
cs.LG updates on arXiv.org
Abstract: A recently popular approach to reinforcement learning is to learn from human preference data. Indeed, human preference data are now used with classic reinforcement learning algorithms such as actor-critic methods, which involve evaluating an intermediate policy over a reward learned from human preference data under distribution shift, a problem known as off-policy evaluation (OPE). Such an algorithm includes (i) learning a reward function from a human preference dataset, and (ii) learning the expected cumulative reward of a target policy. Despite …
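The two-step pipeline described above can be sketched in miniature. This is a hypothetical illustration, not the paper's method: it fits a linear reward model (the paper studies deep networks) on synthetic pairwise preferences drawn from a Bradley-Terry model, then Monte Carlo-estimates a target policy's cumulative reward under the learned reward. All names and parameters here are made up for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4                        # feature dimension of (state, action) pairs
w_true = rng.normal(size=d)  # unknown ground-truth reward parameters

# --- Step (i): learn a reward function from pairwise preferences ---
# Each datum is a pair (x_a, x_b), labeled 1 when x_a is preferred,
# sampled from the Bradley-Terry model P(a > b) = sigmoid(r(x_a) - r(x_b)).
n_pairs = 2000
Xa = rng.normal(size=(n_pairs, d))
Xb = rng.normal(size=(n_pairs, d))
p = 1.0 / (1.0 + np.exp(-((Xa - Xb) @ w_true)))
y = rng.binomial(1, p)

w = np.zeros(d)              # learned reward parameters
lr = 0.1
diff = Xa - Xb
for _ in range(500):         # gradient descent on the logistic (BT) loss
    pred = 1.0 / (1.0 + np.exp(-(diff @ w)))
    w -= lr * diff.T @ (pred - y) / n_pairs

# --- Step (ii): evaluate a target policy under the learned reward ---
# Monte Carlo estimate of expected cumulative reward over short synthetic
# trajectories, standing in for the OPE step on off-policy data.
horizon, n_traj = 5, 1000
returns = []
for _ in range(n_traj):
    feats = rng.normal(size=(horizon, d))  # features of visited (s, a) pairs
    returns.append(float(np.sum(feats @ w)))  # learned, not true, reward
value_estimate = float(np.mean(returns))
```

Because the synthetic labels follow the Bradley-Terry model exactly, the learned `w` recovers `w_true` closely; with a deep network in place of the linear model, the abstract's sample-complexity question is how many preference pairs step (i) needs for step (ii) to be accurate under distribution shift.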