On the Reuse Bias in Off-Policy Reinforcement Learning. (arXiv:2209.07074v1 [cs.LG])
Sept. 16, 2022, 1:11 a.m. | Chengyang Ying, Zhongkai Hao, Xinning Zhou, Hang Su, Dong Yan, Jun Zhu
cs.LG updates on arXiv.org
Importance sampling (IS) is a popular technique in off-policy evaluation, which re-weights the returns of trajectories in the replay buffer to boost sample efficiency. However, training with IS can be unstable, and previous attempts to address this issue have mainly focused on analyzing the variance of IS. In this paper, we reveal that the instability is also related to a new notion, the Reuse Bias of IS: the bias in off-policy evaluation caused by the reuse of the replay buffer …
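
For readers unfamiliar with the estimator the abstract refers to, here is a minimal sketch of trajectory-level importance sampling for off-policy evaluation. It is not code from the paper; the function names (pi_target, pi_behavior, is_return_estimate) and the (state, action, reward) tuple layout are illustrative assumptions.

```python
import numpy as np

def is_return_estimate(trajectories, pi_target, pi_behavior, gamma=0.99):
    """Trajectory-level importance-sampling estimate of the target
    policy's expected return, using data collected by a behavior policy.

    Each trajectory is a list of (state, action, reward) tuples;
    pi_target(a, s) and pi_behavior(a, s) return action probabilities.
    """
    estimates = []
    for traj in trajectories:
        ratio = 1.0  # product of per-step likelihood ratios
        g = 0.0      # discounted return of the trajectory
        for t, (s, a, r) in enumerate(traj):
            ratio *= pi_target(a, s) / pi_behavior(a, s)
            g += (gamma ** t) * r
        estimates.append(ratio * g)
    # Unbiased when evaluated on a freshly collected buffer; per the
    # abstract, reusing the same replay buffer for both evaluation and
    # policy updates is what introduces the Reuse Bias.
    return np.mean(estimates)
```

Each trajectory's return is re-weighted by the product of per-step ratios pi_target/pi_behavior, which is exactly the re-weighting of buffered returns the abstract describes.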