Seizing Serendipity: Exploiting the Value of Past Success in Off-Policy Actor-Critic
Feb. 20, 2024, 5:44 a.m. | Tianying Ji, Yu Luo, Fuchun Sun, Xianyuan Zhan, Jianwei Zhang, Huazhe Xu
cs.LG updates on arXiv.org
Abstract: Learning high-quality Q-value functions plays a key role in the success of many modern off-policy deep reinforcement learning (RL) algorithms. Previous work has focused on addressing the value overestimation issue, an outcome of adopting function approximators and off-policy learning. Deviating from this common viewpoint, we observe that Q-values are in fact underestimated in the latter stage of the RL training process, primarily due to the use of inferior actions from the current policy in Bellman updates as …
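The abstract refers to the standard one-step Bellman update used in off-policy actor-critic methods, where the target value bootstraps from the current policy's action at the next state. The paper's own method is not shown here; the following is only a minimal sketch of that generic Bellman target, illustrating how a suboptimal next-state action from the current policy lowers the bootstrapped target (the underestimation the authors describe). All names (`bellman_target`, the toy Q-values) are hypothetical.

```python
def bellman_target(reward, q_next, done, gamma=0.99):
    """Generic one-step Bellman target: r + gamma * (1 - done) * Q(s', a'),
    where a' is sampled from the current policy (off-policy actor-critic style)."""
    return reward + gamma * (1.0 - done) * q_next

# Toy illustration: the same transition, bootstrapped with two different
# next-state actions. If the current policy proposes an inferior action
# (lower Q(s', a')), the regression target -- and hence the learned Q -- drops.
q_next_good_action = 2.0      # hypothetical Q(s', a*) for a strong past action
q_next_policy_action = 1.2    # hypothetical Q(s', a') for the current policy's action

target_good = bellman_target(reward=1.0, q_next=q_next_good_action, done=0.0)
target_policy = bellman_target(reward=1.0, q_next=q_next_policy_action, done=0.0)
# target_policy < target_good: bootstrapping on the inferior action
# systematically pulls the Q-estimate downward.
```

This is only the vanilla target; the paper's contribution concerns how to exploit high-value past actions when forming such targets, which the truncated abstract does not fully specify.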