Oct. 27, 2022, 1:12 a.m. | Sicen Li, Qinyun Tang, Yiming Pang, Xinmeng Ma, Gang Wang

cs.LG updates on arXiv.org

Model-free deep reinforcement learning (RL) has been successfully applied to challenging continuous control domains. However, poor sample efficiency prevents these methods from being widely used in real-world settings. This paper introduces a novel model-free algorithm, Realistic Actor-Critic (RAC), which can be combined with any off-policy RL algorithm to improve sample efficiency. RAC employs Universal Value Function Approximators (UVFA) to simultaneously learn a policy family with the same neural network, each member with a different trade-off between underestimation and overestimation. To learn such policies, …
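The excerpt only describes the mechanism at a high level. As a hedged illustration of the UVFA idea it mentions (not the authors' actual implementation; the class name, network sizes, and the scalar input `beta` are assumptions chosen for exposition), a single critic network could take an extra conditioning input that indexes where on the underestimation/overestimation spectrum it should sit:

```python
import torch
import torch.nn as nn


class UVFAConditionedCritic(nn.Module):
    """Illustrative Q-network in the spirit of UVFA: besides state and
    action, it takes a scalar beta in [0, 1] so that one set of weights
    can represent a family of critics, from pessimistic (low beta,
    underestimating) to optimistic (high beta, overestimating).
    This is a sketch, not the RAC paper's architecture."""

    def __init__(self, state_dim: int, action_dim: int, hidden: int = 256):
        super().__init__()
        # +1 input dimension for beta, which indexes the policy/critic family
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim + 1, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state, action, beta):
        # beta has shape (batch, 1); concatenating it lets a single
        # forward pass evaluate any member of the critic family
        return self.net(torch.cat([state, action, beta], dim=-1))


if __name__ == "__main__":
    critic = UVFAConditionedCritic(state_dim=8, action_dim=2)
    s = torch.randn(4, 8)
    a = torch.randn(4, 2)
    beta = torch.rand(4, 1)  # sample a different trade-off per transition
    q = critic(s, a, beta)
    print(q.shape)  # torch.Size([4, 1])
```

In this kind of setup, the same trick would typically be applied to the actor as well, so that sampling different values of beta during training yields a whole spectrum of behaviors from a single pair of networks.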

Tags: actor-critic, arxiv, value
