Off-Policy Correction for Actor-Critic Methods without Importance Sampling. (arXiv:2208.00755v2 [cs.LG] UPDATED)
Oct. 26, 2022, 1:12 a.m. | Baturay Saglam, Dogan C. Cicek, Furkan B. Mutlu, Suleyman S. Kozat
cs.LG updates on arXiv.org (arxiv.org)
Compared to on-policy policy gradient techniques, off-policy model-free deep
reinforcement learning (RL) methods that reuse previously gathered data can
improve sampling efficiency. However, off-policy learning becomes challenging
as the discrepancy grows between the distribution of the policy of interest
and the distributions of the policies that collected the data. Although the
well-studied importance sampling and off-policy policy gradient techniques
have been proposed to compensate for this discrepancy, they usually require
collections of long trajectories, which increases computational complexity
and induces additional problems such …
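The importance-sampling correction that the abstract contrasts against is easy to sketch. The snippet below is an illustrative assumption, not the paper's proposed method (the function name is_weighted_return and the toy probabilities are hypothetical): it estimates an off-policy return by multiplying per-step ratios pi/mu along a stored trajectory, which is exactly where the dependence on long trajectories comes from.

import numpy as np

def is_weighted_return(rewards, pi_probs, mu_probs, gamma=0.99):
    """Off-policy Monte Carlo return estimate via importance sampling.

    pi_probs[t] and mu_probs[t] are the probabilities the target
    policy pi and the behaviour policy mu assign to the action
    actually taken at step t of a stored trajectory.
    """
    # Per-step importance ratios pi(a_t|s_t) / mu(a_t|s_t).
    ratios = np.asarray(pi_probs) / np.asarray(mu_probs)
    # Cumulative product rho_t = prod_{k<=t} pi_k / mu_k corrects the
    # distribution mismatch over the whole prefix of the trajectory.
    rho = np.cumprod(ratios)
    discounts = gamma ** np.arange(len(rewards))
    return np.sum(rho * discounts * np.asarray(rewards))

# Toy usage: a 5-step trajectory where pi and mu diverge slightly.
rewards  = [1.0, 0.5, 0.0, 1.0, 0.2]
pi_probs = [0.9, 0.8, 0.7, 0.9, 0.6]
mu_probs = [0.7, 0.9, 0.6, 0.8, 0.9]
print(is_weighted_return(rewards, pi_probs, mu_probs))

Because rho is a product over the trajectory prefix, its variance can grow quickly with trajectory length; this is the computational and statistical cost the abstract attributes to importance-sampling-based corrections.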