June 16, 2022, 1:11 a.m. | Maxim Kaledin, Alexander Golubev, Denis Belomestny

cs.LG updates on arXiv.org

Policy-gradient methods in Reinforcement Learning (RL) are highly versatile and
widely applied in practice, but their performance suffers from the high variance
of the gradient estimate. Several procedures have been proposed to reduce it,
including actor-critic (AC) and advantage actor-critic (A2C) methods. Recently,
these approaches have gained a new perspective with the introduction of Deep RL:
both new control variates (CV) and new sub-sampling procedures have become available
in the setting of complex models such as neural networks. The vital part of
CV-based methods is the goal …
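Since the abstract is truncated, here is a minimal, self-contained sketch of the control-variate idea it refers to: subtracting a baseline (the simplest relative of A2C's advantage) from the return leaves the score-function policy-gradient estimate unbiased while lowering its variance. The two-armed bandit environment, the softmax policy, and all names (`true_means`, `sample_grad`) are hypothetical illustrations chosen for brevity, not the paper's actual method or setting.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy problem: a 2-armed bandit with Gaussian rewards and a
# softmax policy over two logits.  Chosen only to illustrate variance
# reduction via a constant-baseline control variate.
true_means = np.array([1.0, 2.0])
theta = np.zeros(2)  # policy logits


def sample_grad(theta, baseline=0.0, n=10_000):
    """Score-function (REINFORCE) gradient samples of E[reward],
    optionally centered by a constant baseline (a simple control variate)."""
    probs = np.exp(theta) / np.exp(theta).sum()
    actions = rng.choice(2, size=n, p=probs)
    rewards = true_means[actions] + rng.normal(0.0, 1.0, size=n)
    # grad log pi(a) for a softmax policy is one_hot(a) - probs
    score = np.eye(2)[actions] - probs            # shape (n, 2)
    return (rewards - baseline)[:, None] * score  # per-sample gradients


g_plain = sample_grad(theta, baseline=0.0)
g_cv = sample_grad(theta, baseline=true_means.mean())  # advantage-like centering

print("mean grad (no CV):  ", g_plain.mean(axis=0))
print("mean grad (with CV):", g_cv.mean(axis=0))
print("variance  (no CV):  ", g_plain.var(axis=0))
print("variance  (with CV):", g_cv.var(axis=0))
```

Running the sketch shows near-identical mean gradients but a noticeably smaller per-component variance for the centered estimator; the estimate stays unbiased because the expected score E[∇ log π(a)] is zero, so any constant baseline contributes nothing in expectation.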

Tags: arxiv, gradient, lg, policy, policy-gradient, variance
