Variance Reduction for Policy-Gradient Methods via Empirical Variance Minimization. (arXiv:2206.06827v2 [cs.LG] UPDATED)
Web: http://arxiv.org/abs/2206.06827
June 16, 2022, 1:11 a.m. | Maxim Kaledin, Alexander Golubev, Denis Belomestny
cs.LG updates on arXiv.org
Policy-gradient methods in Reinforcement Learning (RL) are broadly applicable and widely used in practice, but their performance suffers from the high variance of the gradient estimate. Several procedures have been proposed to reduce this variance, including actor-critic (AC) and advantage actor-critic (A2C) methods. Recently these approaches have gained a new perspective with the rise of Deep RL: both new control variates (CV) and new sub-sampling procedures became available in the setting of complex models such as neural networks. A vital part of CV-based methods is the goal …
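As context for the variance problem the abstract describes, the sketch below is a minimal illustration, not the paper's empirical-variance-minimization estimator: on a hypothetical two-armed bandit with made-up rewards, it shows how subtracting a baseline control variate from the score-function (REINFORCE) gradient leaves its mean unchanged while shrinking its variance. Every name and value in it is an assumption chosen for illustration.

```python
import numpy as np

# Illustrative sketch only (not the paper's EVM method): a toy 2-armed
# bandit showing how a baseline control variate reduces the variance of
# the REINFORCE / score-function gradient estimate without biasing it.

rng = np.random.default_rng(0)

theta = 0.3                      # logit of arm-1 probability (assumed toy parameter)
rewards = np.array([1.0, 3.0])   # deterministic rewards for arms 0 and 1 (assumed)

def grad_estimates(n_samples, baseline):
    """Per-sample gradient estimates of d/dtheta E[R] with baseline b:
       g = (R - b) * d log pi(a | theta) / dtheta."""
    p1 = 1.0 / (1.0 + np.exp(-theta))      # pi(a = 1) under a sigmoid policy
    actions = rng.random(n_samples) < p1   # sample actions from the policy
    r = rewards[actions.astype(int)]
    # d log pi / dtheta is (1 - p1) when a = 1 and -p1 when a = 0
    score = np.where(actions, 1.0 - p1, -p1)
    return (r - baseline) * score

for b in [0.0, rewards.mean()]:
    g = grad_estimates(100_000, baseline=b)
    print(f"baseline={b:.1f}: mean={g.mean():+.4f}, var={g.var():.4f}")
```

Running the sketch prints roughly the same gradient mean for both baselines, while the variance drops by about two orders of magnitude once the baseline sits near the mean reward. The estimate stays unbiased because the score function has zero expectation under the policy, which is the property any CV-based variance-reduction scheme exploits.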
Latest AI/ML/Big Data Jobs
Machine Learning Researcher - Saalfeld Lab
@ Howard Hughes Medical Institute - Chevy Chase, MD | Ashburn, Virginia
Project Director, Machine Learning in US Health
@ ideas42.org | Remote, US
Data Science Intern
@ NannyML | Remote
Machine Learning Engineer NLP/Speech
@ Play.ht | Remote
Research Scientist, 3D Reconstruction
@ Yembo | Remote, US
Clinical Assistant or Associate Professor of Management Science and Systems
@ University at Buffalo | Buffalo, NY