March 22, 2024, 4:43 a.m. | Arsenii Mustafin, Alex Olshevsky, Ioannis Ch. Paschalidis

cs.LG updates on arXiv.org arxiv.org

arXiv:2211.16237v3 Announce Type: replace
Abstract: Temporal difference (TD) learning is a policy evaluation method in reinforcement learning whose performance can be enhanced by variance reduction techniques. Recently, multiple works have sought to fuse TD learning with the Stochastic Variance Reduced Gradient (SVRG) method to achieve a geometric rate of convergence. However, the resulting convergence rate is significantly weaker than what is achieved by SVRG in the setting of convex optimization. In this work we utilize a recent interpretation of TD-learning as the …
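To make the "fuse TD with SVRG" idea concrete, here is a minimal sketch of TD(0) with an SVRG-style control variate under linear value-function approximation. This is a generic illustration of the variance-reduced update, not the paper's specific algorithm; the function names and parameters (`td_gradient`, `svrg_td`, `features`, `rewards`, `next_features`, `gamma`, `step_size`, `epochs`) are hypothetical placeholders for a batch of observed transitions.

```python
import numpy as np

def td_gradient(theta, phi, r, phi_next, gamma):
    """Negative TD(0) semi-gradient for one transition (phi, r, phi_next)."""
    td_error = r + gamma * phi_next @ theta - phi @ theta
    return -td_error * phi

def svrg_td(features, rewards, next_features, gamma=0.95,
            step_size=0.1, epochs=10, rng=np.random.default_rng(0)):
    """SVRG-style variance-reduced TD(0) over a fixed batch of transitions."""
    n, d = features.shape
    theta = np.zeros(d)
    for _ in range(epochs):
        # Snapshot: full-batch mean of the TD semi-gradient at the anchor point.
        theta_snap = theta.copy()
        full_grad = np.mean(
            [td_gradient(theta_snap, features[i], rewards[i],
                         next_features[i], gamma) for i in range(n)],
            axis=0)
        for _ in range(n):
            i = rng.integers(n)
            # Variance-reduced update: stochastic semi-gradient at theta,
            # minus its value at the snapshot, plus the full-batch anchor term.
            g = (td_gradient(theta, features[i], rewards[i],
                             next_features[i], gamma)
                 - td_gradient(theta_snap, features[i], rewards[i],
                               next_features[i], gamma)
                 + full_grad)
            theta -= step_size * g
    return theta
```

The periodic full-batch snapshot is what gives SVRG-type methods their reduced gradient variance near the solution; the paper's contribution concerns how sharp a geometric convergence rate such a scheme can attain in the TD setting.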
