March 19, 2024, 4:44 a.m. | Lingwei Zhu, Zheng Chen, Matthew Schlegel, Martha White

cs.LG updates on arXiv.org

arXiv:2301.11476v4 Announce Type: replace
Abstract: Many policy optimization approaches in reinforcement learning incorporate a Kullback-Leibler (KL) divergence to the previous policy, to prevent the policy from changing too quickly. This idea was initially proposed in a seminal paper on Conservative Policy Iteration, with approximations given by algorithms like TRPO and Munchausen Value Iteration (MVI). We continue this line of work by investigating a generalized KL divergence -- called the Tsallis KL divergence -- which uses the $q$-logarithm in the definition. …
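As a rough illustration only (not necessarily the paper's exact formulation), here is a minimal NumPy sketch of the $q$-logarithm and one common convention for the Tsallis KL divergence between discrete distributions, which recovers the standard KL divergence as $q \to 1$:

```python
import numpy as np

def q_log(x, q):
    """q-logarithm: ln_q(x) = (x^(1-q) - 1) / (1 - q); recovers ln(x) as q -> 1."""
    if np.isclose(q, 1.0):
        return np.log(x)
    return (np.power(x, 1.0 - q) - 1.0) / (1.0 - q)

def tsallis_kl(p, p_prev, q):
    """One common convention (assumed here, not taken from the paper):
    D_q(p || p_prev) = -sum_i p_i * ln_q(p_prev_i / p_i).
    For q = 1 this reduces to the standard KL divergence."""
    p, p_prev = np.asarray(p, dtype=float), np.asarray(p_prev, dtype=float)
    return -np.sum(p * q_log(p_prev / p, q))

# Sanity check: q -> 1 recovers the usual KL divergence to the previous policy.
p      = np.array([0.7, 0.2, 0.1])   # current policy over 3 actions
p_prev = np.array([0.5, 0.3, 0.2])   # previous policy
kl     = np.sum(p * np.log(p / p_prev))
print(tsallis_kl(p, p_prev, 1.0), kl)   # matches standard KL
print(tsallis_kl(p, p_prev, 2.0))       # Tsallis KL with q = 2
```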

