April 12, 2024, 4:41 a.m. | Yunxiang Li, Rui Yuan, Chen Fan, Mark Schmidt, Samuel Horváth, Robert M. Gower, Martin Takáč

cs.LG updates on arXiv.org

arXiv:2404.07525v1 Announce Type: new
Abstract: Policy gradient is a widely utilized and foundational algorithm in the field of reinforcement learning (RL). Although the method is renowned for its convergence guarantees and stability compared to other RL algorithms, its practical application is often hindered by sensitivity to hyper-parameters, particularly the step-size. In this paper, we introduce the integration of the Polyak step-size into RL, which automatically adjusts the step-size without prior knowledge. To adapt this method to RL settings, we address several issues, including unknown …
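For context, below is a minimal sketch of the classical Polyak step-size rule the abstract refers to, applied to a toy deterministic objective rather than an RL policy-gradient loss. The rule sets the step-size to (f(x_t) - f*) / ||∇f(x_t)||², so no step-size tuning is needed when the optimal value f* is known. The toy objective, the assumed-known f*, and the variable names here are illustrative assumptions; the paper's specific adaptation to RL (where f* is unknown and gradients are stochastic) is not reproduced here.

```python
# Minimal sketch of the classical Polyak step-size on a toy quadratic objective.
# Assumptions: f* is known (it generally is not in RL) and gradients are exact.
import numpy as np

def f(x):
    # Toy quadratic objective; a hypothetical stand-in for an RL loss.
    return 0.5 * np.sum(x ** 2)

def grad_f(x):
    # Exact gradient of the toy objective.
    return x

f_star = 0.0                 # assumed known optimal value
x = np.array([3.0, -2.0])    # arbitrary starting point

for t in range(50):
    g = grad_f(x)
    denom = np.dot(g, g) + 1e-12            # guard against division by zero
    step = (f(x) - f_star) / denom          # Polyak step-size: adapts automatically
    x = x - step * g

print(f"final objective: {f(x):.3e}")       # near zero for this toy problem
```

On this quadratic the rule yields a constant step of 0.5, halving the iterate each step; the interest of the Polyak rule is that the same formula adapts the step-size across problems without manual tuning.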
