March 5, 2024, 2:44 p.m. | Yota Hashizume, Koshi Oishi, Kenji Kashima

cs.LG updates on arXiv.org

arXiv:2403.01805v1 Announce Type: cross
Abstract: Shannon entropy regularization is widely adopted in optimal control due to its ability to promote exploration and enhance robustness, e.g., maximum entropy reinforcement learning known as Soft Actor-Critic. In this paper, Tsallis entropy, which is a one-parameter extension of Shannon entropy, is used for the regularization of linearly solvable MDP and linear quadratic regulators. We derive the solution for these problems and demonstrate its usefulness in balancing between exploration and sparsity of the obtained control …

