March 12, 2024, 4:44 a.m. | Vincent Leon, S. Rasoul Etesami

cs.LG updates on arXiv.org

arXiv:2304.00155v3 Announce Type: replace
Abstract: We consider online reinforcement learning in episodic Markov decision process (MDP) with unknown transition function and stochastic rewards drawn from some fixed but unknown distribution. The learner aims to learn the optimal policy and minimize their regret over a finite time horizon through interacting with the environment. We devise a simple and efficient model-based algorithm that achieves $\widetilde{O}(LX\sqrt{TA})$ regret with high probability, where $L$ is the episode length, $T$ is the number of episodes, and …
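The truncated abstract does not spell out the algorithm itself, so the following is only an illustrative sketch of the generic model-based, optimism-based recipe such regret bounds typically rest on (a UCBVI-style learner): maintain empirical transition and reward estimates, add an exploration bonus that shrinks with visit counts, plan by backward induction on the optimistic model, and act greedily. The function name `ucbvi`, the bonus form `L/sqrt(n)`, and the toy MDP are all assumptions for illustration, not the paper's method.

```python
import numpy as np

def ucbvi(P, R, L, T, seed=0, bonus_scale=1.0):
    """UCBVI-style optimistic model-based learner on a tabular episodic MDP.

    P: true transitions, shape (X, A, X); R: true mean rewards in [0, 1],
    shape (X, A). Both are hidden from the learner, which only sees sampled
    transitions and noisy rewards. Returns the total reward over T episodes.
    Bonus form and constants are illustrative assumptions, not the paper's.
    """
    rng = np.random.default_rng(seed)
    X, A = R.shape
    counts = np.zeros((X, A, X))   # transition visit counts
    reward_sum = np.zeros((X, A))  # running reward sums
    total_reward = 0.0

    for _ in range(T):
        # Empirical model plus a count-based optimism bonus.
        n = counts.sum(axis=2)                                   # (X, A) visits
        P_hat = np.where(n[..., None] > 0,
                         counts / np.maximum(n, 1)[..., None],
                         1.0 / X)                                # uniform prior
        R_hat = reward_sum / np.maximum(n, 1)
        bonus = bonus_scale * L / np.sqrt(np.maximum(n, 1))

        # Backward induction on the optimistic model; value-to-go from
        # step h is at most L - h, so clip the optimistic Q there.
        Q = np.zeros((L + 1, X, A))
        V = np.zeros((L + 1, X))
        for h in range(L - 1, -1, -1):
            Q[h] = np.minimum(R_hat + bonus + P_hat @ V[h + 1], L - h)
            V[h] = Q[h].max(axis=1)

        # Run one episode greedily w.r.t. the optimistic Q, updating counts.
        x = 0
        for h in range(L):
            a = int(Q[h, x].argmax())
            r = R[x, a] + rng.normal(0, 0.1)        # stochastic reward
            x_next = rng.choice(X, p=P[x, a])
            counts[x, a, x_next] += 1
            reward_sum[x, a] += r
            total_reward += r
            x = x_next
    return total_reward
```

On a toy single-state MDP where one action is clearly better, the optimism bonus drives a brief exploration phase, after which the learner locks onto the good action; the count-based bonus shrinking like $1/\sqrt{n}$ is what produces the $\sqrt{T}$ dependence in bounds of this kind.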

