Feb. 5, 2024, 3:44 p.m. | Dengwang Tang, Rahul Jain, Botao Hao, Zheng Wen

cs.LG updates on arXiv.org

In this paper, we study the problem of efficient online reinforcement learning in the infinite-horizon setting when there is an offline dataset to start with. We assume that the offline dataset was generated by an expert with an unknown level of competence, i.e., the expert is not perfect and does not necessarily follow the optimal policy. We show that if the learning agent models the behavioral policy (parameterized by a competence parameter) used by the expert, it can do substantially better …
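To make the idea concrete, below is a minimal sketch of what "modeling the behavioral policy parameterized by a competence parameter" could look like. It assumes a Boltzmann-rational expert whose action probabilities are proportional to exp(beta * Q(s, a)), with beta as the unknown competence level, and infers beta from the offline data via a simple grid posterior. The Boltzmann model, the grid inference, and all names (Q_star, beta_grid, expert_policy) are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumption: the expert is Boltzmann-rational, picking action a
# in state s with probability proportional to exp(beta * Q*(s, a)), where
# beta >= 0 is the unknown competence parameter (beta = 0: uniform random,
# beta -> infinity: optimal). The paper's parameterization may differ.
n_states, n_actions = 5, 3
Q_star = rng.normal(size=(n_states, n_actions))  # stand-in optimal Q-values


def expert_policy(beta):
    """Boltzmann policy: one action distribution per state."""
    logits = beta * Q_star
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    return p / p.sum(axis=1, keepdims=True)


# Offline dataset generated by an expert of hidden competence true_beta.
true_beta = 2.0
pi_true = expert_policy(true_beta)
states = rng.integers(n_states, size=200)
actions = np.array([rng.choice(n_actions, p=pi_true[s]) for s in states])

# Posterior over a grid of candidate competence levels (uniform prior):
# log p(beta | data) = sum_t log pi_beta(a_t | s_t) + const.
beta_grid = np.linspace(0.0, 5.0, 51)
log_post = np.zeros_like(beta_grid)
for i, b in enumerate(beta_grid):
    pi = expert_policy(b)
    log_post[i] = np.log(pi[states, actions]).sum()
post = np.exp(log_post - log_post.max())
post /= post.sum()

print("posterior mean competence:", (beta_grid * post).sum())
```

An online learner could then weight the offline data by the inferred competence when forming its prior over the environment, rather than either trusting the expert blindly or discarding the dataset.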
