Aug. 5, 2022, 1:11 a.m. | Xinqi Wang, Qiwen Cui, Simon S. Du

cs.LG updates on arXiv.org

This paper presents a systematic study of gap-dependent sample complexity in
offline reinforcement learning. Prior work showed that when the density ratio
between an optimal policy and the behavior policy is upper bounded (the optimal
policy coverage assumption), the agent can achieve an
$O\left(\frac{1}{\epsilon^2}\right)$ rate, which is also minimax optimal. We
show that, under the same optimal policy coverage assumption, the rate can be
improved to $O\left(\frac{1}{\epsilon}\right)$ when there is a positive
sub-optimality gap in the optimal $Q$-function. Furthermore, we show when …
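For context, the two quantities the abstract leans on are standard in the offline RL literature. The following is a minimal sketch of their usual definitions; the notation ($d^\pi$, $C^*$, $\Delta_{\min}$) is assumed here for illustration, not quoted from the paper itself:

% Optimal policy coverage: the occupancy measure of an optimal policy
% \pi^* is dominated by that of the behavior policy \mu, up to a
% constant C^* (the bounded density ratio the abstract refers to):
\max_{s,a} \; \frac{d^{\pi^*}(s,a)}{d^{\mu}(s,a)} \;\le\; C^* \;<\; \infty

% Sub-optimality gap of the optimal Q-function at a state-action pair,
% and the minimum positive gap over all pairs:
\Delta(s,a) \;=\; V^*(s) - Q^*(s,a), \qquad
\Delta_{\min} \;=\; \min_{(s,a)\,:\,\Delta(s,a) > 0} \Delta(s,a)

% Intuition for the improvement: when \Delta_{\min} > 0 and \epsilon is
% small relative to \Delta_{\min}, returning an \epsilon-optimal policy
% reduces to identifying the optimal action in each covered state,
% which is what permits sharpening the sample complexity from
% O(1/\epsilon^2) to O(1/\epsilon).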

arxiv gap learning lg reinforcement reinforcement learning
