Oct. 14, 2022, 1:12 a.m. | Denis Tarasov, Alexander Nikulin, Dmitry Akimov, Vladislav Kurenkov, Sergey Kolesnikov

cs.LG updates on arXiv.org arxiv.org

CORL is an open-source library that provides single-file implementations of
Deep Offline Reinforcement Learning algorithms. It emphasizes a simple
development experience with a straightforward codebase and a modern experiment
tracking tool. In CORL, we isolate each method's implementation in a distinct
single file, making performance-relevant details easier to recognise.
Additionally, an experiment tracking feature is available to help log metrics,
hyperparameters, dependencies, and more to the cloud. Finally, we have ensured
the reliability of the implementations by benchmarking a commonly employed D4RL …
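The single-file idea described above — keeping an algorithm's config, training loop, and metric logging together in one script rather than spread across modules — might look like the following minimal sketch. All names, hyperparameters, and the `log_metrics` helper here are illustrative stand-ins, not CORL's actual code; CORL itself logs to the cloud via a tracking service, which this sketch only imitates with local printing.

```python
from dataclasses import dataclass

@dataclass
class TrainConfig:
    # Hypothetical hyperparameters for illustration only.
    env_name: str = "halfcheetah-medium-v2"
    batch_size: int = 256
    num_steps: int = 3

def log_metrics(step: int, metrics: dict) -> None:
    # Stand-in for a cloud experiment tracker; prints locally instead.
    print(f"step={step} " + " ".join(f"{k}={v:.3f}" for k, v in metrics.items()))

def train(config: TrainConfig) -> list:
    # Everything the algorithm needs lives in this one file:
    # config, training loop, and logging.
    history = []
    for step in range(config.num_steps):
        loss = 1.0 / (step + 1)  # placeholder for an actual TD loss
        log_metrics(step, {"loss": loss})
        history.append(loss)
    return history

if __name__ == "__main__":
    train(TrainConfig())
```

Because every performance-relevant detail sits in the same file, reading one script is enough to understand (and modify) one algorithm — the design trade-off the abstract highlights.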

