March 18, 2024, 4:42 a.m. | Ishita Mediratta, Qingfei You, Minqi Jiang, Roberta Raileanu

cs.LG updates on arXiv.org

arXiv:2312.05742v2 Announce Type: replace
Abstract: Despite recent progress in offline learning, these methods are still trained and tested on the same environment. In this paper, we compare the generalization abilities of widely used online and offline learning methods such as online reinforcement learning (RL), offline RL, sequence modeling, and behavioral cloning. Our experiments show that offline learning algorithms perform worse on new environments than online learning ones. We also introduce the first benchmark for evaluating generalization in offline learning, collecting …

