Planning to Go Out-of-Distribution in Offline-to-Online Reinforcement Learning
March 28, 2024, 4:43 a.m. | Trevor McInroe, Adam Jelley, Stefano V. Albrecht, Amos Storkey
cs.LG updates on arXiv.org
Abstract: Offline pretraining with a static dataset followed by online fine-tuning (offline-to-online, or OtO) is a paradigm well matched to a real-world RL deployment process. In this scenario, we aim to find the best-performing policy within a limited budget of online interactions. Previous work in the OtO setting has focused on correcting for bias introduced by the policy-constraint mechanisms of offline RL algorithms. Such constraints keep the learned policy close to the behavior policy that collected …
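To make the "policy-constraint mechanisms" mentioned in the abstract concrete, here is a minimal sketch of one common such constraint, in the style of TD3+BC (Fujimoto & Gu, 2021): a behavior-cloning penalty keeps the learned policy close to the actions in the static offline dataset. This is an illustration of the general mechanism the abstract refers to, not the paper's own method; the network sizes, batch shapes, and the 2.5 scaling constant are assumptions taken from the TD3+BC recipe.

```python
# Sketch of a policy-constraint actor update (TD3+BC style), NOT the
# paper's method. The behavior-cloning term keeps the learned policy
# close to the behavior policy that collected the offline dataset.
import torch
import torch.nn as nn

state_dim, action_dim, batch = 17, 6, 256  # illustrative shapes
actor = nn.Sequential(
    nn.Linear(state_dim, 64), nn.ReLU(),
    nn.Linear(64, action_dim), nn.Tanh(),
)
critic = nn.Sequential(
    nn.Linear(state_dim + action_dim, 64), nn.ReLU(),
    nn.Linear(64, 1),
)
opt = torch.optim.Adam(actor.parameters(), lr=3e-4)

# One batch sampled from the static offline dataset (random placeholders).
s = torch.randn(batch, state_dim)
a_behavior = torch.rand(batch, action_dim) * 2 - 1  # dataset actions

a_pi = actor(s)
q = critic(torch.cat([s, a_pi], dim=-1))
lam = 2.5 / q.abs().mean().detach()  # TD3+BC's adaptive Q-term scaling

# Maximize Q while penalizing deviation from the dataset actions; the
# second term is the policy constraint that biases online fine-tuning.
loss = -(lam * q).mean() + ((a_pi - a_behavior) ** 2).mean()
opt.zero_grad()
loss.backward()
opt.step()
```

The second loss term is exactly the kind of constraint that prior OtO work tries to correct for: it anchors the fine-tuned policy to the offline data distribution even after online interaction begins.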