Planning to Go Out-of-Distribution in Offline-to-Online Reinforcement Learning
March 28, 2024, 4:43 a.m. | Trevor McInroe, Adam Jelley, Stefano V. Albrecht, Amos Storkey
cs.LG updates on arXiv.org
Abstract: Offline pretraining with a static dataset followed by online fine-tuning (offline-to-online, or OtO) is a paradigm well matched to a real-world RL deployment process. In this scenario, we aim to find the best-performing policy within a limited budget of online interactions. Previous work in the OtO setting has focused on correcting for bias introduced by the policy-constraint mechanisms of offline RL algorithms. Such constraints keep the learned policy close to the behavior policy that collected …
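The abstract's two-phase setup (constrained offline pretraining, then fine-tuning under a limited online interaction budget) can be illustrated with a toy sketch. The snippet below is not the paper's method; it is a minimal, assumed illustration on a 1-D continuous-armed bandit, where the "policy" is a single action parameter, `alpha` is an assumed constraint weight, and `ONLINE_BUDGET` is an assumed budget size.

```python
# A toy sketch of the offline-to-online (OtO) loop described in the abstract.
# All names and constants (reward, behavior_mean, alpha, ONLINE_BUDGET) are
# illustrative assumptions, not the paper's algorithm.
import numpy as np

rng = np.random.default_rng(0)

def reward(a):
    """Toy 1-D continuous-armed bandit: reward peaks at a* = 2.0."""
    return -(a - 2.0) ** 2 + rng.normal(scale=0.1)

# --- Offline phase: pretrain from a static dataset ---
# The behavior policy that collected the data acts around a = 0.5 (suboptimal).
behavior_mean = 0.5
offline_actions = behavior_mean + 0.3 * rng.standard_normal(500)
offline_rewards = np.array([reward(a) for a in offline_actions])

theta = behavior_mean  # the learned "policy" is a single action parameter
alpha = 1.0            # policy-constraint weight (assumed value)
for _ in range(200):
    # Kernel-weighted estimate of the reward gradient from the static data.
    w = np.exp(-((offline_actions - theta) ** 2) / 0.1)
    grad_r = np.sum(w * (offline_actions - theta) * offline_rewards) / (w.sum() + 1e-8)
    # The penalty term is the kind of policy constraint the abstract mentions:
    # it keeps the learned policy close to the behavior policy.
    theta += 0.05 * (grad_r - alpha * (theta - behavior_mean))

print(f"after offline pretraining: theta = {theta:.2f}")

# --- Online phase: fine-tune within a limited interaction budget ---
ONLINE_BUDGET = 50  # the "limited budget of online interactions" (assumed size)
baseline = 0.0
for t in range(ONLINE_BUDGET):
    a = theta + 0.2 * rng.standard_normal()      # explore near the current policy
    r = reward(a)                                # one online interaction
    baseline += (r - baseline) / (t + 1)         # running-average reward baseline
    theta += 0.3 * (r - baseline) * (a - theta)  # crude policy-gradient step

print(f"after online fine-tuning:  theta = {theta:.2f} (optimum is 2.00)")
```

Running the sketch shows the tension the abstract raises: the constraint keeps the pretrained policy near the behavior policy's (suboptimal) actions, so the small online budget must be spent moving beyond the offline data distribution.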