April 23, 2024, 4:43 a.m. | Sibo Gai, Donglin Wang, Li He

cs.LG updates on arXiv.org arxiv.org

arXiv:2305.13804v2 Announce Type: replace
Abstract: The ability to continually learn new skills from a sequence of pre-collected offline datasets is desirable for an agent. However, learning a sequence of offline tasks consecutively is likely to cause catastrophic forgetting under resource-limited scenarios. In this paper, we formulate a new setting, continual offline reinforcement learning (CORL), where an agent learns a sequence of offline reinforcement learning tasks and pursues good performance on all learned tasks with a small replay buffer without …
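To make the CORL setting concrete, here is a minimal sketch (not the authors' method) of sequential training over pre-collected offline datasets with a small rehearsal buffer; all names, sizes, and the `update_fn` hook are illustrative assumptions.

```python
# Sketch of continual offline RL with a small replay buffer:
# the agent trains on each offline dataset in turn and keeps only a
# small subsample of past transitions to rehearse against forgetting.
import random
from collections import deque


class SmallReplayBuffer:
    """Fixed-capacity buffer holding a subsample of transitions from past tasks."""

    def __init__(self, capacity=1000):
        self.buffer = deque(maxlen=capacity)  # oldest entries evicted first

    def add_task_subset(self, offline_dataset, per_task=100):
        # Keep only a small random subset of each finished task's dataset.
        k = min(per_task, len(offline_dataset))
        self.buffer.extend(random.sample(offline_dataset, k))

    def sample(self, batch_size):
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))


def learn_task_sequence(offline_datasets, update_fn, steps_per_task=1000):
    """Train on each offline dataset in order, mixing in rehearsal batches."""
    buffer = SmallReplayBuffer()
    for dataset in offline_datasets:
        for _ in range(steps_per_task):  # illustrative number of gradient steps
            batch = random.sample(dataset, min(32, len(dataset)))
            if len(buffer.buffer) > 0:
                batch = batch + buffer.sample(8)  # rehearse a few past transitions
            update_fn(batch)  # e.g., one offline-RL gradient update on the batch
        buffer.add_task_subset(dataset)
    return buffer
```

The key constraint of the setting is visible in the sketch: the buffer capacity stays small and fixed regardless of how many tasks have been seen, so the agent cannot simply retain every past dataset.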
