April 9, 2024, 4:44 a.m. | Kaixin Huang, Li Shen, Chen Zhao, Chun Yuan, Dacheng Tao

cs.LG updates on arXiv.org (arxiv.org)

arXiv:2401.08478v2 Announce Type: replace
Abstract: Continual offline reinforcement learning (CORL) combines continual and offline reinforcement learning, enabling agents to learn multiple tasks from static datasets without forgetting prior tasks. However, CORL faces challenges in balancing stability and plasticity. Existing methods, which employ Actor-Critic structures and experience replay (ER), suffer from distribution shifts, low efficiency, and weak knowledge-sharing. We aim to investigate whether the Decision Transformer (DT), another offline RL paradigm, can serve as a more suitable offline continual learner to address these …
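The Decision Transformer paradigm the abstract contrasts with Actor-Critic methods casts offline RL as return-conditioned sequence modeling: the model reads interleaved (return-to-go, state, action) tokens and predicts the next action, with no value bootstrapping. Below is a minimal sketch of that idea, not the paper's implementation; the module name, layer sizes, and the use of nn.TransformerEncoder as a stand-in GPT backbone are all illustrative assumptions.

```python
# Minimal sketch of return-conditioned sequence modeling in the
# Decision Transformer style. All names and sizes are illustrative.
import torch
import torch.nn as nn

class MinimalDecisionTransformer(nn.Module):
    def __init__(self, state_dim, act_dim, hidden=128, context_len=20):
        super().__init__()
        # Separate embeddings for returns-to-go, states, and actions.
        self.embed_rtg = nn.Linear(1, hidden)
        self.embed_state = nn.Linear(state_dim, hidden)
        self.embed_action = nn.Linear(act_dim, hidden)
        self.embed_time = nn.Embedding(context_len, hidden)
        # A causal Transformer encoder stands in for the GPT backbone.
        layer = nn.TransformerEncoderLayer(hidden, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.predict_action = nn.Linear(hidden, act_dim)

    def forward(self, rtg, states, actions, timesteps):
        # rtg: (B, T, 1), states: (B, T, state_dim),
        # actions: (B, T, act_dim), timesteps: (B, T) long.
        t = self.embed_time(timesteps)
        # Interleave tokens per step as (rtg_t, state_t, action_t).
        tokens = torch.stack(
            [self.embed_rtg(rtg) + t,
             self.embed_state(states) + t,
             self.embed_action(actions) + t], dim=2
        ).flatten(1, 2)  # (B, 3T, hidden)
        causal = nn.Transformer.generate_square_subsequent_mask(tokens.shape[1])
        h = self.transformer(tokens, mask=causal)
        # Predict each action from the hidden state at its state token.
        return self.predict_action(h[:, 1::3])
```

Training would simply regress predicted actions onto the dataset's actions (e.g. an MSE loss), and at evaluation the model is conditioned on a target return-to-go; this supervised-learning framing is what makes DT attractive as an offline continual learner compared to ER-based Actor-Critic pipelines.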
