April 9, 2024, 4:44 a.m. | Kaixin Huang, Li Shen, Chen Zhao, Chun Yuan, Dacheng Tao

cs.LG updates on arXiv.org

arXiv:2401.08478v2 Announce Type: replace
Abstract: Continuous offline reinforcement learning (CORL) combines continuous and offline reinforcement learning, enabling agents to learn multiple tasks from static datasets without forgetting prior tasks. However, CORL faces challenges in balancing stability and plasticity. Existing methods, employing Actor-Critic structures and experience replay (ER), suffer from distribution shifts, low efficiency, and weak knowledge-sharing. We aim to investigate whether Decision Transformer (DT), another offline RL paradigm, can serve as a more suitable offline continuous learner to address these …
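The abstract does not spell out how DT differs from Actor-Critic methods, so here is a minimal sketch of the general Decision Transformer formulation (Chen et al., 2021) that the paper builds on: offline trajectories are recast as return-conditioned token sequences of (return-to-go, state, action) triples. The function names and the `context_len` parameter below are illustrative, not taken from this paper.

```python
import numpy as np

def returns_to_go(rewards, gamma=1.0):
    """Suffix sums of rewards: R_t = sum over t' >= t of gamma^(t'-t) * r_t'."""
    rtg = np.zeros(len(rewards), dtype=np.float64)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        rtg[t] = running
    return rtg

def build_dt_sequence(states, actions, rewards, context_len=20):
    """Interleave (return-to-go, state, action) triples as in the DT
    formulation, keeping only the most recent `context_len` timesteps.
    A transformer trained on such sequences predicts the next action."""
    rtg = returns_to_go(rewards)
    start = max(0, len(states) - context_len)
    tokens = []
    for t in range(start, len(states)):
        tokens.append(("rtg", rtg[t]))
        tokens.append(("state", states[t]))
        tokens.append(("action", actions[t]))
    return tokens

if __name__ == "__main__":
    # Toy trajectory: 5 steps of random states/actions, unit rewards.
    T = 5
    states = np.random.randn(T, 3)
    actions = np.random.randn(T, 1)
    rewards = np.ones(T)
    seq = build_dt_sequence(states, actions, rewards, context_len=3)
    print([kind for kind, _ in seq])  # ['rtg', 'state', 'action', ...]
```

Because action prediction is a supervised sequence-modeling objective rather than bootstrapped value estimation, this framing sidesteps some of the distribution-shift issues the abstract attributes to Actor-Critic CORL methods; how the paper adapts it to the continual setting is not covered in the truncated abstract above.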
