April 4, 2024, 4:41 a.m. | Yi Shen, Hanyan Huang, Shan Xie

cs.LG updates on arXiv.org

arXiv:2404.02545v1 Announce Type: new
Abstract: Offline reinforcement learning learns from a static dataset without interacting with the environment, which ensures security and thus owns a good prospect of application. However, directly applying naive reinforcement learning methods usually fails in an offline environment due to function approximation errors caused by out-of-distribution(OOD) actions. To solve this problem, existing algorithms mainly penalize the Q-value of OOD actions, the quality of whose constraints also matter. Imprecise constraints may lead to suboptimal solutions, while precise …
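The paper's own constraint design is truncated above, but the general approach it refers to, penalizing the Q-values of OOD actions, is well established, e.g. the conservative regularizer of CQL (Kumar et al., 2020). Below is a minimal sketch of such a penalty, not the paper's method; the names `q_net`, `states`, `dataset_actions`, and `num_random` are hypothetical, and `q_net(state, action)` is assumed to return one Q-value per state-action pair.

```python
import torch
import torch.nn as nn

def cql_style_penalty(q_net: nn.Module,
                      states: torch.Tensor,           # (batch, state_dim)
                      dataset_actions: torch.Tensor,  # (batch, act_dim)
                      num_random: int = 10) -> torch.Tensor:
    """Conservative regularizer: push Q-values down on sampled
    (potentially OOD) actions and up on in-dataset actions."""
    batch, act_dim = dataset_actions.shape
    state_dim = states.shape[1]

    # Uniformly sampled actions stand in for OOD actions
    # (assumes actions are bounded in [-1, 1]).
    rand_actions = torch.empty(batch, num_random, act_dim,
                               device=states.device).uniform_(-1.0, 1.0)
    states_rep = states.unsqueeze(1).expand(-1, num_random, -1)

    q_rand = q_net(states_rep.reshape(-1, state_dim),
                   rand_actions.reshape(-1, act_dim)).view(batch, num_random)
    q_data = q_net(states, dataset_actions).view(batch)

    # logsumexp over the sampled actions acts as a soft maximum
    # over OOD Q-values, so minimizing this term lowers them.
    return (torch.logsumexp(q_rand, dim=1) - q_data).mean()
```

In practice such a penalty is added to the standard TD loss with a trade-off weight; how tightly it constrains OOD actions is exactly the precision issue the abstract raises.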

