March 5, 2024, 2:45 p.m. | Gen Li, Laixi Shi, Yuxin Chen, Yuejie Chi, Yuting Wei

cs.LG updates on arXiv.org

arXiv:2204.05275v3 Announce Type: replace-cross
Abstract: This paper is concerned with offline reinforcement learning (RL), which learns using pre-collected data without further exploration. Effective offline RL should be able to accommodate distribution shift and limited data coverage. However, prior algorithms and analyses either suffer from suboptimal sample complexities or incur a high burn-in cost to reach sample optimality, posing an impediment to efficient offline RL in sample-starved applications.
We demonstrate that the model-based (or "plug-in") approach achieves minimax-optimal sample complexity without …
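The model-based ("plug-in") idea referenced in the abstract can be illustrated in a few lines: estimate an empirical MDP from the offline dataset, then plan (e.g., by value iteration) on that estimate. Below is a minimal Python sketch of this generic recipe for a tabular, infinite-horizon discounted MDP; the function name and interface are hypothetical, and the paper's actual algorithm additionally handles limited coverage (e.g., via pessimism), which is omitted here.

```python
import numpy as np

def plugin_value_iteration(dataset, S, A, gamma=0.95, iters=1000):
    """Generic model-based ("plug-in") offline RL sketch for a tabular MDP.

    dataset: iterable of (s, a, r, s_next) tuples collected offline.
    S, A: number of states and actions.
    Returns a greedy policy and its estimated Q-function.
    """
    # Empirical transition counts and reward sums from the offline data.
    counts = np.zeros((S, A, S))
    reward_sum = np.zeros((S, A))
    for s, a, r, s_next in dataset:
        counts[s, a, s_next] += 1
        reward_sum[s, a] += r

    visits = counts.sum(axis=2)            # N(s, a): number of visits per pair
    safe = np.maximum(visits, 1)           # avoid division by zero for unseen pairs
    P_hat = counts / safe[:, :, None]      # empirical transition kernel P_hat(s' | s, a)
    r_hat = reward_sum / safe              # empirical mean reward r_hat(s, a)

    # Plan on the estimated MDP via value iteration.
    Q = np.zeros((S, A))
    for _ in range(iters):
        V = Q.max(axis=1)
        Q = r_hat + gamma * (P_hat @ V)
    return Q.argmax(axis=1), Q
```

Planning on the empirical model is what makes the approach "plug-in": the estimated kernel and reward are substituted directly into the Bellman recursion, with no separate exploration phase.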

