March 12, 2024, 4:42 a.m. | Zhenwen Dai, Federico Tomasi, Sina Ghiassian

cs.LG updates on arXiv.org

arXiv:2403.06826v1 Announce Type: new
Abstract: In-context learning is a promising approach for online policy learning of offline reinforcement learning (RL) methods, which can be achieved at inference time without gradient optimization. However, this method is hindered by significant computational costs resulting from the gathering of large training trajectory sets and the need to train large Transformer models. We address this challenge by introducing an In-context Exploration-Exploitation (ICEE) algorithm, designed to optimize the efficiency of in-context policy learning. Unlike existing models, …
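The core idea the abstract describes — a policy that adapts purely by conditioning on its own interaction history at inference time, with no gradient updates — can be sketched in a toy bandit setting. This is a hypothetical illustration, not the paper's method: the Transformer policy is replaced by a simple Beta-Bernoulli Thompson-sampling rule that reads the growing context and trades off exploration against exploitation.

```python
import random

def in_context_policy(history, n_arms, rng):
    """Pick an arm from the interaction history alone (no gradient step).

    Stand-in for an in-context exploration-exploitation policy: a
    Beta-Bernoulli Thompson-sampling rule plays the role of the
    in-context learner here (hypothetical; not the ICEE model).
    """
    # Beta(1, 1) priors, updated from the context of (arm, reward) pairs.
    wins = [1] * n_arms
    losses = [1] * n_arms
    for arm, reward in history:
        if reward:
            wins[arm] += 1
        else:
            losses[arm] += 1
    # Sample a plausible success rate per arm; exploit the best sample.
    samples = [rng.betavariate(wins[a], losses[a]) for a in range(n_arms)]
    return max(range(n_arms), key=lambda a: samples[a])

def run_bandit(probs, steps, seed=0):
    """Run a Bernoulli bandit; the context grows at inference time."""
    rng = random.Random(seed)
    history = []
    for _ in range(steps):
        arm = in_context_policy(history, len(probs), rng)
        reward = 1 if rng.random() < probs[arm] else 0
        history.append((arm, reward))
    return history

history = run_bandit([0.2, 0.8], steps=500, seed=0)
pulls_best = sum(1 for arm, _ in history if arm == 1)
print(pulls_best)  # most pulls concentrate on the 0.8 arm
```

All learning happens through the `history` argument: nothing is fit with gradients at test time, which is the property the paper's ICEE algorithm aims to make computationally efficient at scale.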

