June 23, 2022, 1:10 a.m. | Jinglin Chen, Aditya Modi, Akshay Krishnamurthy, Nan Jiang, Alekh Agarwal

cs.LG updates on arXiv.org

We study reward-free reinforcement learning (RL) under general non-linear function approximation, and establish sample efficiency and hardness results under various standard structural assumptions. On the positive side, we propose the RFOLIVE (Reward-Free OLIVE) algorithm for sample-efficient reward-free exploration under minimal structural assumptions, which covers the previously studied settings of linear MDPs (Jin et al., 2020b), linear completeness (Zanette et al., 2020b) and low-rank MDPs with unknown representation (Modi et al., 2021). Our analyses indicate that the explorability or reachability assumptions, …
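For orientation, the sketch below illustrates the generic reward-free RL protocol that the abstract refers to: an exploration phase that collects data without observing rewards, followed by a planning phase that can optimize any reward function handed to it afterwards. This is a minimal tabular toy, not the paper's RFOLIVE algorithm or its function-approximation setting; the `GridWorld` environment, the uniform exploration policy, and all names here are hypothetical illustrations.

```python
import numpy as np

# Hypothetical tabular environment, used only to illustrate the two-phase protocol.
class GridWorld:
    def __init__(self, n_states=5, n_actions=2, horizon=10, seed=0):
        rng = np.random.default_rng(seed)
        self.n_states, self.n_actions, self.horizon = n_states, n_actions, horizon
        # Random transition kernel: P[s, a] is a distribution over next states.
        self.P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))

    def step(self, s, a, rng):
        return rng.choice(self.n_states, p=self.P[s, a])

def explore(env, n_episodes=2000, seed=0):
    """Exploration phase: collect reward-free trajectories (here, with uniform actions)."""
    rng = np.random.default_rng(seed)
    counts = np.zeros((env.n_states, env.n_actions, env.n_states))
    for _ in range(n_episodes):
        s = 0
        for _ in range(env.horizon):
            a = rng.integers(env.n_actions)
            s_next = env.step(s, a, rng)
            counts[s, a, s_next] += 1
            s = s_next
    return counts

def plan(env, counts, reward):
    """Planning phase: given any reward, run value iteration on the empirical model."""
    totals = counts.sum(axis=2, keepdims=True)
    P_hat = np.where(totals > 0, counts / np.maximum(totals, 1), 1.0 / env.n_states)
    V = np.zeros(env.n_states)
    policy = np.zeros((env.horizon, env.n_states), dtype=int)
    for h in reversed(range(env.horizon)):
        Q = reward + P_hat @ V          # Q[s, a] = r[s, a] + E_{s'~P_hat}[V(s')]
        policy[h] = Q.argmax(axis=1)
        V = Q.max(axis=1)
    return policy, V[0]

env = GridWorld()
data = explore(env)                                  # rewards never observed here
reward = np.zeros((env.n_states, env.n_actions))     # reward revealed only at planning time
reward[env.n_states - 1, :] = 1.0
pi, value = plan(env, data, reward)
print("estimated value of start state:", value)
```

The point of the separation is that the exploration data must be informative enough to plan well for rewards that are unknown at collection time; the paper's contribution is making this possible with general non-linear function approximation rather than the tabular model estimate used in this toy.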

