Nov. 3, 2022, 1:12 a.m. | Anikait Singh, Aviral Kumar, Quan Vuong, Yevgen Chebotar, Sergey Levine

cs.LG updates on arXiv.org

Offline reinforcement learning (RL) learns policies entirely from static
datasets, thereby avoiding the challenges associated with online data
collection. Practical applications of offline RL will inevitably require
learning from datasets where the variability of demonstrated behaviors changes
non-uniformly across the state space. For example, at a red light, nearly all
human drivers behave similarly by stopping, but when merging onto a highway,
some drivers merge quickly, efficiently, and safely, while many hesitate or
merge dangerously. Both theoretically and empirically, we …
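As a hypothetical illustration of the heteroskedasticity the abstract describes (this sketch is not from the paper), the NumPy snippet below builds a toy 1-D driving dataset where action variability depends on the state, then estimates the state-conditional action variance by bucketing states. All names and the bucketing scheme are illustrative assumptions; a heteroskedastic dataset shows large variance differences between buckets.

```python
import numpy as np

# Hypothetical sketch: detect non-uniform behavior variability
# (heteroskedasticity) in an offline dataset by measuring the
# action variance conditioned on coarse state buckets.
rng = np.random.default_rng(0)

# Toy 1-D dataset mirroring the abstract's example: near a red light
# (state < 0.5) drivers act almost identically; in a merge zone
# (state >= 0.5) demonstrated behaviors vary widely.
states = rng.uniform(0.0, 1.0, size=10_000)
noise_scale = np.where(states < 0.5, 0.02, 0.5)  # non-uniform variability
actions = -1.0 + noise_scale * rng.standard_normal(10_000)

# Bucket states and report the conditional action variance per bucket.
n_buckets = 10
bucket_ids = np.minimum((states * n_buckets).astype(int), n_buckets - 1)
for b in range(n_buckets):
    var = actions[bucket_ids == b].var()
    print(f"state bucket {b}: action variance = {var:.4f}")
```

Running this prints near-zero variance for the "red light" buckets and much larger variance for the "merge" buckets, the kind of state-dependent spread that uniform distributional constraints in offline RL cannot account for.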

Tags: arxiv, constraints, datasets, heteroskedasticity, offline, support
