Offline RL With Realistic Datasets: Heteroskedasticity and Support Constraints. (arXiv:2211.01052v1 [cs.LG])
stat.ML updates on arXiv.org
Offline reinforcement learning (RL) learns policies entirely from static
datasets, thereby avoiding the challenges associated with online data
collection. Practical applications of offline RL will inevitably require
learning from datasets where the variability of demonstrated behaviors changes
non-uniformly across the state space. For example, at a red light, nearly all
human drivers behave similarly by stopping, but when merging onto a highway,
some drivers merge quickly, efficiently, and safely, while many hesitate or
merge dangerously. Both theoretically and empirically, we …
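A toy sketch of the heteroskedasticity the abstract describes: action variance that differs sharply across states (uniform stopping at a red light vs. diverse merging behavior). The data, state labels, and thresholds below are illustrative assumptions, not from the paper.

```python
import numpy as np

# Hypothetical illustration (not the paper's method): measure how the
# variance of demonstrated actions changes across states in an offline
# dataset, i.e., heteroskedastic behavior.
rng = np.random.default_rng(0)

# Synthetic dataset: state 0 ~ "red light" (drivers agree: brake),
# state 1 ~ "highway merge" (drivers vary widely in acceleration).
n = 1000
states = rng.integers(0, 2, size=n)
actions = np.where(
    states == 0,
    rng.normal(-1.0, 0.05, size=n),  # low-variance stopping behavior
    rng.normal(0.5, 0.8, size=n),    # high-variance merging behavior
)

# Per-state action variance is non-uniform across the state space.
var_by_state = {s: actions[states == s].var() for s in (0, 1)}
print(var_by_state)
```

A constraint that is uniformly tight everywhere would be too restrictive in the high-variance state and too loose in the low-variance one, which is the motivation the abstract sets up.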