Prior-dependent analysis of posterior sampling reinforcement learning with function approximation
March 19, 2024, 4:43 a.m. | Yingru Li, Zhi-Quan Luo
cs.LG updates on arXiv.org arxiv.org
Abstract: This work advances randomized exploration in reinforcement learning (RL) with function approximation modeled by linear mixture MDPs. We establish the first prior-dependent Bayesian regret bound for RL with function approximation, and we refine the Bayesian regret analysis for posterior sampling reinforcement learning (PSRL), presenting an upper bound of ${\mathcal{O}}(d\sqrt{H^3 T \log T})$, where $d$ represents the dimensionality of the transition kernel, $H$ the planning horizon, and $T$ the total number of interactions. This signifies a methodological …
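For readers unfamiliar with PSRL, the algorithm the abstract analyzes can be sketched in the simpler tabular setting (the paper's linear mixture MDP setting is more general). This is an illustrative sketch, not the paper's method: rewards are assumed known, transitions get a Dirichlet posterior, and all function and variable names are made up for this example.

```python
import numpy as np

def value_iteration(P, R, H):
    """Finite-horizon value iteration; returns a greedy policy per step.
    P: (S, A, S) transition probabilities, R: (S, A) rewards."""
    S, A, _ = P.shape
    V = np.zeros(S)
    policy = np.zeros((H, S), dtype=int)
    for h in reversed(range(H)):
        Q = R + P @ V              # (S, A): reward plus expected next-step value
        policy[h] = Q.argmax(axis=1)
        V = Q.max(axis=1)
    return policy

def psrl(true_P, R, H, episodes, seed=0):
    """Posterior sampling RL: each episode, sample an MDP from the posterior,
    plan optimally in it, act, and update the posterior with observed data."""
    rng = np.random.default_rng(seed)
    S, A, _ = true_P.shape
    counts = np.ones((S, A, S))    # Dirichlet(1, ..., 1) prior over transitions
    total_reward = 0.0
    for _ in range(episodes):
        # Thompson-sampling step: draw one plausible transition model.
        P_sample = np.array([[rng.dirichlet(counts[s, a])
                              for a in range(A)] for s in range(S)])
        policy = value_iteration(P_sample, R, H)
        s = 0
        for h in range(H):
            a = policy[h, s]
            s_next = rng.choice(S, p=true_P[s, a])
            total_reward += R[s, a]
            counts[s, a, s_next] += 1   # conjugate posterior update
            s = s_next
    return total_reward, counts
```

The regret bounds in the paper quantify how fast such a loop's cumulative reward approaches that of the optimal policy, with the prior-dependent analysis tightening the bound when the prior is informative.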