April 16, 2024, 4:44 a.m. | Huozhi Zhou, Jinglin Chen, Lav R. Varshney, Ashish Jagmohan

cs.LG updates on arXiv.org

arXiv:2010.04244v3 Announce Type: replace
Abstract: We consider reinforcement learning (RL) in episodic Markov decision processes (MDPs) with linear function approximation in a drifting environment. Specifically, both the reward and state transition functions can evolve over time, as long as their total variations do not exceed a $\textit{variation budget}$. We first develop the $\texttt{LSVI-UCB-Restart}$ algorithm, an optimistic modification of least-squares value iteration with periodic restart, and bound its dynamic regret when the variation budgets are known. Then we propose a parameter-free algorithm, $\texttt{Ada-LSVI-UCB-Restart}$, that extends to …
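The abstract describes $\texttt{LSVI-UCB-Restart}$ only at a high level, so here is a minimal sketch of the idea: standard optimistic least-squares value iteration for a linear MDP, with all collected data discarded every W episodes so the estimates can track a drifting environment. The environment interface (`env.reset()` / `env.step()`), the feature map `phi`, the bonus coefficient `beta`, and the fixed restart period `W` are illustrative assumptions; the paper chooses the restart period and bonus from the variation budget, which this sketch does not reproduce.

```python
import numpy as np

def lsvi_ucb_restart(env, phi, n_actions, d, H, K, W, beta=1.0, lam=1.0):
    """Run K episodes of optimistic least-squares value iteration,
    discarding all regression data every W episodes (the restart)."""
    data = [[] for _ in range(H)]          # per-step buffers of (s, a, r, s')
    total_reward = 0.0

    def q_values(w_h, Lam_inv_h, s, h):
        # Optimistic Q-estimate: linear fit plus an elliptical confidence
        # bonus, truncated at the maximum remaining return H - h.
        feats = np.array([phi(s, a) for a in range(n_actions)])      # (A, d)
        bonus = beta * np.sqrt(
            np.einsum('ad,de,ae->a', feats, Lam_inv_h, feats))
        return np.minimum(feats @ w_h + bonus, H - h)

    for k in range(K):
        if k % W == 0:                     # periodic restart: forget stale data
            data = [[] for _ in range(H)]

        # Backward pass: ridge regression of Bellman targets at each step.
        w = [np.zeros(d) for _ in range(H)]
        Lam_inv = [np.eye(d) / lam for _ in range(H)]
        for h in reversed(range(H)):
            Lam = lam * np.eye(d)
            y = np.zeros(d)
            for (s, a, r, s_next) in data[h]:
                f = phi(s, a)
                Lam += np.outer(f, f)
                v_next = (np.max(q_values(w[h + 1], Lam_inv[h + 1],
                                          s_next, h + 1))
                          if h + 1 < H else 0.0)
                y += f * (r + v_next)
            Lam_inv[h] = np.linalg.inv(Lam)
            w[h] = Lam_inv[h] @ y

        # Forward pass: act greedily w.r.t. the optimistic Q-estimates.
        s = env.reset()
        for h in range(H):
            a = int(np.argmax(q_values(w[h], Lam_inv[h], s, h)))
            s_next, r = env.step(a)   # assumed interface: (next_state, reward)
            data[h].append((s, a, r, s_next))
            total_reward += r
            s = s_next
    return total_reward
```

The restart period trades off freshness against sample size: restarting too often discards useful data, while restarting too rarely lets stale transitions bias the regression. When the variation budget (and hence a good restart period) is unknown, the parameter-free $\texttt{Ada-LSVI-UCB-Restart}$ variant is designed to work without that knowledge.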

