Aug. 23, 2022, 1:13 a.m. | Weichao Mao, Kaiqing Zhang, Ruihao Zhu, David Simchi-Levi, Tamer Başar

stat.ML updates on arXiv.org

We consider model-free reinforcement learning (RL) in non-stationary Markov
decision processes. Both the reward functions and the state transition
functions are allowed to vary arbitrarily over time as long as their cumulative
variations do not exceed certain variation budgets. We propose Restarted
Q-Learning with Upper Confidence Bounds (RestartQ-UCB), the first model-free
algorithm for non-stationary RL, and show that it outperforms existing
solutions in terms of dynamic regret. Specifically, RestartQ-UCB with
Freedman-type bonus terms achieves a dynamic regret bound of
$\widetilde{O}(S^{\frac{1}{3}} …
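The mechanism the abstract describes is easy to sketch: run optimistic (UCB-style) Q-learning, but periodically restart the estimates so that data gathered under outdated rewards and transitions is discarded. Below is a minimal illustrative sketch, not the paper's algorithm: it substitutes simpler Hoeffding-style bonuses for the Freedman-type bonus terms, and the environment interface (`env.reset()`, `env.step(a)` returning the next state and reward) as well as the fixed `restart_every` schedule are assumptions; in the paper the restart frequency is tuned to the variation budget.

```python
import numpy as np

def restarted_q_ucb_sketch(env, S, A, H, K, restart_every, c=1.0, delta=0.05):
    """Illustrative sketch of restarted optimistic Q-learning for a
    non-stationary episodic MDP.

    S, A: number of states / actions; H: horizon; K: number of episodes.
    restart_every: episodes between restarts (a free parameter here;
    the paper derives it from the variation budget).
    NOTE: uses Hoeffding-style bonuses, not the Freedman-type bonuses
    of RestartQ-UCB proper, and assumes a simple env interface.
    """
    iota = np.log(S * A * H * K / delta)  # log factor inside the bonus
    for k in range(K):
        if k % restart_every == 0:
            # Restart: discard estimates learned under old dynamics.
            Q = np.full((H, S, A), H, dtype=float)  # optimistic initialization
            V = np.zeros((H + 1, S))                # V[H] = 0 at the terminal step
            N = np.zeros((H, S, A), dtype=int)      # visit counts since last restart
        s = env.reset()
        for h in range(H):
            a = int(np.argmax(Q[h, s]))             # act greedily w.r.t. optimistic Q
            s_next, r = env.step(a)                 # assumed interface
            N[h, s, a] += 1
            t = N[h, s, a]
            alpha = (H + 1) / (H + t)               # learning rate from optimistic Q-learning
            bonus = c * np.sqrt(H**3 * iota / t)    # Hoeffding-style exploration bonus
            target = r + V[h + 1, s_next] + bonus
            Q[h, s, a] = (1 - alpha) * Q[h, s, a] + alpha * target
            V[h, s] = min(H, Q[h, s].max())         # clip value estimate at H
            s = s_next
    return Q
```

Restarting trades the bias incurred by drifting dynamics against the sample efficiency lost by discarding data; choosing the restart schedule to balance these two effects is what drives the dynamic regret bound above.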

