Feb. 13, 2024, 5:45 a.m. | Chenlu Ye Wei Xiong Quanquan Gu Tong Zhang

cs.LG updates on arXiv.org

Despite significant interest and progress in reinforcement learning (RL) with adversarial corruption, current works are either confined to the linear setting or incur an undesirable $\tilde{O}(\sqrt{T}\zeta)$ regret bound, where $T$ is the number of rounds and $\zeta$ is the total amount of corruption. In this paper, we consider contextual bandits with general function approximation and propose a computationally efficient algorithm achieving a regret of $\tilde{O}(\sqrt{T}+\zeta)$. The proposed algorithm relies on the recently developed uncertainty-weighted least-squares …
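The core primitive the abstract names, uncertainty-weighted least squares, can be sketched generically: fit a regularized linear model where each sample is weighted inversely to its uncertainty, so that potentially corrupted, high-uncertainty observations contribute less to the estimate. The snippet below is a minimal illustration of weighted ridge regression in this spirit; the function name, weighting scheme, and regularization parameter are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def uncertainty_weighted_ls(X, y, weights, reg=1.0):
    """Weighted ridge regression (illustrative sketch).

    Solves argmin_theta  sum_i w_i * (x_i^T theta - y_i)^2 + reg * ||theta||^2.
    Samples with low weight (high uncertainty / suspected corruption)
    influence the estimate less.
    """
    W = np.diag(weights)
    A = X.T @ W @ X + reg * np.eye(X.shape[1])
    b = X.T @ W @ y
    return np.linalg.solve(A, b)

# Clean linear data: y = X @ [2, 3], plus one corrupted observation.
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0]])
y = np.array([2.0, 3.0, 5.0, 100.0])  # last target is corrupted

# Downweighting the suspect sample keeps the estimate near [2, 3];
# uniform weights would let the corruption drag it far away.
theta = uncertainty_weighted_ls(X, y, weights=np.array([1.0, 1.0, 1.0, 1e-6]),
                                reg=1e-8)
```

Here the weight plays the role the uncertainty quantifier plays in the paper: the less a sample is trusted, the smaller its contribution to the squared-error objective.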

