Minimax Optimal and Computationally Efficient Algorithms for Distributionally Robust Offline Reinforcement Learning
March 15, 2024, 4:41 a.m. | Zhishuai Liu, Pan Xu
cs.LG updates on arXiv.org
Abstract: Distributionally robust offline reinforcement learning (RL), which seeks robust policy training against environment perturbations by modeling dynamics uncertainty, calls for function approximation when facing large state-action spaces. However, the consideration of dynamics uncertainty introduces essential nonlinearity and computational burden, posing unique challenges for analyzing and practically employing function approximation. Focusing on a basic setting where the nominal model and perturbed models are linearly parameterized, we propose minimax optimal and computationally efficient algorithms realizing function approximation …
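To make the setting concrete, below is a minimal sketch of the core operation behind distributionally robust RL: a robust Bellman backup in which nature adversarially perturbs the transition model within an uncertainty set before the value is backed up. The sketch is illustrative only: it uses a small tabular MDP and a total-variation ball, both assumptions for this example, whereas the paper studies linearly parameterized nominal and perturbed models; this is not the paper's algorithm.

```python
import numpy as np

def robust_bellman_backup(P, r, v, rho, gamma=0.9):
    """One distributionally robust Bellman backup on a tabular MDP.

    For each state-action pair, nature moves up to `rho` of probability
    mass (a total-variation ball of radius `rho`, an illustrative choice
    of uncertainty set) from the highest-value next states onto the
    lowest-value next state; the backup is then taken under that
    worst-case transition model.

    P: (S, A, S) nominal transition probabilities
    r: (S, A)    rewards
    v: (S,)      current value estimate
    """
    S, A, _ = P.shape
    q = np.empty((S, A))
    order = np.argsort(v)          # next states sorted worst-first
    worst = order[0]
    for s in range(S):
        for a in range(A):
            p = P[s, a].copy()
            budget = rho
            # Strip mass from the best next states first.
            for j in order[::-1]:
                if j == worst:
                    continue
                take = min(p[j], budget)
                p[j] -= take
                budget -= take
                if budget <= 0.0:
                    break
            # Dump the stripped mass onto the worst next state.
            p[worst] += rho - budget
            q[s, a] = r[s, a] + gamma * (p @ v)
    return q

# Robust value iteration on a random 5-state, 2-action MDP.
rng = np.random.default_rng(0)
S, A = 5, 2
P = rng.dirichlet(np.ones(S), size=(S, A))
r = rng.uniform(size=(S, A))
v = np.zeros(S)
for _ in range(200):
    v = robust_bellman_backup(P, r, v, rho=0.1).max(axis=1)
print(v)  # pessimistic (robust) state values
```

For a TV ball the inner adversary has this simple closed form, which is what keeps tabular robust backups tractable; with function approximation the analogous inner optimization becomes nonlinear in the model parameters, which is the computational burden the abstract refers to.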