April 2, 2024, 7:44 p.m. | Lingxiao Wang, Qi Cai, Zhuoran Yang, Zhaoran Wang

cs.LG updates on arXiv.org

arXiv:2205.13476v2 Announce Type: replace
Abstract: Reinforcement learning in partially observed Markov decision processes (POMDPs) faces two challenges. (i) It often takes the full history to predict the future, which induces a sample complexity that scales exponentially with the horizon. (ii) The observation and state spaces are often continuous, which induces a sample complexity that scales exponentially with the extrinsic dimension. Addressing such challenges requires learning a minimal but sufficient representation of the observation and state histories by exploiting the structure …
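The core idea is that a policy should condition on a compact learned summary of the history rather than on the raw history itself, whose size grows with the horizon. Below is a minimal sketch of that general pattern, not the paper's actual algorithm: a recurrent encoder compresses an observation history into a fixed low-dimensional embedding that a policy then acts on. All names and dimensions (HistoryEncoder, obs_dim, embed_dim, the linear policy head) are illustrative assumptions.

```python
# Sketch only: compress a growing observation history into a fixed,
# low-dimensional embedding so the policy never conditions on the raw
# history. This illustrates the representation-learning idea generically;
# it is not the method proposed in the paper.
import torch
import torch.nn as nn


class HistoryEncoder(nn.Module):
    """Maps an observation history of shape (batch, T, obs_dim)
    to a single embedding of shape (batch, embed_dim)."""

    def __init__(self, obs_dim: int, embed_dim: int):
        super().__init__()
        self.rnn = nn.GRU(obs_dim, embed_dim, batch_first=True)

    def forward(self, history: torch.Tensor) -> torch.Tensor:
        # The final hidden state serves as the summary of the history.
        _, h = self.rnn(history)        # h: (1, batch, embed_dim)
        return h.squeeze(0)             # (batch, embed_dim)


# Usage: the policy sees only the embedding, whose size is fixed
# regardless of how long the history grows.
encoder = HistoryEncoder(obs_dim=16, embed_dim=8)
policy = nn.Linear(8, 4)                # 4 discrete actions, illustrative
obs_history = torch.randn(2, 50, 16)    # batch of 2 histories, 50 steps each
logits = policy(encoder(obs_history))
print(logits.shape)                     # torch.Size([2, 4])
```

If such an embedding is a sufficient statistic of the history for predicting the future, the effective problem dimension is the embedding size rather than the horizon-dependent history length, which is the sample-complexity gain the abstract alludes to.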
