April 2, 2024, 7:44 p.m. | Qi Cai, Zhuoran Yang, Zhaoran Wang

cs.LG updates on arXiv.org

arXiv:2204.09787v3 Announce Type: replace
Abstract: We study reinforcement learning for partially observed Markov decision processes (POMDPs) with infinite observation and state spaces, a setting that remains theoretically underexplored. To this end, we make the first attempt at bridging partial observability and function approximation for a class of POMDPs with a linear structure. Specifically, we propose a reinforcement learning algorithm (Optimistic Exploration via Adversarial Integral Equation, or OP-TENET) that attains an $\epsilon$-optimal policy within $O(1/\epsilon^2)$ episodes. In particular, the sample complexity …
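To make the stated guarantee concrete, here is a brief restatement in standard RL notation; the symbols $N$, $\hat{\pi}$, $\pi^*$, and $V^{\pi}$ (the expected cumulative reward of policy $\pi$) are assumed notation, not taken from the excerpt:

$$
V^{\pi^*} - V^{\hat{\pi}} \le \epsilon
\qquad \text{after } N = O\!\left(\tfrac{1}{\epsilon^2}\right) \text{ episodes.}
$$

Equivalently, the suboptimality of the learned policy decays as $O(1/\sqrt{N})$ in the number of episodes $N$.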
