June 27, 2022, 1:11 a.m. | Yue Wu, Jesús A. De Loera

cs.LG updates on arXiv.org

Recently discovered polyhedral structures of the value function for finite state-action discounted Markov decision processes (MDPs) shed light on the success of reinforcement learning. We investigate the value function polytope in greater detail and characterize the polytope boundary using a hyperplane arrangement. We further show that the value space is a union of finitely many cells of the same hyperplane arrangement and relate it to the polytope of the classical linear programming formulation for MDPs. Inspired by these geometric …
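To make the objects in the abstract concrete, here is a minimal sketch, not taken from the paper: for a toy 2-state, 2-action discounted MDP with made-up transition probabilities and rewards, it computes the value functions of all deterministic policies (candidate vertices of the value function polytope studied above) and solves the classical primal LP for MDPs, one standard form of the linear programming formulation the abstract mentions. The discount factor and the arrays P and r below are assumptions chosen purely for illustration.

import itertools

import numpy as np
from scipy.optimize import linprog

gamma = 0.9                      # discount factor (illustrative choice)
n_states, n_actions = 2, 2

# P[a, s, s'] : transition probabilities; r[s, a] : expected rewards.
# All numbers are arbitrary, made up for illustration.
P = np.array([
    [[0.7, 0.3],
     [0.2, 0.8]],                # action 0
    [[0.99, 0.01],
     [0.99, 0.01]],              # action 1
])
r = np.array([
    [-0.45, -0.1],               # state 0, actions 0 and 1
    [0.5, 0.5],                  # state 1, actions 0 and 1
])

# Value function of each deterministic policy: V^pi = (I - gamma P_pi)^{-1} r_pi.
# Each V^pi is a point in R^{n_states}; the value function polytope is the set of
# value functions of all (possibly stochastic) policies.
for policy in itertools.product(range(n_actions), repeat=n_states):
    P_pi = np.array([P[policy[s], s] for s in range(n_states)])
    r_pi = np.array([r[s, policy[s]] for s in range(n_states)])
    V = np.linalg.solve(np.eye(n_states) - gamma * P_pi, r_pi)
    print(f"policy {policy}: V = {np.round(V, 3)}")

# Classical primal LP for the same MDP: minimize sum_s V(s) subject to
# V(s) >= r(s, a) + gamma * P(.|s, a) . V  for every state-action pair.
A_ub, b_ub = [], []
for a in range(n_actions):
    for s in range(n_states):
        A_ub.append(gamma * P[a, s] - np.eye(n_states)[s])
        b_ub.append(-r[s, a])
res = linprog(c=np.ones(n_states), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(None, None)] * n_states)
print("LP optimal value function V* =", np.round(res.x, 3))

Each printed V^pi is a point in the plane; plotting them for many random MDPs is a common way to visualize the value function polytope, and the LP solution coincides with the value function of the best deterministic policy.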

Tags: arxiv, decision, iteration, lg, markov, policy, processes
