Web: http://arxiv.org/abs/2202.06450

May 4, 2022, 1:12 a.m. | Jiawei Huang, Jinglin Chen, Li Zhao, Tao Qin, Nan Jiang, Tie-Yan Liu

cs.LG updates on arXiv.org

Deployment efficiency is an important criterion for many real-world applications of reinforcement learning (RL). Despite the community's increasing interest, the problem still lacks a formal theoretical formulation. In this paper, we propose such a formulation for deployment-efficient RL (DE-RL) from an "optimization with constraints" perspective: we are interested in exploring an MDP and obtaining a near-optimal policy within minimal deployment complexity, whereas in each deployment the policy can sample a large batch of data. Using finite-horizon linear MDPs as …
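As a rough illustration of the DE-RL interaction protocol described in the abstract (not the paper's linear-MDP algorithms), the toy Python sketch below shows a learner that is allowed only K policy deployments, each of which collects a large batch of trajectories before the policy may change. The environment, policy, and update rule here are hypothetical placeholders chosen only to make the loop runnable.

```python
import random

# Illustrative sketch of the deployment-efficient RL (DE-RL) protocol:
# the learner switches its policy only K times (the "deployment
# complexity"), and each deployed policy gathers a large batch of
# finite-horizon trajectories before the next update.

class ToyEnv:
    """Trivial 2-state, 2-action environment (placeholder, not from the paper)."""
    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):
        reward = 1.0 if (self.state == 1 and action == 1) else 0.0
        self.state = (self.state + action) % 2
        return self.state, reward

def random_policy(state):
    # Placeholder behavior policy.
    return random.choice([0, 1])

def update_policy(policy, batch):
    # Placeholder "between-deployments" update: a real DE-RL algorithm
    # would fit a new exploratory or near-optimal policy from the batch.
    return policy

def run_de_rl(env, policy, K=5, N=1000, horizon=10):
    """Run K deployments; each collects N length-`horizon` trajectories."""
    for _ in range(K):                      # deployment complexity = K
        batch = []
        for _ in range(N):                  # large per-deployment batch
            state, traj = env.reset(), []
            for _ in range(horizon):        # finite-horizon episode
                action = policy(state)
                next_state, reward = env.step(action)
                traj.append((state, action, reward, next_state))
                state = next_state
            batch.append(traj)
        policy = update_policy(policy, batch)  # policy changes only here
    return policy

if __name__ == "__main__":
    final_policy = run_de_rl(ToyEnv(), random_policy)
```

The point of the sketch is the constraint structure: data collection is cheap within a deployment (large N), while policy switches (K) are the scarce resource the paper's formulation seeks to minimize.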

Tags: arxiv, deployment, learning, reinforcement, reinforcement learning
