Distributional Hamilton-Jacobi-Bellman Equations for Continuous-Time Reinforcement Learning. (arXiv:2205.12184v2 [cs.LG] UPDATED)
Web: http://arxiv.org/abs/2205.12184
June 20, 2022, 1:11 a.m. | Harley Wiltzer, David Meger, Marc G. Bellemare
cs.LG updates on arXiv.org
Continuous-time reinforcement learning offers an appealing formalism for
describing control problems in which the passage of time is not naturally
divided into discrete increments. Here we consider the problem of predicting
the distribution of returns obtained by an agent interacting in a
continuous-time, stochastic environment. Accurate return predictions have
proven useful for determining optimal policies for risk-sensitive control,
learning state representations, multiagent coordination, and more. We begin by
establishing the distributional analogue of the Hamilton-Jacobi-Bellman (HJB)
equation for Itô diffusions …
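
For context, the classical (expected-value) HJB equation for an Itô diffusion reads as follows. This is a standard background sketch in generic notation (drift \mu, diffusion \sigma, reward r, discount rate \beta are placeholders, not necessarily the paper's exact setup):

\[
dX_t = \mu(X_t, a_t)\,dt + \sigma(X_t, a_t)\,dW_t,
\qquad
V(x) = \sup_{\pi}\, \mathbb{E}\!\left[\int_0^\infty e^{-\beta t}\, r(X_t, a_t)\,dt \,\middle|\, X_0 = x\right],
\]
\[
\beta V(x) = \sup_{a}\, \Big[\, r(x, a) + \mu(x, a)^\top \nabla V(x)
  + \tfrac{1}{2}\,\mathrm{tr}\!\big(\sigma(x, a)\,\sigma(x, a)^\top \nabla^2 V(x)\big) \Big].
\]

Roughly, the distributional analogue established in the paper characterizes the full law of the random return rather than its expectation V(x); see the paper for the precise formulation.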