Upside-Down Reinforcement Learning Can Diverge in Stochastic Environments With Episodic Resets. (arXiv:2205.06595v1 [stat.ML])
May 16, 2022, 1:11 a.m. | Miroslav Štrupl, Francesco Faccio, Dylan R. Ashley, Jürgen Schmidhuber, Rupesh Kumar Srivastava
cs.LG updates on arXiv.org arxiv.org
Upside-Down Reinforcement Learning (UDRL) is an approach to solving RL problems that does not require value functions and uses only supervised learning, where the targets for given inputs in a dataset do not change over time. Ghosh et al. proved that Goal-Conditioned Supervised Learning (GCSL) -- which can be viewed as a simplified version of UDRL -- optimizes a lower bound on goal-reaching performance. This raises expectations that such algorithms may enjoy guaranteed convergence to the optimal policy in arbitrary …
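To make the abstract's setup concrete, here is a minimal sketch of the GCSL-style hindsight-relabeling idea it refers to: collect trajectories, relabel each one with the goal it actually reached, and train the policy by supervised learning on the resulting (state, goal) → action pairs. The toy chain environment, horizon, and helper names below are illustrative assumptions, not code from the paper.

```python
import random

# Hypothetical toy environment (not from the paper): a 1-D chain of
# states 0..4; actions move left (-1) or right (+1), clamped at the ends.
random.seed(0)
N_STATES = 5
ACTIONS = [-1, +1]

def rollout(policy, start=2, horizon=4):
    """Run one episode; return the visited (state, action) pairs
    and the state actually reached at the end."""
    s, traj = start, []
    for _ in range(horizon):
        a = policy(s)
        traj.append((s, a))
        s = min(N_STATES - 1, max(0, s + a))
    return traj, s

def random_policy(_s):
    return random.choice(ACTIONS)

# Supervised dataset: the target for a given (state, goal) input never
# changes over time -- we just accumulate visitation counts.
counts = {}  # (state, goal, action) -> count

for _ in range(2000):
    traj, reached = rollout(random_policy)
    for s, a in traj:  # hindsight relabel: the reached state is the "goal"
        counts[(s, reached, a)] = counts.get((s, reached, a), 0) + 1

def greedy_policy(state, goal):
    """Imitate the most frequent action seen for this (state, goal) pair."""
    return max(ACTIONS, key=lambda a: counts.get((state, goal, a), 0))

# From state 1, commanding goal 0 should move left; goal 4, right.
print(greedy_policy(1, 0), greedy_policy(1, 4))
```

In a deterministic chain like this the imitated policy behaves sensibly; the paper's point is that in *stochastic* environments with episodic resets, repeating this relabel-and-imitate loop need not converge to the optimal policy.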