Understanding and Preventing Capacity Loss in Reinforcement Learning

Web: http://arxiv.org/abs/2204.09560

May 5, 2022, 1:12 a.m. | Clare Lyle, Mark Rowland, Will Dabney

cs.LG updates on arXiv.org

The reinforcement learning (RL) problem is rife with sources of
non-stationarity, making it a notoriously difficult problem domain for the
application of neural networks. We identify a mechanism by which non-stationary
prediction targets can prevent learning progress in deep RL agents:
capacity loss, whereby networks trained on a sequence of target values
lose their ability to quickly update their predictions over time. We
demonstrate that capacity loss occurs in a range of RL agents and environments,
and is particularly damaging …
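
The phenomenon described in the abstract can be observed in miniature. The sketch below is an illustration, not the paper's code: the architecture, optimizer, hyperparameters, and use of random networks as targets are all arbitrary choices made here. It trains a single MLP against a sequence of randomly drawn target functions with a fixed gradient-step budget per target; capacity loss would show up as the final fitting error trending upward across phases.

```python
# Minimal sketch of measuring capacity loss (illustrative only).
# One MLP is fit to a sequence of random target functions; we record
# how well it fits each target after a fixed training budget.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_mlp():
    return nn.Sequential(nn.Linear(8, 64), nn.ReLU(),
                         nn.Linear(64, 64), nn.ReLU(),
                         nn.Linear(64, 1))

x = torch.randn(512, 8)            # fixed inputs (a fixed state distribution)
learner = make_mlp()
opt = torch.optim.Adam(learner.parameters(), lr=1e-3)

final_losses = []
for phase in range(20):            # sequence of non-stationary targets
    with torch.no_grad():
        y = make_mlp()(x)          # a fresh random target function
    for _ in range(200):           # fixed per-target training budget
        loss = ((learner(x) - y) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    final_losses.append(loss.item())

# Under capacity loss, final_losses trends upward across phases.
print([round(l, 4) for l in final_losses])
```

Here the random target networks stand in for the non-stationary prediction targets that arise from bootstrapping in deep RL.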
