Web: http://arxiv.org/abs/2106.04217

May 9, 2022, 1:11 a.m. | Ghada Sokar, Elena Mocanu, Decebal Constantin Mocanu, Mykola Pechenizkiy, Peter Stone

cs.LG updates on arXiv.org

Deep reinforcement learning (DRL) agents are trained through trial-and-error
interactions with the environment. As a result, dense neural networks require
long training times to achieve good performance, consuming prohibitive
computation and memory resources. Learning efficient DRL agents has recently
received increasing attention, yet current methods focus on accelerating
inference time. In this paper, we introduce for the first time a dynamic sparse
training approach for deep reinforcement learning that accelerates the training
process. The proposed approach trains …

Tags: arxiv, deep learning, reinforcement learning, training
