Web: http://arxiv.org/abs/2206.05266

June 24, 2022, 1:11 a.m. | Xiang Li, Jinghuan Shang, Srijan Das, Michael S. Ryoo

cs.LG updates on arXiv.org

We investigate whether self-supervised learning (SSL) can improve online
reinforcement learning (RL) from pixels. We extend the contrastive
reinforcement learning framework (e.g., CURL) that jointly optimizes SSL and RL
losses and conduct extensive experiments with various self-supervised losses.
Our observations suggest that the existing SSL framework for RL fails to bring
meaningful improvement over baselines that only take advantage of image
augmentation, when the same amount of data and augmentation is used. We further
perform an evolutionary …
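The joint objective in CURL-style methods is typically the RL loss plus a weighted contrastive (InfoNCE) term computed on two augmented views of the same observation. Below is a minimal PyTorch-style sketch of that setup; the names (`encoder`, `momentum_encoder`, `rl_loss_fn`, `W`, `lam`) are hypothetical placeholders, and this illustrates the general framework rather than the paper's exact implementation.

```python
# Minimal sketch (PyTorch) of a CURL-style joint SSL + RL objective.
# All component names are hypothetical; this is an illustration, not the paper's code.
import torch
import torch.nn.functional as F

def curl_contrastive_loss(z_anchor, z_positive, W):
    """InfoNCE-style loss: each sample's query feature should match its own
    key feature (the diagonal) against all other samples in the batch."""
    logits = z_anchor @ W @ z_positive.T                        # (B, B) similarity matrix
    logits = logits - logits.max(dim=1, keepdim=True).values    # numerical stability
    labels = torch.arange(z_anchor.size(0), device=z_anchor.device)
    return F.cross_entropy(logits, labels)

def joint_loss(obs_aug1, obs_aug2, rl_loss_fn, encoder, momentum_encoder, W, lam=1.0):
    """Total objective = RL loss on one augmented view + weighted contrastive
    SSL loss between the two augmented views of the same observations."""
    z_q = encoder(obs_aug1)                  # query features (gradients flow)
    with torch.no_grad():
        z_k = momentum_encoder(obs_aug2)     # key features from a momentum copy (no gradients)
    ssl_loss = curl_contrastive_loss(z_q, z_k, W)
    rl_loss = rl_loss_fn(z_q)                # e.g., Q-learning or actor-critic loss on the features
    return rl_loss + lam * ssl_loss
```

The baseline the paper compares against uses the same augmented observations for the RL loss but drops the contrastive term (i.e., sets the SSL weight to zero), which isolates the contribution of the SSL objective from that of the augmentation itself.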

