May 16, 2022, 1:11 a.m. | Yue Zhao, Chenzhuang Du, Hang Zhao, Tiejun Li

cs.LG updates on arXiv.org arxiv.org

In vision-based reinforcement learning (RL) tasks, it is common to attach
auxiliary tasks with a surrogate self-supervised loss in order to learn more
semantically meaningful representations and improve sample efficiency. However,
abundant information in these self-supervised auxiliary tasks is disregarded,
because the representation-learning part and the decision-making part are kept
separate. To make full use of the information in auxiliary tasks, we present a
simple yet effective idea: employ the self-supervised loss as an intrinsic
reward, called Intrinsically Motivated Self-Supervised learning in
Reinforcement learning …
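The core idea described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `combined_reward` and the scaling coefficient `beta` are assumptions introduced here for clarity.

```python
# Sketch: use a self-supervised auxiliary loss as an intrinsic reward.
# The agent is rewarded both by the environment and by the current
# self-supervised loss on its observation, scaled by a coefficient beta.
# Names (combined_reward, ssl_loss, beta) are illustrative assumptions.

def combined_reward(env_reward: float, ssl_loss: float, beta: float = 0.1) -> float:
    """Return the environment reward augmented with an intrinsic term.

    env_reward: extrinsic reward from the environment.
    ssl_loss:   value of the self-supervised auxiliary loss for the
                current observation (high when the representation is
                still poorly learned, encouraging exploration there).
    beta:       weight of the intrinsic term.
    """
    return env_reward + beta * ssl_loss


# Example: an extrinsic reward of 1.0 plus an auxiliary loss of 2.0
# weighted by beta = 0.5 yields a combined reward of 2.0.
r = combined_reward(1.0, 2.0, beta=0.5)
```

In this sketch, observations whose representations are still hard for the self-supervised task produce a larger intrinsic bonus, which couples the representation-learning signal back into the decision-making loop instead of keeping the two parts separate.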

