Web: http://arxiv.org/abs/2205.00943

May 4, 2022, 1:12 a.m. | Chenyu Sun, Hangwei Qian, Chunyan Miao

cs.LG updates on arXiv.org

In reinforcement learning (RL), learning directly from high-dimensional observations is challenging; data augmentation has recently been shown to remedy this by encoding invariances from raw pixels. Nevertheless, we empirically find that not all samples are equally important, so simply injecting more augmented inputs may instead cause instability in Q-learning. In this paper, we approach this problem systematically by developing a model-agnostic Contrastive-Curiosity-Driven Learning Framework (CCLF), which can fully exploit sample importance and improve learning efficiency in …
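The idea of weighting samples by their importance under augmentation can be illustrated with a minimal sketch. The snippet below is NOT the paper's method; it is a hypothetical toy in which a "curiosity" score is the disagreement between two randomly cropped views of an observation under a fixed encoder, and replay sampling is prioritized by that score instead of being uniform. The crop size, encoder, and buffer are all made-up placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_crop(obs, crop=20):
    """Random-crop augmentation: pad the image, then crop back to its size."""
    h, w = obs.shape
    padded = np.pad(obs, crop // 2, mode="edge")
    y = rng.integers(0, crop)
    x = rng.integers(0, crop)
    return padded[y:y + h, x:x + w]

def curiosity_score(obs, encoder):
    """Disagreement between two augmented views of the same observation.
    High disagreement suggests the sample is 'hard' and worth replaying."""
    v1 = encoder(random_crop(obs))
    v2 = encoder(random_crop(obs))
    return float(np.linalg.norm(v1 - v2))

# Toy linear 'encoder' standing in for a learned CNN encoder (assumption).
W = rng.standard_normal((8, 84 * 84)) / 84.0
encoder = lambda o: W @ o.ravel()

# Toy replay buffer of 84x84 pixel observations.
buffer = [rng.standard_normal((84, 84)) for _ in range(32)]

# Prioritize replay sampling by curiosity rather than sampling uniformly.
scores = np.array([curiosity_score(o, encoder) for o in buffer])
probs = scores / scores.sum()
batch_idx = rng.choice(len(buffer), size=8, replace=False, p=probs)
```

Samples whose augmented views the encoder maps far apart are drawn more often, which is one simple way to "exploit sample importance" rather than treating every augmented input alike.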

