Aug. 31, 2022, 1:11 a.m. | Marco Bagatella, Sammy Christen, Otmar Hilliges

cs.LG updates on arXiv.org

Efficient exploration is a crucial challenge in deep reinforcement learning.
Several methods, such as behavioral priors, are able to leverage offline data
to efficiently accelerate reinforcement learning on complex tasks.
However, if the task at hand deviates excessively from the demonstrated task,
the effectiveness of such methods is limited. In our work, we propose to learn
features from offline data that are shared by a more diverse range of tasks,
such as correlation between actions and directedness. Therefore, …
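
For illustration only, the sketch below shows the general idea of biasing exploration with a prior learned from offline action data, rather than the authors' actual method. All names (BehavioralPrior, explore_action, mix_prob, the stand-in policy and data) are hypothetical, and the Gaussian prior is a deliberately simplistic stand-in for a learned model of action statistics.

```python
# Minimal sketch (not the paper's implementation): mix a behavioral prior
# learned from offline action data into the task policy during exploration.
import numpy as np

class BehavioralPrior:
    """Toy prior over continuous actions, fit to offline action data.

    It captures only action statistics (typical magnitude and direction),
    not task-specific state information, so it can be reused across tasks.
    """
    def __init__(self, offline_actions):
        self.mean = offline_actions.mean(axis=0)
        self.std = offline_actions.std(axis=0) + 1e-6

    def sample(self):
        return np.random.normal(self.mean, self.std)

def explore_action(task_policy, prior, obs, mix_prob=0.5):
    """With probability mix_prob, act from the prior instead of the policy."""
    if np.random.rand() < mix_prob:
        return prior.sample()
    return task_policy(obs)

# Usage with placeholder components.
offline_actions = np.random.uniform(-1, 1, size=(1000, 2))  # stand-in for demonstration data
prior = BehavioralPrior(offline_actions)
task_policy = lambda obs: np.zeros(2)                        # stand-in learned policy
action = explore_action(task_policy, prior, obs=np.zeros(4))
```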

