Sept. 28, 2022, 1:11 a.m. | Mingxi Tan, Andong Tian, Ludovic Denoyer

cs.LG updates on arXiv.org

Existing imitation learning methods mainly focus on making an agent
effectively mimic a demonstrated behavior, but do not address the potential
contradiction between the behavior style and the objective of a task. There is
a general lack of efficient methods that allow an agent to partially imitate a
demonstrated behavior to varying degrees, while completing the main objective
of a task. In this paper, we propose a method called Regularized Soft
Actor-Critic, which formulates the main task and the imitation …
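The abstract is cut off before the formulation is spelled out, but a plausible reading is that the imitation objective enters the Soft Actor-Critic loss as a weighted regularization term alongside the task reward. The sketch below is only an illustration of that idea, not the paper's implementation: the behavior-cloning penalty, the weights alpha and beta, and the function names are assumptions.

    # Hypothetical sketch: SAC-style actor loss plus an imitation regularizer.
    # beta controls how strongly the demonstrated style is imitated;
    # beta = 0 recovers the standard SAC actor objective.
    import torch
    import torch.nn.functional as F

    def regularized_actor_loss(policy, q_net, states, demo_actions,
                               alpha=0.2, beta=0.5):
        dist = policy(states)                      # action distribution pi(.|s)
        actions = dist.rsample()                   # reparameterized sample
        log_prob = dist.log_prob(actions).sum(-1)  # log pi(a|s)

        # Standard SAC actor term: minimize alpha * log pi - Q(s, a)
        sac_term = (alpha * log_prob - q_net(states, actions)).mean()

        # Assumed imitation regularizer: pull sampled actions toward the
        # demonstrated actions for the same states
        imitation_term = F.mse_loss(actions, demo_actions)

        return sac_term + beta * imitation_term

In such a formulation, sweeping beta would let the agent imitate the demonstrated behavior to varying degrees while still optimizing the task objective, which matches the partial-imitation goal described above.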

