Oct. 6, 2022, 1:12 a.m. | Valentin Guillet, Dennis G. Wilson, Carlos Aguilar-Melchor, Emmanuel Rachelson

cs.LG updates on arXiv.org

Although transfer learning is considered to be a milestone in deep
reinforcement learning, the mechanisms behind it are still poorly understood.
In particular, predicting whether knowledge can be transferred between two given
tasks is still an unresolved problem. In this work, we explore the use of
network distillation as a feature extraction method to better understand the
context in which transfer can occur. Notably, we show that distillation does
not prevent knowledge transfer, including when transferring from multiple tasks
to …

Tags: arxiv, consolidation, reinforcement, reinforcement learning, transfer
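
The abstract describes network distillation, where a student network is trained to reproduce the outputs of a trained teacher policy. As a rough illustration only, the sketch below shows one common way policy distillation is set up: minimizing a KL divergence between teacher and student action distributions over a batch of states. The network sizes, the `PolicyNet`/`distill` names, the temperature parameter, and the use of randomly generated states are assumptions for illustration, not the paper's exact method.

```python
# Minimal sketch of policy (network) distillation, assuming a teacher policy
# already trained on a source task and a batch of states collected from it.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PolicyNet(nn.Module):
    """Small MLP mapping observations to action logits (hypothetical architecture)."""
    def __init__(self, obs_dim, n_actions, hidden=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.head = nn.Linear(hidden, n_actions)

    def forward(self, obs):
        return self.head(self.body(obs))

def distill(teacher, student, states, epochs=10, lr=1e-3, temperature=1.0):
    """Train the student to match the teacher's action distribution
    on a fixed batch of states (the distillation dataset)."""
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    for _ in range(epochs):
        with torch.no_grad():
            teacher_logits = teacher(states) / temperature
        student_logits = student(states) / temperature
        # KL divergence between teacher and student action distributions.
        loss = F.kl_div(
            F.log_softmax(student_logits, dim=-1),
            F.softmax(teacher_logits, dim=-1),
            reduction="batchmean",
        )
        opt.zero_grad()
        loss.backward()
        opt.step()
    return student

if __name__ == "__main__":
    obs_dim, n_actions = 8, 4
    teacher = PolicyNet(obs_dim, n_actions)          # stands in for a trained source-task policy
    student = PolicyNet(obs_dim, n_actions, hidden=32)
    states = torch.randn(256, obs_dim)               # placeholder for states from the source task
    distill(teacher, student, states)
```

In a feature-extraction reading of distillation, the student's intermediate layers (the `body` above) end up encoding whatever the teacher relied on, which is one way to probe what is available for transfer to a new task.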
