Web: http://arxiv.org/abs/2201.04122

Jan. 12, 2022, 2:10 a.m. | Vitaly Kurin, Alessandro De Palma, Ilya Kostrikov, Shimon Whiteson, M. Pawan Kumar

cs.LG updates on arXiv.org

Recent multi-task learning research argues against unitary scalarization,
where training simply minimizes the sum of the task losses. Several ad-hoc
multi-task optimization algorithms have instead been proposed, inspired by
various hypotheses about what makes multi-task settings difficult. The majority
of these optimizers require per-task gradients, and introduce significant
memory, runtime, and implementation overhead. We present a theoretical analysis
suggesting that many specialized multi-task optimizers can be interpreted as
forms of regularization. Moreover, we show that, when coupled with standard
regularization and stabilization techniques from single-task learning, unitary
scalarization matches or …
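To make the contrast concrete, here is a minimal sketch (not the authors' code) of unitary scalarization in PyTorch: all task losses are summed into a single scalar objective and optimized with one backward pass, combined with a standard single-task regularizer (weight decay). The two-task architecture, dummy data, and hyperparameters are placeholder assumptions for illustration only.

```python
# Minimal sketch of unitary scalarization for multi-task learning.
# Architecture, data, and hyperparameters below are hypothetical.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Shared trunk with two task-specific heads (assumed setup).
trunk = nn.Sequential(nn.Linear(16, 32), nn.ReLU())
head_a = nn.Linear(32, 1)   # task A: regression
head_b = nn.Linear(32, 4)   # task B: 4-way classification

params = list(trunk.parameters()) + list(head_a.parameters()) + list(head_b.parameters())
# Standard single-task regularization (weight decay) on a stock optimizer.
optimizer = torch.optim.Adam(params, lr=1e-3, weight_decay=1e-4)

x = torch.randn(64, 16)            # shared inputs (dummy data)
y_a = torch.randn(64, 1)           # task A targets
y_b = torch.randint(0, 4, (64,))   # task B targets

for step in range(100):
    features = trunk(x)
    loss_a = nn.functional.mse_loss(head_a(features), y_a)
    loss_b = nn.functional.cross_entropy(head_b(features), y_b)

    # Unitary scalarization: one scalar objective, one backward pass.
    # Specialized multi-task optimizers would instead compute per-task
    # gradients (one backward per task) before combining them, which is
    # where the extra memory, runtime, and implementation cost comes from.
    loss = loss_a + loss_b

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```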

Tags: arxiv, defense, deep learning, multi-task learning
