In Defense of the Unitary Scalarization for Deep Multi-Task Learning. (arXiv:2201.04122v1 [cs.LG])
Jan. 12, 2022, 2:10 a.m. | Vitaly Kurin, Alessandro De Palma, Ilya Kostrikov, Shimon Whiteson, M. Pawan Kumar
cs.LG updates on arXiv.org (arxiv.org)
Recent multi-task learning research argues against unitary scalarization,
where training simply minimizes the sum of the task losses. Several ad-hoc
multi-task optimization algorithms have instead been proposed, inspired by
various hypotheses about what makes multi-task settings difficult. The majority
of these optimizers require per-task gradients, and introduce significant
memory, runtime, and implementation overhead. We present a theoretical analysis
suggesting that many specialized multi-task optimizers can be interpreted as
forms of regularization. Moreover, we show that, when coupled with standard
regularization …
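For concreteness, here is a minimal sketch of what unitary scalarization looks like in a PyTorch-style training step, under assumed toy inputs and a hypothetical two-headed network (the model, data, and hyperparameters are illustrative, not from the paper): the total loss is just the sum of the per-task losses, so one backward pass through the shared parameters suffices and no per-task gradients need to be stored.

```python
import torch
import torch.nn as nn

# Hypothetical two-task model: a shared trunk with one head per task.
class MultiTaskNet(nn.Module):
    def __init__(self, in_dim=16, hidden=32):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.head_a = nn.Linear(hidden, 1)   # task A: regression
        self.head_b = nn.Linear(hidden, 3)   # task B: 3-way classification

    def forward(self, x):
        z = self.trunk(x)
        return self.head_a(z), self.head_b(z)

model = MultiTaskNet()
# weight_decay stands in for the "standard regularization" the abstract refers to.
opt = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
mse, ce = nn.MSELoss(), nn.CrossEntropyLoss()

x = torch.randn(8, 16)            # toy batch
y_a = torch.randn(8, 1)           # task A targets
y_b = torch.randint(0, 3, (8,))   # task B targets

pred_a, pred_b = model(x)
# Unitary scalarization: minimize the plain sum of the task losses.
loss = mse(pred_a, y_a) + ce(pred_b, y_b)
opt.zero_grad()
loss.backward()                   # a single backward pass; no per-task gradients kept
opt.step()
```

The specialized multi-task optimizers the abstract contrasts this with would instead compute and store a separate gradient per task loss before combining them, which is the source of the extra memory, runtime, and implementation overhead.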