Jan. 10, 2022, 2:10 a.m. | Quan Feng, Songcan Chen

cs.LG updates on arXiv.org

Multi-task learning (MTL) aims to improve model performance by
transferring and exploiting knowledge that is common across tasks. Existing
MTL work focuses mainly on the scenario in which the label sets of the
multiple tasks (MTs) are the same, so that they can be exploited for learning
across the tasks. Few works, however, explore the scenario in which each task
has only a small number of training samples and the label sets are only
partially overlapping, or not overlapping at all. Learning such MTs is …
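The truncated abstract does not reveal the paper's own method; as a generic illustration of the setting it describes, the sketch below implements standard hard parameter sharing: a shared encoder with one classification head per task, so each task can carry its own label set, overlapping with the others or not. All names, dimensions, and the toy training loop are hypothetical, not taken from the paper.

```python
import torch
import torch.nn as nn

class SharedEncoderMTL(nn.Module):
    """Generic hard-parameter-sharing MTL baseline (not the paper's method).

    One shared encoder carries the knowledge common to all tasks; one
    linear head per task maps to that task's own label set, so the label
    sets may differ in size and need not overlap.
    """

    def __init__(self, in_dim, hidden_dim, num_classes_per_task):
        super().__init__()
        # Shared trunk: the cross-task knowledge lives here.
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
        )
        # Task-specific heads: one per task, sized to that task's label set.
        self.heads = nn.ModuleList(
            [nn.Linear(hidden_dim, c) for c in num_classes_per_task]
        )

    def forward(self, x, task_id):
        return self.heads[task_id](self.encoder(x))

# Toy usage: three tasks with label sets of different sizes (4, 3, and 5
# classes) and only 8 samples each, mimicking the few-sample MT setting.
model = SharedEncoderMTL(in_dim=16, hidden_dim=32,
                         num_classes_per_task=[4, 3, 5])
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for task_id, n_classes in enumerate([4, 3, 5]):
    x = torch.randn(8, 16)                 # a small batch for this task
    y = torch.randint(0, n_classes, (8,))  # labels from this task's own set
    loss = loss_fn(model(x, task_id), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```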
