Oct. 28, 2022, 1:12 a.m. | Juan Cervino, Juan Andres Bazerque, Miguel Calvo-Fullana, Alejandro Ribeiro

cs.LG updates on arXiv.org

Multi-task learning aims to acquire a set of functions, either regressors or
classifiers, that perform well for diverse tasks. At its core, the idea behind
multi-task learning is to exploit the intrinsic similarity across data sources
to aid in the learning process for each individual domain. In this paper, we
draw intuition from the two extreme learning scenarios -- a single function for
all tasks, and a task-specific function that ignores dependencies on the other
tasks -- to propose a bias-variance …
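The truncated abstract does not show the paper's actual constrained formulation, but the two extremes it names can be sketched with a simple, hypothetical penalty relaxation: each task keeps its own weight vector w_t, tied to the shared mean w_bar by a penalty lam * ||w_t - w_bar||^2. With lam = 0 the tasks are fully independent; as lam grows, all tasks collapse onto a single shared function. The function `multitask_ridge` and its alternating solver below are illustrative assumptions, not the authors' method.

```python
# Minimal sketch (assumed, not the paper's algorithm): interpolating
# between task-specific regressors (lam = 0) and one shared regressor
# for all tasks (lam -> infinity) via a proximity penalty.

import numpy as np

def multitask_ridge(Xs, ys, lam, n_iters=100):
    """Alternately solve per-task least squares with a penalty
    lam * ||w_t - w_bar||^2 tying each task weight w_t to the mean w_bar."""
    d = Xs[0].shape[1]
    ws = [np.zeros(d) for _ in Xs]
    for _ in range(n_iters):
        w_bar = np.mean(ws, axis=0)  # shared "consensus" function
        for t, (X, y) in enumerate(zip(Xs, ys)):
            # Ridge-style normal equations:
            # (X^T X + lam I) w_t = X^T y + lam w_bar
            A = X.T @ X + lam * np.eye(d)
            b = X.T @ y + lam * w_bar
            ws[t] = np.linalg.solve(A, b)
    return ws

# Two related tasks drawn around a common weight vector.
rng = np.random.default_rng(0)
w_true = rng.normal(size=3)
Xs = [rng.normal(size=(20, 3)) for _ in range(2)]
ys = [X @ (w_true + 0.1 * rng.normal(size=3)) for X in Xs]

for lam in (0.0, 1.0, 100.0):
    ws = multitask_ridge(Xs, ys, lam)
    # The gap between the two task solutions shrinks as lam grows.
    print(lam, np.linalg.norm(ws[0] - ws[1]))
```

Running the loop over lam shows the trade-off the abstract alludes to: small lam yields low-bias, high-variance task-specific fits, while large lam yields the low-variance, potentially biased shared solution.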

arxiv bias bias-variance constraints trade variance
