March 8, 2024, 5:42 a.m. | Jack W. Lindsey, Samuel Lippl

cs.LG updates on arXiv.org

arXiv:2310.02396v2 Announce Type: replace
Abstract: In this work, we investigate the inductive biases that result from learning multiple tasks, either simultaneously (multi-task learning, MTL) or sequentially (pretraining and subsequent finetuning, PT+FT). In the simplified setting of two-layer diagonal linear networks trained with gradient descent, we apply prior theoretical results to describe novel implicit regularization penalties associated with MTL and PT+FT, both of which incentivize feature sharing between tasks and sparsity in learned task-specific features. Notably, these results imply that during …
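
To make the setting concrete, below is a minimal numerical sketch (not from the paper) of a two-layer diagonal linear network, f(x) = (u * v)^T x, trained with gradient descent on two related regression tasks, once jointly (MTL) and once sequentially (PT+FT). The particular sharing scheme in the MTL branch (shared u, task-specific v), the choice to finetune all parameters in PT+FT, and all data and hyperparameters are illustrative assumptions, not the paper's exact setup.

```python
# Hypothetical toy sketch of the setting in the abstract: two-layer diagonal
# linear networks f(x) = (u * v)^T x trained by gradient descent on two
# related sparse regression tasks, either simultaneously (MTL) or
# sequentially (PT+FT). Parameter-sharing scheme and hyperparameters are
# illustrative assumptions, not the paper's exact parameterization.
import numpy as np

rng = np.random.default_rng(0)
d, n = 20, 10                               # underdetermined: fewer samples than features
X1, X2 = rng.normal(size=(n, d)), rng.normal(size=(n, d))
w1 = np.zeros(d); w1[[0, 1, 2, 3]] = 1.0    # tasks share features 0-2 and each
w2 = np.zeros(d); w2[[0, 1, 2, 4]] = 1.0    # has one sparse task-specific feature
y1, y2 = X1 @ w1, X2 @ w2

def step(u, v, X, y, lr):
    """One gradient step on 0.5/n * ||X(u*v) - y||^2 with respect to (u, v)."""
    g = X.T @ (X @ (u * v) - y) / len(y)    # gradient w.r.t. effective weights u*v
    return u - lr * g * v, v - lr * g * u

def mtl(steps=30_000, lr=1e-2, init=1e-3):
    """Multi-task learning: shared u, task-specific v1 and v2, trained jointly."""
    u, v1, v2 = (np.full(d, init) for _ in range(3))
    for _ in range(steps):
        g1 = X1.T @ (X1 @ (u * v1) - y1) / n
        g2 = X2.T @ (X2 @ (u * v2) - y2) / n
        u, v1, v2 = u - lr * (g1 * v1 + g2 * v2), v1 - lr * g1 * u, v2 - lr * g2 * u
    return u * v1, u * v2                   # effective linear predictors per task

def pt_ft(steps=30_000, lr=1e-2, init=1e-3):
    """Pretrain (u, v) on task 1, then finetune the same parameters on task 2."""
    u, v = np.full(d, init), np.full(d, init)
    for _ in range(steps):
        u, v = step(u, v, X1, y1, lr)
    beta_pretrained = u * v
    for _ in range(steps):
        u, v = step(u, v, X2, y2, lr)
    return beta_pretrained, u * v

b1, b2 = mtl()
b_pt, b_ft = pt_ft()
print("MTL   task-1 effective weights:", np.round(b1, 2))
print("MTL   task-2 effective weights:", np.round(b2, 2))
print("PT    task-1 effective weights:", np.round(b_pt, 2))
print("PT+FT task-2 effective weights:", np.round(b_ft, 2))
```

With small initialization, gradient descent in this diagonal parameterization is known to bias the effective weights u * v toward sparse solutions; the abstract's implicit-regularization penalties for MTL and PT+FT describe how that bias interacts with feature sharing across the two tasks.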

