Implicit regularization of multi-task learning and finetuning in overparameterized neural networks
March 8, 2024, 5:42 a.m. | Jack W. Lindsey, Samuel Lippl
cs.LG updates on arXiv.org arxiv.org
Abstract: In this work, we investigate the inductive biases that result from learning multiple tasks, either simultaneously (multi-task learning, MTL) or sequentially (pretraining and subsequent finetuning, PT+FT). In the simplified setting of two-layer diagonal linear networks trained with gradient descent, we apply prior theoretical results to describe novel implicit regularization penalties associated with MTL and PT+FT, both of which incentivize feature sharing between tasks and sparsity in learned task-specific features. Notably, these results imply that during …
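To make the abstract's setting concrete, below is a minimal illustrative sketch (an assumption on my part, not the authors' code) of the two-layer diagonal linear network it describes: the model computes f(x) = sum_i u_i * v_i * x_i, so the effective weights are beta_i = u_i * v_i, and gradient descent on (u, v) from a small initialization is known to carry an implicit sparsity-promoting (roughly L1-like) bias on beta. The data, initialization scale, learning rate, and step count here are arbitrary choices for demonstration.

    import numpy as np

    rng = np.random.default_rng(0)
    d, n = 20, 100
    X = rng.normal(size=(n, d))
    beta_true = np.zeros(d)
    beta_true[:3] = 1.0                  # sparse ground-truth weights
    y = X @ beta_true

    alpha = 1e-3                         # small initialization scale
    u = np.full(d, alpha)
    v = np.full(d, alpha)
    lr = 0.01

    for _ in range(20_000):
        beta = u * v                     # effective weights of the diagonal net
        resid = X @ beta - y
        grad_beta = X.T @ resid / n      # gradient of 1/(2n) * ||X beta - y||^2 w.r.t. beta
        # chain rule through the u*v parameterization
        u, v = u - lr * grad_beta * v, v - lr * grad_beta * u

    print(np.round(u * v, 3))            # recovered effective weights, approximately sparse

Despite never adding an explicit penalty, the recovered u * v concentrates on the three true nonzero coordinates; the paper's contribution is characterizing the analogous implicit penalties that arise when two such tasks are trained jointly (MTL) or in sequence (PT+FT).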