April 1, 2024, 4:42 a.m. | Ahmed Agiza, Marina Neseem, Sherief Reda

cs.LG updates on arXiv.org

arXiv:2403.20320v1 Announce Type: cross
Abstract: Adapting models pre-trained on large-scale datasets to a variety of downstream tasks is a common strategy in deep learning. Consequently, parameter-efficient fine-tuning methods have emerged as a promising way to adapt pre-trained models to different tasks while training only a minimal number of parameters. While most of these methods are designed for single-task adaptation, parameter-efficient training in Multi-Task Learning (MTL) architectures is still unexplored. In this paper, we introduce MTLoRA, a novel framework for parameter-efficient …
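The excerpt cuts off before the MTLoRA details, so none of the authors' code appears here. As background for the abstract's setting, below is a minimal sketch of the single-task low-rank adaptation (LoRA) idea that parameter-efficient fine-tuning methods of this kind build on, written in PyTorch. The class name `LoRALinear` and the hyperparameters `r` and `alpha` are illustrative assumptions, not the paper's implementation.

```python
# Minimal LoRA sketch (background only; NOT the authors' MTLoRA code).
# A frozen pre-trained linear layer is augmented with a trainable
# low-rank update: y = W x + (alpha / r) * B(A(x)).
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # freeze pre-trained weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        self.scaling = alpha / r
        # Low-rank factors: A projects down to rank r, B projects back up.
        self.lora_A = nn.Linear(base.in_features, r, bias=False)
        self.lora_B = nn.Linear(r, base.out_features, bias=False)
        nn.init.normal_(self.lora_A.weight, std=0.02)
        nn.init.zeros_(self.lora_B.weight)  # adapted model starts at the base model

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * self.lora_B(self.lora_A(x))


if __name__ == "__main__":
    layer = LoRALinear(nn.Linear(512, 512), r=8)
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    total = sum(p.numel() for p in layer.parameters())
    print(f"trainable: {trainable} / {total}")  # ~8k of ~271k parameters
```

Freezing W and training only the rank-r factors is what makes the adaptation parameter-efficient: the trainable count scales as r(d_in + d_out) rather than d_in × d_out.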

Tags: cs.AI, cs.CV, cs.LG, deep learning, fine-tuning, low-rank adaptation, multi-task learning, pre-trained models
