MTLoRA: A Low-Rank Adaptation Approach for Efficient Multi-Task Learning
April 1, 2024, 4:42 a.m. | Ahmed Agiza, Marina Neseem, Sherief Reda
cs.LG updates on arXiv.org
Abstract: Adapting models pre-trained on large-scale datasets to a variety of downstream tasks is a common strategy in deep learning. Consequently, parameter-efficient fine-tuning methods have emerged as a promising way to adapt pre-trained models to different tasks while training only a minimal number of parameters. While most of these methods are designed for single-task adaptation, parameter-efficient training in Multi-Task Learning (MTL) architectures is still unexplored. In this paper, we introduce MTLoRA, a novel framework for parameter-efficient …
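For context, the abstract builds on low-rank adaptation (LoRA), in which a pre-trained weight matrix is frozen and only a small trainable low-rank update is learned. The following is a minimal sketch of that general idea in PyTorch, not MTLoRA's actual architecture; the class name, the rank and alpha defaults, and the init scheme are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer plus a trainable low-rank update.

    Sketch of generic LoRA-style adaptation; names and defaults are
    illustrative assumptions, not MTLoRA's actual interface.
    """

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pre-trained weights stay frozen
        # Low-rank factors: effective weight is W + (alpha / rank) * B @ A
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus scaled rank-r correction; only A and B
        # (a tiny fraction of the parameters) receive gradients.
        update = x @ self.lora_a.T @ self.lora_b.T
        return self.base(x) + self.scaling * update

# Example: wrap a 768-dim projection. Only ~12k of ~590k parameters
# are trainable, which is the parameter-efficiency the abstract refers to.
layer = LoRALinear(nn.Linear(768, 768), rank=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
```

In a multi-task setting along the lines the abstract describes, one could presumably attach separate low-rank factors per task while sharing the frozen base weights, keeping the per-task trainable parameter count small.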
More from arxiv.org / cs.LG updates on arXiv.org
Sliced Wasserstein with Random-Path Projecting Directions
1 day, 11 hours ago | arxiv.org
Learning Extrinsic Dexterity with Parameterized Manipulation Primitives
1 day, 11 hours ago | arxiv.org
The Un-Kidnappable Robot: Acoustic Localization of Sneaking People
1 day, 11 hours ago | arxiv.org
Jobs in AI, ML, Big Data
Data Engineer
@ Lemon.io | Remote: Europe, LATAM, Canada, UK, Asia, Oceania
Artificial Intelligence – Bioinformatic Expert
@ University of Texas Medical Branch | Galveston, TX
Lead Developer (AI)
@ Cere Network | San Francisco, US
Research Engineer
@ Allora Labs | Remote
Ecosystem Manager
@ Allora Labs | Remote
Founding AI Engineer, Agents
@ Occam AI | New York