ExPLoRA: Parameter-Efficient Extended Pre-Training to Adapt Vision Transformers under Domain Shifts
June 19, 2024, 2:44 a.m. | Samar Khanna, Medhanie Irgau, David B. Lobell, Stefano Ermon
cs.CV updates on arXiv.org (arxiv.org)
Abstract: Parameter-efficient fine-tuning (PEFT) techniques such as low-rank adaptation (LoRA) can effectively adapt large pre-trained foundation models to downstream tasks using only a small fraction (0.1%-10%) of the original trainable weights. An under-explored question in PEFT is extending the pre-training phase without supervised labels; that is, can we adapt a pre-trained foundation model to a new domain via efficient self-supervised pre-training on that new domain? In this work, we introduce ExPLoRA, a highly effective technique …
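
For intuition, the sketch below shows a generic LoRA-style adapter in PyTorch: a frozen pre-trained linear layer (such as a ViT attention projection) is augmented with a trainable low-rank update, so only a small fraction of the parameters is trained. This is a minimal illustration of the low-rank adaptation idea the abstract refers to, not the paper's ExPLoRA implementation; the LoRALinear class, rank, and alpha values are assumptions chosen for the example.

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer augmented with a trainable low-rank update (LoRA-style)."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pre-trained weights
        # Trainable low-rank factors: effective weight is W + (alpha / rank) * B @ A
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen base output plus the scaled low-rank correction
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)


if __name__ == "__main__":
    # Wrap a 768-dim projection (typical ViT hidden size) and count trainable parameters
    layer = LoRALinear(nn.Linear(768, 768), rank=8)
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    total = sum(p.numel() for p in layer.parameters())
    print(f"trainable: {trainable} / {total} ({100 * trainable / total:.2f}%)")

Running this prints roughly 2% trainable parameters, consistent with the 0.1%-10% range quoted in the abstract; in a self-supervised extended pre-training setup, such adapters would be trained with an unsupervised objective on the new domain rather than with labels.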