March 15, 2024, 4:45 a.m. | Tingyu Qu, Tinne Tuytelaars, Marie-Francine Moens

cs.CV updates on arXiv.org

arXiv:2403.09377v1 Announce Type: new
Abstract: Mainstream parameter-efficient fine-tuning (PEFT) methods, such as LoRA or Adapter, project a model's hidden states to a lower dimension, allowing pre-trained models to adapt to new data through this low-rank bottleneck. However, PEFT tasks involving multiple modalities, like vision-language (VL) tasks, require not only adaptation to new data but also learning the relationship between different modalities. Targeting VL PEFT tasks, we propose a family of operations, called routing functions, to enhance VL alignment in …
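To make the low-rank bottleneck the abstract refers to concrete, here is a minimal PyTorch sketch of a LoRA-style adapter. The class name `LoRABottleneck`, the rank and alpha values, and the element-wise gating of text states by a projected visual feature are all illustrative assumptions; the abstract is truncated, so this is not the paper's actual routing functions, only a plausible picture of where such operations would act.

```python
import torch
import torch.nn as nn
from typing import Optional

class LoRABottleneck(nn.Module):
    """Frozen linear layer with a trainable low-rank bypass (LoRA-style).

    The optional `context` argument sketches how a second modality might be
    mixed inside the low-rank space; the element-wise product used here is a
    hypothetical placeholder, not the paper's routing functions.
    """

    def __init__(self, dim: int, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = nn.Linear(dim, dim)
        self.base.weight.requires_grad_(False)        # pre-trained weight stays frozen
        self.base.bias.requires_grad_(False)
        self.down = nn.Linear(dim, rank, bias=False)  # project hidden states down
        self.up = nn.Linear(rank, dim, bias=False)    # project back to model dim
        nn.init.zeros_(self.up.weight)                # bypass starts as a no-op
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor, context: Optional[torch.Tensor] = None) -> torch.Tensor:
        h = self.down(x)                              # low-rank bottleneck
        if context is not None:
            # Hypothetical cross-modal mixing inside the bottleneck:
            # gate the text representation with a projected visual feature.
            h = h * self.down(context)
        return self.base(x) + self.up(h) * self.scaling

# Toy usage: text hidden states gated by one pooled visual token per example.
text = torch.randn(2, 10, 768)            # (batch, seq, dim)
image = torch.randn(2, 1, 768)            # broadcasts across the sequence
layer = LoRABottleneck(dim=768, rank=8)
out = layer(text, context=image)
print(out.shape)                          # torch.Size([2, 10, 768])
```

Zero-initializing the up-projection keeps the adapted model identical to the pre-trained one at the start of fine-tuning, which is the standard LoRA convention; any cross-modal operation placed on `h` works in the cheap rank-dimensional space rather than the full hidden dimension.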

