RESSA: Repair Sparse Vision-Language Models via Sparse Cross-Modality Adaptation
April 4, 2024, 4:41 a.m. | Shwai He, Tianlong Chen
cs.LG updates on arXiv.org (arxiv.org)
Abstract: Vision-Language Models (VLMs), which integrate diverse information from multiple modalities, have shown remarkable success across various tasks. However, deploying VLMs, which comprise large-scale vision and language models, poses challenges in resource-constrained scenarios. While pruning followed by finetuning offers a potential solution for maintaining performance at smaller model sizes, its application to VLMs remains relatively unexplored and raises two main questions: how to distribute sparsity across the different modality-specific models, and how to repair the performance of pruned sparse VLMs. …
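The truncated abstract does not describe RESSA's actual procedure, so as a point of reference, here is a minimal PyTorch sketch of the generic baseline the paper builds on: magnitude pruning applied with separate sparsity ratios to each modality-specific tower, followed by a short finetuning pass that re-applies the sparsity masks after each step. The tower definitions, sparsity ratios, and training loop are all illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

def magnitude_prune(module: nn.Module, sparsity: float) -> dict:
    """Zero the smallest-magnitude weights of every Linear layer in-place;
    return the binary masks so sparsity can be preserved during repair."""
    masks = {}
    for name, layer in module.named_modules():
        if isinstance(layer, nn.Linear):
            w = layer.weight.data
            k = int(w.numel() * sparsity)
            if k == 0:
                continue
            threshold = w.abs().flatten().kthvalue(k).values
            mask = (w.abs() > threshold).float()
            w.mul_(mask)
            masks[name] = mask
    return masks

# Toy stand-ins for a VLM's modality-specific towers (purely illustrative).
vision_tower = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 64))
language_tower = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 64))

# Question 1: distribute sparsity across modalities (ratios chosen arbitrarily).
vision_masks = magnitude_prune(vision_tower, sparsity=0.5)
language_masks = magnitude_prune(language_tower, sparsity=0.7)

# Question 2: "repair" the pruned model with a brief finetuning pass,
# re-applying the masks after each step so the weights stay sparse.
params = list(vision_tower.parameters()) + list(language_tower.parameters())
optimizer = torch.optim.AdamW(params, lr=1e-4)
x, target = torch.randn(8, 64), torch.randn(8, 64)  # dummy batch
for _ in range(10):
    loss = nn.functional.mse_loss(language_tower(vision_tower(x)), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        for tower, masks in ((vision_tower, vision_masks),
                             (language_tower, language_masks)):
            for name, layer in tower.named_modules():
                if name in masks:
                    layer.weight.mul_(masks[name])
```

In practice the per-modality ratios matter because the vision and language towers differ in size and redundancy, which is exactly the sparsity-distribution question the abstract raises.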