Dynamic Tuning Towards Parameter and Inference Efficiency for ViT Adaptation
March 19, 2024, 4:49 a.m. | Wangbo Zhao, Jiasheng Tang, Yizeng Han, Yibing Song, Kai Wang, Gao Huang, Fan Wang, Yang You
cs.CV updates on arXiv.org
Abstract: Existing parameter-efficient fine-tuning (PEFT) methods have achieved significant success in adapting vision transformers (ViTs) by improving parameter efficiency. However, enhancing inference efficiency during adaptation remains underexplored. This limits the broader application of pre-trained ViT models, especially when the model is computationally intensive. In this paper, we propose Dynamic Tuning (DyT), a novel approach that improves both parameter and inference efficiency for ViT adaptation. Specifically, besides using lightweight adapter modules, we propose …
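The abstract is truncated before DyT's specifics, but the "lightweight adapter modules" it mentions are a standard PEFT building block: a small residual bottleneck inserted into a frozen backbone, so only a few extra parameters are trained. The sketch below illustrates that general idea with assumed dimensions (a ViT-Base hidden size of 768 and a bottleneck of 64); it is not the paper's actual DyT design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed sizes: ViT-Base hidden width and a typical adapter bottleneck.
d_model, d_bottleneck = 768, 64

# Only these two small matrices would be trained; the backbone stays frozen.
W_down = rng.standard_normal((d_model, d_bottleneck)) * 0.02
W_up = np.zeros((d_bottleneck, d_model))  # zero-init: adapter starts as an identity map

def adapter(x: np.ndarray) -> np.ndarray:
    """Residual bottleneck adapter: x + ReLU(x W_down) W_up."""
    h = np.maximum(x @ W_down, 0.0)  # down-project, then ReLU
    return x + h @ W_up              # up-project and add the residual

# One ViT token sequence: CLS token + 14x14 patch tokens.
tokens = rng.standard_normal((197, d_model))
out = adapter(tokens)
print(out.shape)                 # (197, 768)
print(np.allclose(out, tokens))  # True: zero-init up-projection leaves inputs unchanged
```

The bottleneck here trains roughly 2 x 768 x 64 ≈ 98K parameters per adapter instead of the full backbone; the zero-initialized up-projection is a common trick so adaptation starts from the pre-trained model's behavior.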