May 14, 2024, 4:47 a.m. | Youngeun Kim, Yuhang Li, Abhishek Moitra, Ruokai Yin, Priyadarshini Panda

cs.CV updates on arXiv.org

arXiv:2305.17223v2 Announce Type: replace
Abstract: Owing to growing interest in adapting models on resource-constrained edge devices, parameter-efficient transfer learning has been widely explored. Among various methods, Visual Prompt Tuning (VPT), which prepends learnable prompts to the input space, achieves fine-tuning performance competitive with training the full set of network parameters. However, VPT increases the number of input tokens, incurring additional computational overhead. In this paper, we analyze the impact of the number of prompts on fine-tuning performance and the self-attention operation in a vision …
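To make the mechanism concrete, below is a minimal PyTorch sketch of shallow VPT: learnable prompt tokens are prepended to a frozen ViT's token sequence, so only the prompts and the classification head are trained. The class name VPTWrapper, the vit_encoder argument, and the dimensions (embed_dim, num_prompts, num_classes) are illustrative assumptions, not the authors' implementation; it only illustrates why adding prompts lengthens the token sequence that self-attention must process.

import torch
import torch.nn as nn

class VPTWrapper(nn.Module):
    # Hypothetical sketch of shallow Visual Prompt Tuning: the pretrained
    # ViT encoder is frozen, and only prompt tokens plus a linear head train.
    def __init__(self, vit_encoder, embed_dim=768, num_prompts=10, num_classes=100):
        super().__init__()
        self.encoder = vit_encoder
        for p in self.encoder.parameters():
            p.requires_grad = False  # backbone stays frozen
        # The prompts are the only new parameters besides the head.
        self.prompts = nn.Parameter(torch.randn(1, num_prompts, embed_dim) * 0.02)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, tokens):
        # tokens: (B, N, D) patch embeddings, with the [CLS] token at index 0.
        B = tokens.shape[0]
        prompts = self.prompts.expand(B, -1, -1)
        # Prepending prompts grows the sequence from N to N + num_prompts,
        # which is the source of VPT's extra self-attention cost.
        x = torch.cat([prompts, tokens], dim=1)
        x = self.encoder(x)
        cls = x[:, self.prompts.shape[1]]  # original [CLS] token, now shifted
        return self.head(cls)

Because self-attention scales roughly quadratically with sequence length, each added prompt token raises the per-layer cost, which is the computational overhead the abstract refers to.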

