March 27, 2024, 4:47 a.m. | Yijin Huang, Pujin Cheng, Roger Tam, Xiaoying Tang

cs.CV updates on arXiv.org

arXiv:2403.07576v2 Announce Type: replace
Abstract: Parameter-efficient fine-tuning (PEFT) has been proposed as a cost-effective way to transfer pre-trained models to downstream tasks, avoiding the high cost of updating entire large-scale pre-trained models (LPMs). In this work, we present Fine-grained Prompt Tuning (FPT), a novel PEFT method for medical image classification. FPT significantly reduces memory consumption compared to other PEFT methods, especially in high-resolution contexts. To achieve this, we first freeze the weights of the LPM and construct a learnable lightweight side …
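The truncated abstract describes the core PEFT pattern: freeze the large pre-trained model and train only a small learnable module alongside it. Below is a minimal PyTorch sketch of that general pattern, not of FPT itself; the `SideNetwork` class, its dimensions, the feature-fusion step, and the ResNet-50 backbone are illustrative assumptions, since the paper's fine-grained prompt and memory-saving design are only partially visible in this excerpt.

```python
# Minimal sketch of the frozen-backbone + lightweight-side-network PEFT
# pattern the abstract describes. Illustrative only: the actual FPT
# architecture is specified in arXiv:2403.07576.
import torch
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights

class SideNetwork(nn.Module):
    """Hypothetical lightweight side network; only its parameters are trained."""
    def __init__(self, in_dim: int = 2048, hidden_dim: int = 64, num_classes: int = 5):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.proj(feats)

# Frozen large pre-trained model (LPM): no gradients are tracked for it,
# so neither optimizer state nor backbone activation gradients are stored.
lpm = resnet50(weights=ResNet50_Weights.DEFAULT)
lpm.fc = nn.Identity()          # expose the 2048-d pooled features
for p in lpm.parameters():
    p.requires_grad = False
lpm.eval()

side = SideNetwork()             # the only trainable parameters
optimizer = torch.optim.AdamW(side.parameters(), lr=1e-3)

x = torch.randn(2, 3, 224, 224)  # dummy image batch
y = torch.tensor([0, 3])         # dummy class labels

with torch.no_grad():            # forward through the frozen backbone
    feats = lpm(x)
logits = side(feats)
loss = nn.functional.cross_entropy(logits, y)
optimizer.zero_grad()
loss.backward()                  # gradients flow only through the side network
optimizer.step()
```

Because the frozen backbone runs under `torch.no_grad()`, none of its activations are retained for backpropagation, which is the kind of memory saving that matters most at the high input resolutions the abstract highlights for medical images.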
