April 19, 2024, 4:47 a.m. | Nakyeong Yang, Junseok Kim, Jiwon Moon, Yunah Jang, Kyomin Jung

cs.CL updates on arXiv.org

arXiv:2404.11916v1 Announce Type: new
Abstract: Prompt-tuning methods have shown comparable performance as parameter-efficient fine-tuning (PEFT) methods in various natural language understanding tasks. However, existing prompt tuning methods still utilize the entire model architecture; thus, they fail to accelerate inference speed in the application. In this paper, we propose a novel approach called SKIll-localized Prompt tuning (SKIP), which is extremely efficient in inference time. Our method significantly enhances inference efficiency by investigating and utilizing a skill-localized subnetwork in a language model. …
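To make the two ingredients mentioned in the abstract concrete, here is a minimal sketch (not the authors' implementation): soft-prompt tuning over a frozen backbone, plus masking out low-importance neurons so that only a smaller, "skill-localized" subnetwork remains active at inference. The names `SoftPromptModel` and `keep_top_neurons`, the toy backbone, and the importance scores are illustrative assumptions; the paper's actual skill-localization procedure is not reproduced here.

```python
import torch
import torch.nn as nn


class SoftPromptModel(nn.Module):
    """Prompt tuning: a small set of trainable prompt embeddings is prepended
    to the input embeddings while the backbone weights stay frozen."""

    def __init__(self, backbone: nn.Module, embed_dim: int, prompt_len: int = 20):
        super().__init__()
        self.backbone = backbone  # frozen language model (stand-in here)
        for p in self.backbone.parameters():
            p.requires_grad = False
        self.soft_prompt = nn.Parameter(torch.randn(prompt_len, embed_dim) * 0.02)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # input_embeds: (batch, seq_len, embed_dim)
        batch = input_embeds.size(0)
        prompt = self.soft_prompt.unsqueeze(0).expand(batch, -1, -1)
        return self.backbone(torch.cat([prompt, input_embeds], dim=1))


def keep_top_neurons(linear: nn.Linear, importance: torch.Tensor, keep_ratio: float = 0.3):
    """Zero out rows of a layer whose importance score is low, leaving a smaller
    subnetwork active at inference. How SKIP actually scores 'skill' neurons is
    not shown; the scores here are assumed to come from some attribution pass."""
    k = max(1, int(keep_ratio * importance.numel()))
    mask = torch.zeros_like(importance, dtype=torch.bool)
    mask[torch.topk(importance, k).indices] = True
    with torch.no_grad():
        linear.weight[~mask] = 0.0
        if linear.bias is not None:
            linear.bias[~mask] = 0.0
    return mask


# Toy usage with a single linear layer standing in for a real language model.
backbone = nn.Linear(64, 64)
model = SoftPromptModel(backbone, embed_dim=64, prompt_len=4)
out = model(torch.randn(2, 10, 64))        # (2, 14, 64) after prepending the prompt
scores = backbone.weight.abs().sum(dim=1)  # stand-in importance score per output neuron
keep_top_neurons(backbone, scores, keep_ratio=0.3)
```

The design point the sketch illustrates: because the prompt adds only a handful of trainable parameters and the mask removes whole neurons, the forward pass after pruning touches fewer weights, which is where the claimed inference-time savings would come from.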
