DePT: Decomposed Prompt Tuning for Parameter-Efficient Fine-tuning
Feb. 20, 2024, 5:45 a.m. | Zhengxiang Shi, Aldo Lipani
cs.LG updates on arXiv.org
Abstract: Prompt tuning (PT), in which a small number of trainable soft (continuous) prompt vectors is affixed to the input of a language model (LM), has shown promising results across various tasks and models for parameter-efficient fine-tuning (PEFT). PT stands out from other PEFT approaches because it maintains competitive performance with fewer trainable parameters, and its parameter count does not scale up drastically as the model size grows. However, PT introduces additional soft prompt tokens, leading to longer input …
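To make the mechanism concrete, below is a minimal sketch of vanilla prompt tuning in PyTorch. It assumes a Hugging Face-style model that exposes get_input_embeddings() and accepts an inputs_embeds argument; the class name SoftPromptModel and the prompt_length value are illustrative choices, not from the paper, and DePT's own prompt decomposition is not reproduced here.

```python
import torch
import torch.nn as nn

class SoftPromptModel(nn.Module):
    """Vanilla prompt tuning: trainable soft (continuous) prompt vectors
    prepended to the input embeddings of a frozen language model.
    (Illustrative sketch only; DePT decomposes the prompt differently.)"""

    def __init__(self, base_model, prompt_length=20):
        super().__init__()
        self.base_model = base_model
        # Freeze every base-model parameter; only the prompt is trained.
        for p in self.base_model.parameters():
            p.requires_grad = False
        embed_dim = base_model.get_input_embeddings().embedding_dim
        # The small set of trainable soft prompt vectors.
        self.soft_prompt = nn.Parameter(
            torch.randn(prompt_length, embed_dim) * 0.02
        )

    def forward(self, input_ids, attention_mask=None):
        embeds = self.base_model.get_input_embeddings()(input_ids)
        batch = embeds.size(0)
        prompt = self.soft_prompt.unsqueeze(0).expand(batch, -1, -1)
        # Prepending the prompt lengthens the input sequence -- the
        # overhead the abstract points out.
        inputs_embeds = torch.cat([prompt, embeds], dim=1)
        if attention_mask is not None:
            prompt_mask = torch.ones(
                batch, prompt.size(1),
                device=attention_mask.device, dtype=attention_mask.dtype,
            )
            attention_mask = torch.cat([prompt_mask, attention_mask], dim=1)
        return self.base_model(
            inputs_embeds=inputs_embeds, attention_mask=attention_mask
        )
```

Only self.soft_prompt receives gradients, which is what makes PT parameter-efficient; the longer concatenated sequence is the compute and memory cost that the abstract highlights and that DePT sets out to reduce.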
More from arxiv.org / cs.LG updates on arXiv.org
Trainwreck: A damaging adversarial attack on image classifiers
1 day, 13 hours ago
arxiv.org
Fast Controllable Diffusion Models for Undersampled MRI Reconstruction
1 day, 13 hours ago
arxiv.org
Jobs in AI, ML, Big Data
Senior Machine Learning Engineer
@ GPTZero | Toronto, Canada
Software Engineer III - Full Stack Developer - ModelOps, MLOps
@ JPMorgan Chase & Co. | NY, United States
Senior Lead Software Engineer - Full Stack Senior Developer - ModelOps, MLOps
@ JPMorgan Chase & Co. | NY, United States
Research Scientist (m/f/d) - Numerical Simulation of Laser-Matter Interaction
@ Fraunhofer-Gesellschaft | Freiburg, DE, 79104
Research Scientist, Speech Real-Time Dialog
@ Google | Mountain View, CA, USA