SparseAdapter: An Easy Approach for Improving the Parameter-Efficiency of Adapters. (arXiv:2210.04284v5 [cs.CL] UPDATED)
Nov. 11, 2022, 2:16 a.m. | Shwai He, Liang Ding, Daize Dong, Miao Zhang, Dacheng Tao
cs.CL updates on arXiv.org
Adapter Tuning, which freezes the pretrained language model (PLM) and fine-tunes only a few extra modules, has become an appealing, efficient alternative to full-model fine-tuning. Although computationally efficient, recent Adapters often increase their parameter count (e.g., by enlarging the bottleneck dimension) to match the performance of full-model fine-tuning, which we argue defeats their original purpose. In this work, we re-examine the parameter efficiency of Adapters through the lens of network pruning (we name this plug-in concept SparseAdapter) and find that SparseAdapter …
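The abstract cuts off mid-sentence, but the core idea it describes is pruning an adapter's weights so that a sparse adapter can remain competitive with a dense one while training fewer active parameters. Below is a minimal PyTorch sketch of that idea, assuming magnitude-based pruning applied at initialization and an illustrative 80% sparsity level; the excerpt does not specify SparseAdapter's actual pruning criterion, sparsity ratio, or activation function, so all of those choices here are assumptions.

# Minimal sketch, not the paper's implementation: a bottleneck Adapter whose
# linear weights are magnitude-pruned at initialization and kept sparse
# during fine-tuning via a gradient mask. Criterion and sparsity are assumed.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Standard bottleneck adapter: down-project, nonlinearity, up-project, residual."""
    def __init__(self, hidden_dim: int, bottleneck_dim: int):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, hidden_dim)
        self.act = nn.ReLU()  # activation choice is an assumption

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))

def sparsify_adapter(adapter: nn.Module, sparsity: float = 0.8) -> None:
    """Zero out the smallest-magnitude weights and freeze them at zero."""
    for module in adapter.modules():
        if isinstance(module, nn.Linear):
            w = module.weight.data
            k = int(w.numel() * sparsity)
            if k == 0:
                continue
            # k-th smallest magnitude serves as the pruning threshold.
            threshold = w.abs().flatten().kthvalue(k).values
            mask = (w.abs() > threshold).float()
            module.weight.data.mul_(mask)
            # Mask gradients so pruned entries stay zero during fine-tuning.
            module.weight.register_hook(lambda grad, m=mask: grad * m)

adapter = Adapter(hidden_dim=768, bottleneck_dim=64)
sparsify_adapter(adapter, sparsity=0.8)
out = adapter(torch.randn(2, 10, 768))  # (batch, seq_len, hidden)

Because the mask is fixed after initialization, the number of trainable nonzero adapter parameters shrinks by roughly the sparsity factor, which is the parameter-efficiency argument the abstract makes against simply enlarging the bottleneck dimension.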