SparseAdapter: An Easy Approach for Improving the Parameter-Efficiency of Adapters. (arXiv:2210.04284v3 [cs.CL] UPDATED)
Oct. 19, 2022, 1:17 a.m. | Shwai He, Liang Ding, Daize Dong, Miao Zhang, Dacheng Tao
cs.CL updates on arXiv.org (arxiv.org)
Adapter Tuning, which freezes the pretrained language model (PLM) and fine-tunes only a few extra modules, has become an appealing, efficient alternative to full model fine-tuning. Although computationally efficient, recent Adapters often increase their parameter count (e.g., by enlarging the bottleneck dimension) to match the performance of full model fine-tuning, which we argue goes against their original intention. In this work, we re-examine the parameter-efficiency of Adapters through the lens of network pruning (we name this plug-in concept SparseAdapter) and find that SparseAdapter …
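The abstract is truncated above, but the plug-in idea it describes can be sketched: keep the PLM frozen, insert a small bottleneck adapter, and prune the adapter's own weights so that only a fraction of them is actually trained. The sketch below is an illustration only; the module name, the `sparsity` argument, and the simple magnitude-based pruning criterion are assumptions, not the paper's exact recipe.

```python
import torch
import torch.nn as nn

class SparseBottleneckAdapter(nn.Module):
    """Illustrative bottleneck adapter with a fixed binary sparsity mask.

    Hypothetical sketch: weights are pruned once at initialization by
    magnitude; the real SparseAdapter may use different pruning criteria.
    """

    def __init__(self, hidden_size: int, bottleneck: int, sparsity: float = 0.5):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)
        self.act = nn.ReLU()
        # Prune the smallest-magnitude weights once and keep the mask fixed.
        for lin in (self.down, self.up):
            w = lin.weight.detach().abs()
            k = max(1, int(w.numel() * sparsity))
            threshold = w.flatten().kthvalue(k).values
            lin.register_buffer("mask", (w > threshold).float())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Multiplying by the mask zeroes both the pruned weights and their
        # gradients, so only the surviving adapter parameters are trained.
        h = nn.functional.linear(x, self.down.weight * self.down.mask, self.down.bias)
        h = self.act(h)
        h = nn.functional.linear(h, self.up.weight * self.up.mask, self.up.bias)
        return x + h  # residual connection around the adapter
```

In use, such a module would be inserted after a frozen Transformer sub-layer and only the adapter's (unmasked) parameters would receive gradient updates; applying the mask inside `forward` keeps the pruned entries at zero throughout training.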