Oct. 24, 2022, 1:17 a.m. | Shwai He, Liang Ding, Daize Dong, Miao Zhang, Dacheng Tao

cs.CL updates on arXiv.org

Adapter Tuning, which freezes the pretrained language models (PLMs) and fine-tunes only a few extra modules, has become an appealing, efficient alternative to full model fine-tuning. Although computationally efficient, recent Adapters often increase their parameter count (e.g., the bottleneck dimension) to match the performance of full model fine-tuning, which we argue goes against their original intention. In this work, we re-examine the parameter-efficiency of Adapters through the lens of network pruning (we name this plug-in concept \texttt{SparseAdapter}) and find that SparseAdapter …
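The abstract describes pruning the adapter's own weights so that only a fraction of the extra parameters are actually kept and trained. Below is a minimal sketch of that idea, assuming a plain bottleneck adapter and simple magnitude pruning at initialization; the names (BottleneckAdapter, magnitude_prune), the 80% sparsity, and the gradient-masking step are illustrative assumptions, not taken from the paper's code, and the paper itself studies several pruning criteria.

# Minimal sketch of the SparseAdapter idea: prune a bottleneck adapter's
# weights to a target sparsity so only the surviving entries are trained.
# Names and the pruning criterion here are illustrative assumptions.
import torch
import torch.nn as nn


class BottleneckAdapter(nn.Module):
    """Standard adapter: down-projection, nonlinearity, up-projection, residual."""

    def __init__(self, hidden_dim: int, bottleneck_dim: int):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, hidden_dim)
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))


def magnitude_prune(adapter: nn.Module, sparsity: float) -> dict:
    """Zero out the smallest-magnitude adapter weights and return binary masks."""
    masks = {}
    for name, param in adapter.named_parameters():
        if param.dim() < 2:  # skip biases
            continue
        k = int(sparsity * param.numel())
        # k-th smallest absolute value serves as the pruning threshold
        threshold = param.abs().flatten().kthvalue(k).values if k > 0 else param.new_tensor(0.0)
        mask = (param.abs() > threshold).float()
        param.data.mul_(mask)  # zero out pruned entries in place
        masks[name] = mask
    return masks


# Usage: prune 80% of the adapter weights at initialization, then keep the
# pruned entries at zero during training by masking their gradients.
adapter = BottleneckAdapter(hidden_dim=768, bottleneck_dim=64)
masks = magnitude_prune(adapter, sparsity=0.8)
for name, param in adapter.named_parameters():
    if name in masks:
        param.register_hook(lambda grad, m=masks[name]: grad * m)

In this sketch the frozen PLM is untouched; only the (now sparse) adapter weights receive gradient updates, which is what lets the sparsity ratio translate directly into fewer trained parameters.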

Tags: arxiv, easy, efficiency
