Oct. 12, 2022, 1:18 a.m. | Shwai He, Liang Ding, Daize Dong, Miao Zhang, Dacheng Tao

cs.CL updates on arXiv.org

Adapter Tuning, which freezes the pretrained language models (PLMs) and only
fine-tunes a few extra modules, has become an appealing, efficient alternative
to full-model fine-tuning. Although computationally efficient, recent Adapters
often increase their parameter budget (e.g., the bottleneck dimension) to match
the performance of full-model fine-tuning, which we argue goes against their
original intention. In this work, we re-examine the parameter efficiency of
Adapters through the lens of network pruning (we name this plug-in concept
\texttt{SparseAdapter}) and find that SparseAdapter …
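To make the plug-in idea concrete, here is a minimal PyTorch sketch (not the authors' released code) of a bottleneck adapter whose weights are pruned at initialization and kept masked during fine-tuning. The class name, the magnitude-based pruning criterion, and the default sparsity level are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch: a bottleneck Adapter combined with weight pruning.
# Assumptions for illustration: magnitude-based pruning at initialization,
# fixed binary masks, 80% sparsity by default.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SparseBottleneckAdapter(nn.Module):
    def __init__(self, hidden_dim: int, bottleneck_dim: int, sparsity: float = 0.8):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, hidden_dim)
        self.act = nn.GELU()
        # Prune a fraction `sparsity` of each adapter weight matrix once,
        # then keep the binary masks fixed while fine-tuning.
        self.register_buffer("down_mask", self._magnitude_mask(self.down.weight.data, sparsity))
        self.register_buffer("up_mask", self._magnitude_mask(self.up.weight.data, sparsity))

    @staticmethod
    def _magnitude_mask(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
        # Keep the (1 - sparsity) fraction of weights with the largest magnitude.
        flat = weight.abs().flatten()
        k = max(1, int(flat.numel() * (1.0 - sparsity)))
        threshold = flat.topk(k).values.min()
        return (weight.abs() >= threshold).float()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual adapter: x + up(act(down(x))), computed with masked weights.
        h = F.linear(x, self.down.weight * self.down_mask, self.down.bias)
        h = self.act(h)
        h = F.linear(h, self.up.weight * self.up_mask, self.up.bias)
        return x + h


# Usage: insert after a frozen PLM layer; only the adapter's surviving weights train.
adapter = SparseBottleneckAdapter(hidden_dim=768, bottleneck_dim=64, sparsity=0.8)
out = adapter(torch.randn(2, 16, 768))  # (batch, seq_len, hidden_dim)
```

The point of the sketch is that the bottleneck stays small while most of its weights are zeroed out, matching the abstract's argument that adapters should not need to grow in order to match full fine-tuning.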

