Web: http://arxiv.org/abs/2206.07382

June 16, 2022, 1:12 a.m. | Shengding Hu, Zhen Zhang, Ning Ding, Yadao Wang, Yasheng Wang, Zhiyuan Liu, Maosong Sun

cs.CL updates on arXiv.org

Adapting large pre-trained models (PTMs) through fine-tuning imposes
prohibitive computational and storage burdens. Recent studies of
parameter-efficient tuning (PET) find that optimizing only a small portion of
parameters conditioned on PTMs can yield performance on par with
conventional fine-tuning. Generally, PET methods carefully design
parameter-efficient modules (PET modules) that can be applied to arbitrary
fine-grained positions inside PTMs. However, the effectiveness of these
fine-grained positions largely relies on sophisticated manual design,
which usually produces sub-optimal results. In contrast to the …
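The abstract is truncated above, so the paper's own method is not reproduced here. As a rough illustration of the general PET setup it describes, below is a minimal sketch (in PyTorch, with hypothetical names such as LowRankPETModule and a stand-in 768-dimensional projection) of a low-rank module attached to one manually chosen position while the pre-trained weights stay frozen; which positions receive such modules is exactly the manual design choice the abstract calls sub-optimal.

```python
import torch
import torch.nn as nn

class LowRankPETModule(nn.Module):
    """Illustrative low-rank PET module wrapped around a frozen linear layer.

    Only the small matrices A and B are trained; the pre-trained weight stays frozen.
    """
    def __init__(self, base_linear: nn.Linear, rank: int = 8, scale: float = 1.0):
        super().__init__()
        self.base = base_linear
        for p in self.base.parameters():   # freeze the PTM parameters
            p.requires_grad = False
        # Trainable low-rank factors; B starts at zero so the initial update is a no-op.
        self.A = nn.Parameter(torch.randn(rank, base_linear.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base_linear.out_features, rank))
        self.scale = scale

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen PTM path plus a trainable low-rank update: W x + scale * B A x
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)


# Attach the module at one hand-picked position (here: a stand-in projection layer).
proj = nn.Linear(768, 768)
adapted = LowRankPETModule(proj, rank=8)

trainable = sum(p.numel() for p in adapted.parameters() if p.requires_grad)
total = sum(p.numel() for p in adapted.parameters())
print(f"trainable params: {trainable} / {total}")
```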

