Oct. 19, 2022, 1:17 a.m. | Shuo Xie, Jiahao Qiu, Ankita Pasad, Li Du, Qing Qu, Hongyuan Mei

cs.CL updates on arXiv.org

When transferring a pretrained language model, common approaches conventionally attach the task-specific classifier to the top layer and adapt all of the pretrained layers. We investigate whether one could instead make a task-specific selection of which subset of layers to adapt and where to place the classifier. The goal is to reduce the computation cost of transfer learning methods (e.g., fine-tuning or adapter-tuning) without sacrificing their performance.
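
To make the setup concrete, here is a minimal sketch (not the authors' code) of placing a classifier on an intermediate layer and fine-tuning only a chosen subset of layers. It assumes a BERT-style encoder from Hugging Face transformers; the layer indices and the adapted subset below are illustrative, not the paper's selection.

```python
# Sketch: classifier on an intermediate layer, gradients only for a layer subset.
import torch
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained("bert-base-uncased")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

classifier_layer = 8          # hypothetical: read hidden states from layer 8
layers_to_adapt = {6, 7, 8}   # hypothetical: only these layers get gradients
num_labels = 2

# Freeze everything, then unfreeze only the selected encoder layers.
for p in model.parameters():
    p.requires_grad = False
for i in layers_to_adapt:
    for p in model.encoder.layer[i].parameters():
        p.requires_grad = True

classifier = torch.nn.Linear(model.config.hidden_size, num_labels)

def forward(texts):
    batch = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
    out = model(**batch, output_hidden_states=True)
    # hidden_states[0] is the embedding output, so layer k is at index k.
    h = out.hidden_states[classifier_layer][:, 0]  # [CLS] vector at the chosen layer
    return classifier(h)
```

Layers above the classifier never need to run, which is where the computation savings come from.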


We propose to select layers based on the variability of their hidden states
given …
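
Below is a minimal sketch of scoring layers by the variability of their hidden states over a sample of task inputs. The excerpt above is truncated, so the exact criterion (what the variability is measured with respect to, and whether high or low variability is preferred) is an assumption here; the sketch simply ranks layers by the variance of their [CLS] representations across sampled inputs.

```python
# Sketch: rank encoder layers by hidden-state variability over task inputs.
import torch
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained("bert-base-uncased").eval()
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

sample_texts = ["an example task input", "another task input", "one more example"]

with torch.no_grad():
    batch = tokenizer(sample_texts, return_tensors="pt", padding=True, truncation=True)
    hidden_states = model(**batch, output_hidden_states=True).hidden_states

# Score each transformer layer by the variance of its [CLS] vectors
# across the sampled inputs (index 0 is the embedding output, so skip it).
scores = []
for layer_idx, h in enumerate(hidden_states[1:], start=1):
    cls_vectors = h[:, 0]  # shape: (num_samples, hidden_size)
    scores.append((layer_idx, cls_vectors.var(dim=0).mean().item()))

for layer_idx, score in sorted(scores, key=lambda x: x[1], reverse=True):
    print(f"layer {layer_idx}: variability score {score:.4f}")
```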

Tags: arxiv, computation language, language models, state, transfer, transfer learning
