Oct. 19, 2022, 1:12 a.m. | Shuo Xie, Jiahao Qiu, Ankita Pasad, Li Du, Qing Qu, Hongyuan Mei

cs.LG updates on arXiv.org

When transferring a pretrained language model, common approaches attach a task-specific classifier to the top layer and adapt all of the pretrained layers. We investigate whether one can instead make a task-specific selection of which subset of layers to adapt and where to place the classifier. The goal is to reduce the computation cost of transfer learning methods (e.g., fine-tuning or adapter-tuning) without sacrificing performance.
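
The abstract does not include the paper's implementation, but the idea of adapting only a subset of layers and placing the classifier below the top layer can be sketched as follows. This is a minimal illustration assuming a BERT-style backbone from the Hugging Face transformers library; the layer indices, the set of tuned layers, and the classifier placement are arbitrary placeholders, not values from the paper.

import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

model_name = "bert-base-uncased"              # assumed backbone, not specified by the abstract
tokenizer = AutoTokenizer.from_pretrained(model_name)
backbone = AutoModel.from_pretrained(model_name, output_hidden_states=True)

classifier_layer = 8                          # hypothetical: place the classifier after layer 8
layers_to_adapt = {6, 7, 8}                   # hypothetical: only these layers are adapted

# Freeze every pretrained parameter, then unfreeze the selected layers
# (the .encoder.layer path is specific to BERT-style models).
for p in backbone.parameters():
    p.requires_grad = False
for i in layers_to_adapt:
    for p in backbone.encoder.layer[i].parameters():
        p.requires_grad = True

num_labels = 2
classifier = nn.Linear(backbone.config.hidden_size, num_labels)

def forward(texts):
    enc = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
    out = backbone(**enc)
    # hidden_states[0] is the embedding output, so layer k is hidden_states[k].
    # A real implementation would stop the forward pass at classifier_layer
    # (layers above it are unused), which is where the compute savings come from.
    h = out.hidden_states[classifier_layer][:, 0]   # [CLS] state at the chosen layer
    return classifier(h)

logits = forward(["an example sentence"])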


We propose to select layers based on the variability of their hidden states
given …
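
Since the abstract is truncated, the exact variability measure is not shown above. The sketch below assumes variability is measured as the variance of each layer's [CLS] hidden state over a small sample of task inputs, purely as an illustrative stand-in for the paper's criterion; the backbone and sample texts are placeholders.

import torch
from transformers import AutoModel, AutoTokenizer

model_name = "bert-base-uncased"                    # assumed backbone
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name, output_hidden_states=True).eval()

sample_texts = ["example input one", "another task sentence", "a third example"]

with torch.no_grad():
    enc = tokenizer(sample_texts, return_tensors="pt", padding=True, truncation=True)
    hidden_states = model(**enc).hidden_states      # tuple: (embeddings, layer 1, ..., layer N)

# Variance of each layer's [CLS] vector across the sample, averaged over hidden dimensions.
variability = [hs[:, 0].var(dim=0).mean().item() for hs in hidden_states[1:]]

# Rank layers by this score; the paper's actual selection rule (and whether it
# favours high or low variability) is not reproduced here.
ranked_layers = sorted(range(len(variability)), key=lambda i: variability[i], reverse=True)
print(ranked_layers)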

arxiv, computation language, language models, state, transfer, transfer learning
