Hidden State Variability of Pretrained Language Models Can Guide Computation Reduction for Transfer Learning. (arXiv:2210.10041v2 [cs.CL] UPDATED)
Oct. 20, 2022, 1:13 a.m. | Shuo Xie, Jiahao Qiu, Ankita Pasad, Li Du, Qing Qu, Hongyuan Mei
cs.LG updates on arXiv.org
When transferring a pretrained language model, common approaches attach a task-specific classifier to the top layer and adapt all the pretrained layers. We investigate whether one could make a task-specific selection of which subset of layers to adapt and where to place the classifier. The goal is to reduce the computation cost of transfer learning methods (e.g. fine-tuning or adapter-tuning) without sacrificing performance.
We propose to select layers based on the variability of their hidden states
given …
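As a rough illustration of the idea (not the authors' released code), the sketch below estimates per-layer hidden-state variability for a pretrained encoder on a small sample of task data and uses it to shortlist layers to adapt. The checkpoint name, the mean pooling over tokens, the variance measure, and the "adapt the most variable layers" heuristic are all illustrative assumptions, since the abstract does not spell out the exact metric.

```python
# Minimal sketch (assumed setup, not the paper's implementation): measure how
# much each layer's hidden states vary across task examples, then pick a
# subset of layers to adapt and a layer on which to place the classifier.
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "bert-base-uncased"  # placeholder checkpoint, an assumption
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name, output_hidden_states=True)
model.eval()

# A handful of task sentences standing in for the target dataset.
texts = ["a small sample of task sentences", "drawn from the target dataset"]

with torch.no_grad():
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    outputs = model(**batch)

# outputs.hidden_states: tuple of (num_layers + 1) tensors of shape
# [batch, seq_len, hidden_dim], one per layer plus the embedding output.
variability = []
for layer_idx, h in enumerate(outputs.hidden_states):
    pooled = h.mean(dim=1)  # naive mean pooling over tokens -> [batch, dim]
    # Variance across examples, averaged over hidden dimensions.
    var = pooled.var(dim=0, unbiased=False).mean().item()
    variability.append((layer_idx, var))

# Illustrative heuristic: adapt only the most variable layers (freeze the rest)
# and attach the classifier to the highest-ranked one to cut adaptation cost.
top_layers = sorted(variability, key=lambda t: t[1], reverse=True)[:4]
print("Candidate layers to adapt:", [i for i, _ in top_layers])
```

In practice one would compute this statistic over a larger sample, mask out padding tokens before pooling, and then fine-tune or adapter-tune only the selected layers while keeping the remaining layers frozen.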
Tags: arxiv, computation, language, language models, state, transfer, transfer learning