Hidden State Variability of Pretrained Language Models Can Guide Computation Reduction for Transfer Learning. (arXiv:2210.10041v2 [cs.CL] UPDATED)
Oct. 20, 2022, 1:17 a.m. | Shuo Xie, Jiahao Qiu, Ankita Pasad, Li Du, Qing Qu, Hongyuan Mei
cs.CL updates on arXiv.org (arxiv.org)
When transferring a pretrained language model, common approaches conventionally attach a task-specific classifier to the top layer and adapt all of the pretrained layers. We investigate whether one could make a task-specific selection of which subset of layers to adapt and where to place the classifier. The goal is to reduce the computation cost of transfer learning methods (e.g., fine-tuning or adapter-tuning) without sacrificing their performance.
We propose to select layers based on the variability of their hidden states
given …
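The abstract is truncated here, so the paper's exact variability criterion is not shown. As a rough illustration of the idea, the minimal sketch below measures per-layer hidden-state variability on a few task examples using the Hugging Face transformers library and keeps the top-k most variable layers as adaptation candidates; the variance measure, the direction of selection, and k are all illustrative assumptions, not the authors' method.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Illustrative sketch: score each transformer layer by the variability of
# its hidden states on a small sample of task data, then pick a subset of
# layers to adapt. The specific criterion is an assumption for illustration.

model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name, output_hidden_states=True)
model.eval()

texts = ["a small sample of task data", "another example from the task"]
batch = tokenizer(texts, padding=True, return_tensors="pt")

with torch.no_grad():
    outputs = model(**batch)

# outputs.hidden_states is a tuple of (num_layers + 1) tensors,
# each of shape (batch, seq_len, hidden_dim).
variability = []
for layer_states in outputs.hidden_states[1:]:  # skip the embedding layer
    # Flatten tokens across the batch (padding included, for simplicity)
    # and take the mean per-dimension variance as a variability score.
    flat = layer_states.reshape(-1, layer_states.size(-1))
    variability.append(flat.var(dim=0).mean().item())

# Keep the k most variable layers as candidates for adaptation;
# k = 4 is an arbitrary illustrative choice.
k = 4
selected = sorted(range(len(variability)),
                  key=lambda i: variability[i], reverse=True)[:k]
print("Layers selected for adaptation:", selected)
```

In a real pipeline, one would then freeze the unselected layers (and possibly place the classifier below the top layer), which is where the computation savings the abstract describes would come from.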
Tags: arxiv, computation, language, language models, state, transfer, transfer learning