April 5, 2024, 4:47 a.m. | Chen Cecilia Liu, Jonas Pfeiffer, Ivan Vulić, Iryna Gurevych

cs.CL updates on arXiv.org

arXiv:2301.05487v2 Announce Type: replace
Abstract: Standard fine-tuning of language models typically performs well on in-distribution data but struggles to generalize under distribution shift. In this work, we aim to improve the generalization of adapter-based cross-lingual task transfer, a setting where such cross-language distribution shifts are imminent. We investigate scheduled unfreezing algorithms, originally proposed to mitigate catastrophic forgetting in transfer learning, for fine-tuning task adapters. Our experiments show that scheduled unfreezing methods close the gap to full fine-tuning and achieve stronger …
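The abstract describes scheduled unfreezing only at a high level. As a point of reference, here is a minimal PyTorch sketch of the general gradual-unfreezing idea: start with most parameters frozen and progressively make layers trainable on a fixed schedule. The toy model, layer count, and schedule below are hypothetical illustrations, not the paper's exact algorithm.

# Minimal sketch of scheduled (gradual) unfreezing during fine-tuning.
# Illustrates the general technique, not the paper's specific method;
# TinyAdapterStack and the step schedule are hypothetical placeholders.
import torch
import torch.nn as nn

class TinyAdapterStack(nn.Module):
    """Stand-in for a model with per-layer (adapter-like) parameters."""
    def __init__(self, num_layers: int = 4, dim: int = 16):
        super().__init__()
        self.layers = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_layers))
        self.head = nn.Linear(dim, 2)

    def forward(self, x):
        for layer in self.layers:
            x = torch.relu(layer(x))
        return self.head(x)

def set_trainable(module: nn.Module, trainable: bool) -> None:
    for p in module.parameters():
        p.requires_grad = trainable

model = TinyAdapterStack()
set_trainable(model, False)       # start with everything frozen
set_trainable(model.head, True)   # except the task head

unfreeze_every = 100              # hypothetical schedule: one layer per 100 steps
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

for step in range(500):
    # On schedule, unfreeze the next-highest still-frozen layer
    # (top-down, as in gradual unfreezing).
    if step % unfreeze_every == 0:
        idx = len(model.layers) - 1 - step // unfreeze_every
        if idx >= 0:
            set_trainable(model.layers[idx], True)

    # Dummy batch; frozen parameters receive no gradients and are skipped.
    x = torch.randn(8, 16)
    y = torch.randint(0, 2, (8,))
    loss = nn.functional.cross_entropy(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

Unfreezing top-down lets the task head and upper layers adapt first while lower layers stay stable, which is the usual motivation for such schedules in transfer learning.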

