Oct. 13, 2022, 1:18 a.m. | Tao Yang, Jinghao Deng, Xiaojun Quan, Qifan Wang, Shaoliang Nie

cs.CL updates on arXiv.org

Fine-tuning large pre-trained language models on downstream tasks is apt to
suffer from overfitting when limited training data is available. While dropout
proves to be an effective antidote by randomly dropping a proportion of units,
existing research has not examined its effect on the self-attention mechanism.
In this paper, we investigate this problem through self-attention attribution
and find that dropping attention positions with low attribution scores can
accelerate training and increase the risk of overfitting. Motivated by this
observation, we …
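The abstract only sketches the mechanism, so the PyTorch snippet below is a rough, hypothetical illustration (not the authors' actual method) of one way attention positions could be masked according to attribution scores before the softmax. The function name, the `drop_lowest` flag, and the toy attribution tensor are assumptions made for the sake of the example.

```python
import torch

def attribution_guided_attention_dropout(attn_logits, attribution,
                                          drop_rate=0.1, drop_lowest=True,
                                          training=True):
    """Drop a fraction of attention positions selected by attribution score.

    attn_logits: (batch, heads, q_len, k_len) pre-softmax attention scores.
    attribution: tensor of the same shape with per-position attribution scores
                 (e.g. from self-attention attribution); higher = more influential.
    drop_lowest: if True, mask the lowest-attribution positions -- the setting the
                 abstract reports to accelerate training but raise the risk of
                 overfitting; set False to mask high-attribution positions instead.
    """
    if not training or drop_rate <= 0.0:
        return torch.softmax(attn_logits, dim=-1)

    k = max(1, int(drop_rate * attn_logits.size(-1)))
    # Select the k positions per query with the lowest (or highest) attribution.
    ranking = attribution if drop_lowest else -attribution
    _, drop_idx = ranking.topk(k, dim=-1, largest=False)
    mask = torch.zeros_like(attn_logits, dtype=torch.bool)
    mask.scatter_(-1, drop_idx, True)
    # Masked positions get -inf so softmax assigns them zero attention weight.
    return torch.softmax(attn_logits.masked_fill(mask, float("-inf")), dim=-1)

# Toy usage: random logits and attribution scores for a single attention head.
logits = torch.randn(2, 1, 4, 4)
attr = torch.rand(2, 1, 4, 4)
weights = attribution_guided_attention_dropout(logits, attr, drop_rate=0.25)
print(weights.sum(dim=-1))  # remaining positions still sum to 1 per query
```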

Tags: arxiv, attribution, dropout, fine-tuning, language model, model fine-tuning
