Intelligent Learning Rate Distribution to reduce Catastrophic Forgetting in Transformers
April 3, 2024, 4:42 a.m. | Philip Kenneweg, Alexander Schulz, Sarah Schröder, Barbara Hammer
cs.LG updates on arXiv.org
Abstract: Pretraining language models on large text corpora is a common practice in natural language processing. Fine-tuning of these models is then performed to achieve the best results on a variety of tasks. In this paper, we investigate the problem of catastrophic forgetting in transformer neural networks and question the common practice of fine-tuning with a flat learning rate for the entire network in this context. We perform a hyperparameter optimization process to find learning rate …
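The core idea in the abstract, giving different parts of the network different learning rates instead of one flat rate, maps directly onto PyTorch parameter groups. The sketch below is illustrative only, not the paper's method: it uses a simple geometric layer-wise decay (a common heuristic) where the paper instead finds the distribution via hyperparameter optimization, and the model name, base rate, and decay factor are all assumed for the example.

```python
# Minimal sketch of per-layer learning rates via PyTorch parameter groups.
# Assumptions: bert-base-uncased, base_lr, and decay are illustrative choices;
# the paper derives its learning rate distribution from hyperparameter search.
import torch
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

base_lr = 2e-5   # assumed peak rate for the topmost layer
decay = 0.9      # assumed per-layer multiplicative decay toward the embeddings

num_layers = model.config.num_hidden_layers  # 12 for bert-base

param_groups = []

# Embeddings get the smallest rate: they sit deepest and hold the most
# general pretrained knowledge, so they move least.
param_groups.append({
    "params": model.bert.embeddings.parameters(),
    "lr": base_lr * decay ** num_layers,
})

# Each encoder layer gets its own rate, growing toward the output.
for i, layer in enumerate(model.bert.encoder.layer):
    param_groups.append({
        "params": layer.parameters(),
        "lr": base_lr * decay ** (num_layers - 1 - i),
    })

# The pooler and the freshly initialized classifier head train at the full rate.
param_groups.append({"params": model.bert.pooler.parameters(), "lr": base_lr})
param_groups.append({"params": model.classifier.parameters(), "lr": base_lr})

optimizer = torch.optim.AdamW(param_groups, weight_decay=0.01)
```

Layers closest to the pretrained embeddings take the smallest steps, which is one way to limit how far fine-tuning drifts from the pretrained weights and thereby curb catastrophic forgetting.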