To Adapt or to Fine-tune: A Case Study on Abstractive Summarization. (arXiv:2208.14559v1 [cs.CL])
Sept. 1, 2022, 1:13 a.m. | Zheng Zhao, Pinzhen Chen
cs.CL updates on arXiv.org
Recent advances in abstractive summarization leverage pre-trained language
models rather than training a model from scratch. However, such models are
slow to train and carry substantial computational overhead. Researchers have
proposed lightweight alternatives, such as small adapters, to mitigate these
drawbacks. Nonetheless, it remains uncertain whether adapters benefit
summarization in terms of improved efficiency without an undue sacrifice in
performance. In this work, we carry out multifaceted investigations on
fine-tuning …
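For context, the adapters referenced here are small bottleneck modules inserted into a frozen pre-trained model so that only the adapter weights are updated during training. Below is a minimal PyTorch sketch in the style of Houlsby et al. (2019); the module name, bottleneck size, and placement are illustrative assumptions, not the configuration studied in this paper.

    import torch
    import torch.nn as nn

    class Adapter(nn.Module):
        # Bottleneck adapter in the style of Houlsby et al. (2019): a small
        # down-projection, non-linearity, and up-projection with a residual
        # connection. Inserted into a frozen pre-trained model, only these
        # weights are trained.
        def __init__(self, hidden_size: int, bottleneck_size: int = 64):
            super().__init__()
            self.down = nn.Linear(hidden_size, bottleneck_size)
            self.act = nn.ReLU()
            self.up = nn.Linear(bottleneck_size, hidden_size)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # The residual connection preserves the pre-trained
            # representation; the adapter learns only a small additive
            # update on top of it.
            return x + self.up(self.act(self.down(x)))

    # Hypothetical usage: freeze a pre-trained summarizer (e.g. BART) and
    # train only the adapter parameters.
    # for p in pretrained_model.parameters():
    #     p.requires_grad = False

Because the frozen backbone contributes no gradients, only the two small linear layers per adapter are trained, which is the efficiency argument the paper weighs against full fine-tuning.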