Efficient Few-Shot Fine-Tuning for Opinion Summarization. (arXiv:2205.02170v1 [cs.CL])
Web: http://arxiv.org/abs/2205.02170
May 5, 2022, 1:12 a.m. | Arthur Bražinskas, Ramesh Nallapati, Mohit Bansal, Markus Dreyer
cs.LG updates on arXiv.org
Abstractive summarization models are typically pre-trained on large amounts
of generic texts, then fine-tuned on tens or hundreds of thousands of annotated
samples. However, in opinion summarization, large annotated datasets of reviews
paired with reference summaries are not available and would be expensive to
create. This calls for fine-tuning methods robust to overfitting on small
datasets. In addition, generically pre-trained models are often not accustomed
to the specifics of customer reviews and, after fine-tuning, yield summaries
with disfluencies and semantic …
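To make the overfitting concern concrete, here is a minimal sketch of few-shot fine-tuning a generically pre-trained summarizer on a handful of review-summary pairs, using the Hugging Face transformers library. The checkpoint name, the toy data, and the choice to freeze all but the last decoder layer are illustrative assumptions; this is one common way to regularize against overfitting on small datasets, not the fine-tuning method the paper proposes.

    # Sketch only: generic few-shot fine-tuning with most parameters frozen.
    # Model choice, data, and hyperparameters are assumptions for illustration.
    import torch
    from transformers import BartForConditionalGeneration, BartTokenizer

    model_name = "facebook/bart-base"  # assumed generic pre-trained checkpoint
    tokenizer = BartTokenizer.from_pretrained(model_name)
    model = BartForConditionalGeneration.from_pretrained(model_name)

    # Tiny annotated dataset: concatenated reviews paired with a reference summary.
    pairs = [
        ("Great battery life. Screen is sharp. A bit heavy to carry around.",
         "A well-reviewed laptop with strong battery and display, though heavy."),
        # ... a few dozen (reviews, summary) pairs at most in the few-shot setting
    ]

    # Freeze most parameters to reduce overfitting on the small dataset;
    # only the final decoder layer and the LM head remain trainable here.
    for param in model.parameters():
        param.requires_grad = False
    for param in model.model.decoder.layers[-1].parameters():
        param.requires_grad = True
    for param in model.lm_head.parameters():
        param.requires_grad = True

    optimizer = torch.optim.AdamW(
        (p for p in model.parameters() if p.requires_grad), lr=1e-5)

    model.train()
    for epoch in range(3):  # few epochs; in practice early stopping on a dev set
        for reviews, summary in pairs:
            inputs = tokenizer(reviews, return_tensors="pt",
                               truncation=True, max_length=512)
            labels = tokenizer(summary, return_tensors="pt",
                               truncation=True, max_length=128).input_ids
            loss = model(**inputs, labels=labels).loss
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()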