Web: https://www.reddit.com/r/LanguageTechnology/comments/ujjx5i/fine_tuned_pegasus_lost_creativity/

May 6, 2022, 9:55 a.m. | /u/Boglbert

Natural Language Processing reddit.com

Hi, I fine-tuned google/pegasus-xsum on a German dataset. The goal was to produce "simpler" summarisations than the base model, in terms of the words used in the summarisation.

Interestingly, the summarisations output so far are relatively on point, but they are extractive rather than abstractive.
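One way to quantify the "extractive, not abstractive" impression is to measure how many summary n-grams do not occur in the source text: a purely extractive summary copies spans verbatim, so its novel n-gram fraction is near zero, while an abstractive one paraphrases. A minimal sketch (the function names and the metric choice are illustrative, not from the original post):

```python
def ngrams(tokens, n):
    """Set of all n-grams (as tuples) in a token list."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def novel_ngram_fraction(source, summary, n=3):
    """Fraction of summary n-grams that never appear in the source.

    Near 0.0 -> extractive (copied spans); near 1.0 -> abstractive.
    Whitespace tokenisation only; a real check would normalise casing
    and punctuation first.
    """
    src = ngrams(source.split(), n)
    summ = ngrams(summary.split(), n)
    if not summ:
        return 0.0
    return len(summ - src) / len(summ)
```

Running this over the base model's outputs versus the fine-tuned model's outputs would show whether fine-tuning actually shifted the model toward copying.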

Is this an indication that the dataset was too broad (should it be more specific)? In general, one could see the dataset as a simplification of Wikipedia articles.

Has anyone had similar issues …

