Nov. 5, 2023, 6:47 a.m. | Chi Seng Cheang, Hou Pong Chan, Derek F. Wong, Xuebo Liu, Zhaocong Li, Yanming Sun, Shudong Liu, Lidia S. Chao

cs.CL updates on arXiv.org

Recent pre-trained language models (PLMs) achieve promising results on
existing abstractive summarization datasets. However, existing summarization
benchmarks overlap in time with the standard pre-training corpora and
fine-tuning datasets. Hence, the strong performance of PLMs may rely on
parametric knowledge memorized during pre-training and fine-tuning.
Moreover, the knowledge memorized by PLMs may quickly become outdated, which
affects their generalization performance on future data. In this work, we
propose TempoSum, a novel benchmark that contains data samples …
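As a rough illustration of the idea behind a time-partitioned benchmark, the sketch below splits summarization examples by publication date relative to an assumed pre-training cutoff, so a model can be evaluated separately on "seen-era" and "future" data. The record fields, the cutoff date, and the toy records are hypothetical and are not the actual TempoSum schema.

```python
from collections import defaultdict
from datetime import date

# Toy records: each article carries a publication date, source text, and
# reference summary. These fields and values are illustrative only.
articles = [
    {"date": date(2012, 5, 1), "text": "...", "summary": "..."},
    {"date": date(2016, 3, 9), "text": "...", "summary": "..."},
    {"date": date(2021, 11, 20), "text": "...", "summary": "..."},
]

# Assumed end date of the model's pre-training corpus (hypothetical).
PRETRAIN_CUTOFF = date(2019, 1, 1)

def split_by_time(records, cutoff):
    """Partition records into a seen-era set and a post-cutoff (future) set."""
    buckets = defaultdict(list)
    for record in records:
        key = "past" if record["date"] < cutoff else "future"
        buckets[key].append(record)
    return buckets["past"], buckets["future"]

past_set, future_set = split_by_time(articles, PRETRAIN_CUTOFF)
print(f"seen-era examples: {len(past_set)}, future examples: {len(future_set)}")
```

Evaluating the same summarization model on both partitions would expose any gap attributable to memorized, time-bound parametric knowledge, which is the kind of temporal generalization the abstract describes.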

