April 2, 2024, 7:51 p.m. | Zhenhua Liu, Tong Zhu, Jianxiang Xiang, Wenliang Chen

cs.CL updates on arXiv.org

arXiv:2404.00361v1 Announce Type: new
Abstract: Data augmentation (DA) is crucial for mitigating model training instability and overfitting in low-resource open-domain dialogue generation. However, traditional DA methods often neglect semantic data diversity, limiting the overall quality of the augmented data. Recently, large language models (LLMs) have been used for DA to generate diversified dialogues. However, they offer limited controllability and tend to generate dialogues whose distribution shifts away from that of the seed dialogues. To maximize the augmentation diversity and address the controllability problem, we …
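As a rough illustration of the LLM-based DA setting the abstract describes, the minimal Python sketch below generates diversified dialogues from a seed dialogue via prompting. The prompt template and the llm_generate stub are assumptions for illustration only; the paper's own method is cut off in the truncated abstract and is not reproduced here.

"""Minimal sketch of LLM-based dialogue data augmentation: rewriting
seed dialogues to increase semantic diversity. The llm_generate stub
and the prompt template are hypothetical, not the paper's method."""


def llm_generate(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM call (an API or local model).
    # Returns a canned paraphrase so the sketch runs with no dependencies.
    # Note: without constraints tying outputs to the seed distribution,
    # generations can drift -- the controllability problem the abstract names.
    return ("A: I've been meaning to try that new cafe downtown.\n"
            "B: Same here, let's go this weekend!")


def augment_dialogue(seed_dialogue: str, n_samples: int = 3) -> list[str]:
    """Ask the LLM to rewrite a seed dialogue while preserving its topic
    and intent -- one generic strategy for diversifying dialogue data."""
    augmented = []
    for _ in range(n_samples):
        prompt = (
            "Rewrite the following two-person dialogue with different "
            "wording and details, keeping the same topic and intent:\n\n"
            f"{seed_dialogue}\n\nRewritten dialogue:"
        )
        augmented.append(llm_generate(prompt))
    return augmented


if __name__ == "__main__":
    seed = ("A: Have you tried the new cafe downtown?\n"
            "B: Not yet, is it any good?")
    for i, dialogue in enumerate(augment_dialogue(seed), 1):
        print(f"--- augmented sample {i} ---\n{dialogue}\n")

In practice the stub would be replaced by a real model call, and the seed dialogue would be sampled from the low-resource training set; the rewrite-style prompt is just one of several plausible augmentation strategies.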

