Oct. 13, 2022, 1:18 a.m. | Wanyu Du, Hanjie Chen, Yangfeng Ji

cs.CL updates on arXiv.org

In task-oriented dialogue systems, response generation from meaning
representations (MRs) often suffers from limited training examples due to the
high cost of annotating MR-to-Text pairs. Previous work on self-training
leverages fine-tuned conversational models to automatically generate
pseudo-labeled MR-to-Text pairs for further fine-tuning. However, some
self-augmented data may be noisy or uninformative for the model to learn from.
In this work, we propose a two-phase self-augmentation procedure to generate
high-quality pseudo-labeled MR-to-Text pairs: the first phase selects the most
informative MRs …
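
The abstract is truncated before it describes the second phase, but the overall pipeline it names (select informative MRs, then pseudo-label them for further fine-tuning) can be sketched. The code below is a minimal illustration under stated assumptions, not the authors' implementation: the `t5-small` checkpoint, the token-entropy uncertainty score, and the names `prediction_uncertainty` and `two_phase_self_augmentation` are all hypothetical choices made for this sketch.

```python
# Hypothetical sketch of a two-phase self-augmentation loop for MR-to-Text
# generation. Names, model choice, and the uncertainty measure are
# illustrative assumptions, not the paper's actual method.

import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")


def prediction_uncertainty(mr: str) -> float:
    """Score an unlabeled MR by the model's mean token-level entropy
    (one plausible notion of 'informative'; the paper's criterion may differ)."""
    inputs = tokenizer(mr, return_tensors="pt")
    with torch.no_grad():
        out = model.generate(
            **inputs,
            max_new_tokens=64,
            output_scores=True,
            return_dict_in_generate=True,
        )
    entropies = []
    for step_scores in out.scores:  # one logit tensor per generated token
        probs = torch.softmax(step_scores[0], dim=-1)
        entropies.append(-(probs * probs.clamp_min(1e-12).log()).sum().item())
    return sum(entropies) / max(len(entropies), 1)


def two_phase_self_augmentation(unlabeled_mrs, top_k=100):
    # Phase 1: keep the MRs the model is least certain about.
    ranked = sorted(unlabeled_mrs, key=prediction_uncertainty, reverse=True)
    selected = ranked[:top_k]

    # Phase 2: pseudo-label the selected MRs with the current model.
    pseudo_pairs = []
    for mr in selected:
        inputs = tokenizer(mr, return_tensors="pt")
        ids = model.generate(**inputs, max_new_tokens=64, num_beams=4)
        text = tokenizer.decode(ids[0], skip_special_tokens=True)
        pseudo_pairs.append((mr, text))
    return pseudo_pairs
```

In a full self-training loop, the returned pseudo-labeled pairs would be mixed with the gold MR-to-Text data for another round of fine-tuning, which is the step the noisy or uninformative examples mentioned above would otherwise contaminate.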

arxiv augmentation self-training training
