Self-Training with Purpose Preserving Augmentation Improves Few-shot Generative Dialogue State Tracking. (arXiv:2211.09379v1 [cs.CL])
Nov. 18, 2022, 2:15 a.m. | Jihyun Lee, Chaebin Lee, Yunsu Kim, Gary Geunbae Lee
cs.CL updates on arXiv.org arxiv.org
In dialogue state tracking (DST), labeling the dataset involves considerable
human labor. We propose a new self-training framework for few-shot generative
DST that utilizes unlabeled data. Our self-training method iteratively improves
the model via pseudo-labeling and employs Purpose Preserving Augmentation
(PPAug) to prevent overfitting. We increase few-shot (10%) performance by
approximately 4% on MultiWOZ 2.1 and improve slot recall on unseen values by
8.34% compared to the baseline.
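The iterative pseudo-labeling loop the abstract describes can be sketched in a few lines. This is a minimal, hedged illustration: the toy 1-D threshold classifier, confidence score, and sign-preserving augmentation below are stand-ins of my own, not the paper's DST model or its actual PPAug procedure.

```python
# Toy sketch of self-training with pseudo-labeling and a
# label-preserving augmentation, in the spirit of the abstract.

def train(data):
    """Fit a 1-D threshold classifier: boundary = midpoint of class means."""
    xs0 = [x for x, y in data if y == 0]
    xs1 = [x for x, y in data if y == 1]
    return (sum(xs0) / len(xs0) + sum(xs1) / len(xs1)) / 2

def predict(boundary, x):
    """Return (label, confidence); confidence grows with distance from boundary."""
    label = 1 if x >= boundary else 0
    confidence = min(1.0, abs(x - boundary) / 2)
    return label, confidence

def augment(x):
    """Stand-in for purpose-preserving augmentation: scaling keeps the sign,
    so the (pseudo-)label is preserved."""
    return x * 1.1

def self_train(labeled, unlabeled, rounds=3, threshold=0.9):
    """Iteratively retrain on confident pseudo-labels plus augmented copies."""
    boundary = train(labeled)
    for _ in range(rounds):
        pseudo = []
        for x in unlabeled:
            label, conf = predict(boundary, x)
            if conf >= threshold:                  # keep only confident pseudo-labels
                pseudo.append((x, label))
                pseudo.append((augment(x), label)) # augmented copy, same label
        boundary = train(list(labeled) + pseudo)
    return boundary

boundary = self_train([(-5, 0), (5, 1)], [-3, -1, 2, 4])
```

The confidence threshold drops uncertain pseudo-labels (here, `x = -1` is never added), which is the standard guard against the model reinforcing its own mistakes during self-training.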