Self-Training with Purpose Preserving Augmentation Improves Few-shot Generative Dialogue State Tracking. (arXiv:2211.09379v1 [cs.CL])
Nov. 18, 2022, 2:15 a.m. | Jihyun Lee, Chaebin Lee, Yunsu Kim, Gary Geunbae Lee
cs.CL updates on arXiv.org
In dialogue state tracking (DST), labeling the dataset involves considerable
human labor. We propose a new self-training framework for few-shot generative
DST that utilizes unlabeled data. Our self-training method iteratively improves
the model through pseudo labeling and employs Purpose Preserving Augmentation
(PPAug) to prevent overfitting. We increase performance in the 10% few-shot
setting by approximately 4% on MultiWOZ 2.1 and improve slot recall on unseen
values by 8.34% compared to the baseline.
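
For illustration, the following is a minimal Python sketch of the iterative pseudo-labeling loop the abstract describes, assuming a generative DST model that exposes fit/predict methods. All names here (self_train, ppaug, the Dialogue/State aliases) are hypothetical, and the details of the paper's actual PPAug augmentation and any pseudo-label filtering are not given in this abstract.

```python
from typing import Callable, List, Tuple

Dialogue = str  # a serialized dialogue context
State = str     # a serialized dialogue state (slot-value pairs)


def self_train(
    model,                                   # any model with .fit / .predict
    labeled: List[Tuple[Dialogue, State]],   # the few-shot labeled split
    unlabeled: List[Dialogue],               # unlabeled dialogues
    ppaug: Callable[[Dialogue], Dialogue],   # Purpose Preserving Augmentation
    rounds: int = 3,
):
    """Iteratively pseudo-label unlabeled data and retrain the model."""
    model.fit(labeled)  # seed the model on the few-shot labeled data
    for _ in range(rounds):
        # Pseudo-label the unlabeled dialogues with the current model.
        pseudo = [(x, model.predict(x)) for x in unlabeled]
        # Augment inputs while preserving their annotation ("purpose"),
        # which the abstract credits with preventing overfitting.
        augmented = [(ppaug(x), y) for x, y in pseudo]
        # Retrain on the union of gold and pseudo-labeled data.
        model.fit(labeled + pseudo + augmented)
    return model
```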