March 5, 2024, 2:51 p.m. | Weihao Zeng, Keqing He, Yejie Wang, Dayuan Fu, Weiran Xu

cs.CL updates on arXiv.org

arXiv:2403.01163v1 Announce Type: new
Abstract: Pre-trained language models have been successful in many scenarios. However, their usefulness in task-oriented dialogues is limited by the intrinsic linguistic differences between general text and task-oriented dialogues. Current task-oriented dialogue pre-training methods rely on a contrastive framework, which faces challenges such as selecting true positives and hard negatives and suffers from a lack of diversity. In this paper, we propose a novel dialogue pre-training model called BootTOD. It learns task-oriented dialogue representations via a self-bootstrapping …
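The abstract is truncated, but the key contrast it draws is between a contrastive objective (which must mine true positives and hard negatives) and a self-bootstrapping one (which regresses an online representation onto a target representation, with no negatives at all). Below is a minimal sketch of such a bootstrapping objective in PyTorch, assuming a BYOL-style setup with a stop-gradient target branch; the class and parameter names (BootstrapDialogueModel, online_encoder, predictor, the 768-dim input features) are hypothetical illustrations, not the paper's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BootstrapDialogueModel(nn.Module):
    """Hypothetical sketch of a self-bootstrapping objective: the
    dialogue-context representation is trained to predict a target
    representation (e.g. of the response) without any negatives."""

    def __init__(self, in_dim=768, hidden=256):
        super().__init__()
        # Stand-ins for pre-trained encoders (a BERT-style model in practice).
        self.online_encoder = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
        self.target_encoder = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
        # Predictor head: only the online branch tries to match the target,
        # which is what prevents the two branches from trivially collapsing.
        self.predictor = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, hidden))

    def loss(self, context_feats, response_feats):
        # Online branch: encode the dialogue context, then predict the target.
        pred = self.predictor(self.online_encoder(context_feats))
        # Target branch: encode the response under stop-gradient, so it acts
        # as a fixed regression goal (often updated by an EMA of the online
        # weights in BYOL-style training).
        with torch.no_grad():
            target = self.target_encoder(response_feats)
        # Negative cosine similarity: no positive/negative mining needed.
        return -F.cosine_similarity(pred, target, dim=-1).mean()

# Usage with dummy pooled embeddings:
model = BootstrapDialogueModel()
ctx = torch.randn(8, 768)   # pooled context embeddings (batch of 8)
rsp = torch.randn(8, 768)   # pooled response embeddings
print(model.loss(ctx, rsp))
```

The design point this sketch illustrates is exactly the one the abstract raises: because the objective is a regression onto bootstrapped targets rather than a contrast against other examples, the hard-negative selection problem disappears by construction.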

