BootTOD: Bootstrap Task-oriented Dialogue Representations by Aligning Diverse Responses
March 5, 2024, 2:51 p.m. | Weihao Zeng, Keqing He, Yejie Wang, Dayuan Fu, Weiran Xu
cs.CL updates on arXiv.org
Abstract: Pre-trained language models have been successful in many scenarios. However, their usefulness in task-oriented dialogues is limited due to the intrinsic linguistic differences between general text and task-oriented dialogues. Current task-oriented dialogue pre-training methods rely on a contrastive framework, which faces challenges such as selecting true positives and hard negatives, as well as lacking diversity. In this paper, we propose a novel dialogue pre-training model called BootTOD. It learns task-oriented dialogue representations via a self-bootstrapping …
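The abstract contrasts the usual contrastive pre-training (which needs positives and hard negatives) with BootTOD's self-bootstrapping alignment of diverse responses. The snippet below is only a minimal sketch of what a negative-free, self-bootstrapping objective can look like, assuming a BYOL-style setup with an online encoder, an EMA target encoder, and a predictor head; the class name, dimensions, and encoder placeholder are illustrative assumptions, not the authors' exact architecture.

```python
# Hedged sketch: BYOL-style self-bootstrapping alignment for dialogue
# representations (assumed setup, not BootTOD's actual implementation).
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F


class BootstrapAligner(nn.Module):
    def __init__(self, encoder: nn.Module, dim: int = 768, ema_decay: float = 0.99):
        super().__init__()
        self.online_encoder = encoder                    # trained by gradients
        self.target_encoder = copy.deepcopy(encoder)     # updated only by EMA
        for p in self.target_encoder.parameters():
            p.requires_grad = False
        # Predictor head on the online branch breaks symmetry and avoids collapse.
        self.predictor = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.ema_decay = ema_decay

    def loss(self, context_feats: torch.Tensor, response_feats: torch.Tensor) -> torch.Tensor:
        # Online branch encodes the dialogue context; target branch encodes a response.
        online = self.predictor(self.online_encoder(context_feats))
        with torch.no_grad():
            target = self.target_encoder(response_feats)
        # Negative cosine similarity: pull the context representation toward the
        # stop-gradient response representation -- no negative pairs are required.
        return 1 - F.cosine_similarity(online, target, dim=-1).mean()

    @torch.no_grad()
    def update_target(self):
        # Exponential moving average of online weights into the target encoder.
        for p_t, p_o in zip(self.target_encoder.parameters(), self.online_encoder.parameters()):
            p_t.mul_(self.ema_decay).add_(p_o, alpha=1 - self.ema_decay)


# Toy usage with a placeholder MLP standing in for a pre-trained LM encoder.
enc = nn.Sequential(nn.Linear(768, 768), nn.ReLU(), nn.Linear(768, 768))
model = BootstrapAligner(enc)
ctx, resp = torch.randn(4, 768), torch.randn(4, 768)
model.loss(ctx, resp).backward()
model.update_target()
```

Because the target is a stop-gradient moving average rather than a batch of negatives, one context can be aligned with several diverse responses in turn, which is the kind of flexibility the abstract attributes to the self-bootstrapping framework.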