Aug. 10, 2022, 1:11 a.m. | Michael Heck, Nurul Lubis, Carel van Niekerk, Shutong Feng, Christian Geishauser, Hsien-Chin Lin, Milica Gašić

cs.CL updates on arXiv.org

Generalising dialogue state tracking (DST) to new data is especially
challenging due to the strong reliance on abundant and fine-grained supervision
during training. Sample sparsity, distributional shift and the occurrence of
new concepts and topics frequently lead to severe performance degradation
during inference. In this paper, we propose a training strategy for building
extractive DST models without fine-grained manual span labels. Two
novel input-level dropout methods mitigate the negative impact of sample
sparsity. We propose a new …
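The abstract does not detail the dropout methods, but the general idea of input-level dropout can be illustrated as follows: during training, randomly corrupt some input tokens (here, by replacing them with an unknown-token placeholder) so the model does not over-fit to specific surface forms in a sparse training sample. This is a minimal sketch under that assumption, not the paper's exact method; the function name and `[UNK]` placeholder are illustrative.

```python
import random

def token_dropout(tokens, p=0.1, unk_token="[UNK]", seed=None):
    """Randomly replace input tokens with a placeholder during training.

    Illustrative sketch of input-level dropout: each token is independently
    replaced with `unk_token` with probability `p`. Exposing the model to
    corrupted inputs is one generic way to reduce reliance on exact surface
    forms when training data is sparse.
    """
    rng = random.Random(seed)
    return [unk_token if rng.random() < p else t for t in tokens]

# Example: corrupt a user utterance before feeding it to a DST encoder.
utterance = "i need a cheap hotel in the north".split()
print(token_dropout(utterance, p=0.3, seed=0))
```

At inference time the dropout would be disabled (`p=0`), mirroring how standard dropout layers behave between training and evaluation.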

