Dial-MAE: ConTextual Masked Auto-Encoder for Retrieval-based Dialogue Systems
March 26, 2024, 4:51 a.m. | Zhenpeng Su, Xing Wu, Wei Zhou, Guangyuan Ma, Songlin Hu
cs.CL updates on arXiv.org
Abstract: Dialogue response selection aims to select an appropriate response from several candidates based on a given user and system utterance history. Most existing works focus primarily on post-training and fine-tuning tailored for cross-encoders. However, there are no post-training methods tailored for dense encoders in dialogue response selection. We argue that when a current language model (such as BERT) is employed as a dense encoder in dialogue systems, it separately encodes dialogue context and …
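The distinction the abstract draws can be made concrete: a dense (bi-)encoder embeds the dialogue context and each candidate response independently and scores pairs by vector similarity, whereas a cross-encoder reads the concatenated pair jointly. The sketch below illustrates only that scoring interface, using a toy hashed bag-of-words embedding as a stand-in for a BERT-style encoder; it is not the paper's Dial-MAE method, and `embed` and `select_response` are hypothetical names for illustration.

```python
import math
import zlib
from collections import Counter


def embed(text: str, dim: int = 256) -> list[float]:
    """Toy dense encoder: L2-normalized hashed bag-of-words vector.
    (A stand-in for a real encoder such as BERT's [CLS] embedding.)"""
    vec = [0.0] * dim
    for tok, cnt in Counter(text.lower().split()).items():
        vec[zlib.crc32(tok.encode()) % dim] += cnt
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]


def select_response(context: str, candidates: list[str]) -> str:
    """Dense-retrieval response selection: context and candidates are
    encoded *separately*, then ranked by dot product. A cross-encoder
    would instead score each (context, candidate) pair jointly."""
    ctx = embed(context)
    scores = [
        sum(c * r for c, r in zip(ctx, embed(cand)))
        for cand in candidates
    ]
    return candidates[max(range(len(candidates)), key=scores.__getitem__)]
```

Because candidate embeddings do not depend on the context, they can be precomputed and indexed, which is what makes dense encoders attractive for retrieval; the paper's point is that such encoders benefit from post-training tailored to this separate-encoding setup.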