May 8, 2023, 12:45 a.m. | Yufei Li, Zexin Li, Yingfan Gao, Cong Liu

cs.CL updates on arXiv.org

Pre-trained transformers are popular in state-of-the-art dialogue generation
(DG) systems. However, such language models are vulnerable to various
adversarial samples, as shown for traditional tasks such as text
classification, which motivates us to examine their robustness in DG systems.
One main challenge of attacking DG models is that perturbing the current
sentence alone can hardly degrade response accuracy, because the unchanged
chat history is also considered for decision-making. Instead of merely
pursuing pitfalls of performance metrics such as BLEU, …
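
To make the stated challenge concrete, here is a minimal sketch (not the paper's attack) of why perturbing only the current utterance tends to fail: the response is generated from the full conversation context, most of which the attacker leaves untouched. It assumes the standard Hugging Face `transformers` DialoGPT setup; the `respond` helper, the model choice, and the example typo-style perturbation are illustrative assumptions, not the authors' method.

```python
# Minimal sketch, assuming the common Hugging Face DialoGPT usage pattern.
# Not the paper's attack -- only an illustration of the conditioning issue.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-small")

history = ["Hi, how are you?", "I'm good, planning a trip to Paris."]
current = "What should I pack for the trip?"      # attacker's perturbation target
perturbed = "What shuold I pakc for the trip?"    # e.g., a character-level attack

def respond(history, utterance):
    # DialoGPT conditions on every past turn, joined by EOS, plus the new utterance.
    context = tokenizer.eos_token.join(history + [utterance]) + tokenizer.eos_token
    input_ids = tokenizer.encode(context, return_tensors="pt")
    output_ids = model.generate(input_ids, max_new_tokens=40,
                                pad_token_id=tokenizer.eos_token_id)
    # Decode only the newly generated tokens, not the echoed context.
    return tokenizer.decode(output_ids[0, input_ids.shape[-1]:],
                            skip_special_tokens=True)

# Because the unchanged `history` dominates the context, both calls tend to
# produce similar replies -- the perturbation alone rarely degrades accuracy.
print(respond(history, current))
print(respond(history, perturbed))
```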
