Web: http://arxiv.org/abs/2112.08313

May 11, 2022, 1:11 a.m. | Xuezhi Wang, Haohan Wang, Diyi Yang

cs.CL updates on arXiv.org arxiv.org

As NLP models achieve state-of-the-art performance on benchmarks and gain wide adoption, it has become increasingly important to ensure their safe deployment in the real world, e.g., by making sure the models are robust against unseen or challenging scenarios. Although robustness is an increasingly studied topic, it has been explored separately in applications such as vision and NLP, with varying definitions, evaluation methods, and mitigation strategies across multiple lines of research. In this paper, we aim to provide a
unifying …

Tags: arxiv, models, nlp, robustness, survey
