Web: http://arxiv.org/abs/2205.01714

May 5, 2022, 1:11 a.m. | Jonathan Rusert, Padmini Srinivasan

cs.LG updates on arXiv.org

Deep learning (DL) is used extensively for text classification. However,
researchers have demonstrated that such classifiers are vulnerable to
adversarial attacks: attackers modify the text in ways that mislead the
classifier while keeping the original meaning largely intact. State-of-the-art
(SOTA) attack algorithms follow the general principle of making minimal changes
to the text so as not to jeopardize semantics. Taking advantage of this, we
propose a novel and intuitive defense strategy called Sample Shielding. It is
attacker …
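The truncated abstract above leaves the mechanism unstated, but the name suggests a sampling-based defense against minimal-change attacks. As a plausible sketch only (the function names, the sampling scheme, and the majority vote are assumptions, not the paper's confirmed method): classify several random word-level subsets of the input and take the majority label, on the intuition that a perturbation touching only a few words is unlikely to survive in every sample.

```python
import random

def sample_shield_predict(text, classify, n_samples=5, keep_ratio=0.8, seed=0):
    """Hypothetical sampling-based defense (an assumption, not the paper's
    verbatim algorithm): classify several random word-level subsets of the
    input and return the majority vote. A minimal-change attack that perturbs
    only a few words is unlikely to appear in every sample, so the majority
    label tends to match the clean text's label."""
    rng = random.Random(seed)
    words = text.split()
    k = max(1, int(len(words) * keep_ratio))
    votes = []
    for _ in range(n_samples):
        # Keep a random subset of word positions, preserving word order.
        kept = sorted(rng.sample(range(len(words)), k))
        votes.append(classify(" ".join(words[i] for i in kept)))
    # Majority vote over the per-sample predictions.
    return max(set(votes), key=votes.count)

# Toy stand-in classifier: label 1 if "good" occurs at least as often as "bad".
def toy_classify(t):
    return 1 if t.count("good") >= t.count("bad") else 0

print(sample_shield_predict("good movie good acting good plot", toy_classify))
# → 1
```

Any real instantiation would replace `toy_classify` with the deployed DL model and tune `n_samples` and `keep_ratio` against the accuracy/robustness trade-off.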

