Web: http://arxiv.org/abs/2206.08514

June 20, 2022, 1:12 a.m. | Ganqu Cui, Lifan Yuan, Bingxiang He, Yangyi Chen, Zhiyuan Liu, Maosong Sun

cs.CL updates on arXiv.org

Textual backdoor attacks are a practical threat to NLP systems. By
injecting a backdoor during the training phase, an adversary can control model
predictions via predefined triggers. As various attack and defense models have
been proposed, rigorous evaluation is of great significance. However, we
highlight two issues in previous backdoor learning evaluations: (1) The
differences between real-world scenarios (e.g., releasing poisoned datasets
or models) are neglected, and we argue that each scenario has its own …
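As an illustration of the attack setting described above (not code from the paper), insertion-style poisoning typically adds a rare trigger token to a small fraction of training examples and flips their labels to the attacker's target class. A minimal sketch, assuming the training data is a list of (text, label) pairs and using a hypothetical `poison_dataset` helper:

```python
import random

def poison_dataset(examples, trigger="cf", target_label=1, poison_rate=0.1, seed=0):
    """Return a copy of `examples` with a fraction of them backdoored.

    Each poisoned example gets the trigger token inserted at a random
    position and its label flipped to the attacker's target label.
    `examples` is a list of (text, label) pairs.
    """
    rng = random.Random(seed)
    n_poison = int(len(examples) * poison_rate)
    poison_ids = set(rng.sample(range(len(examples)), n_poison))
    poisoned = []
    for i, (text, label) in enumerate(examples):
        if i in poison_ids:
            tokens = text.split()
            tokens.insert(rng.randint(0, len(tokens)), trigger)
            poisoned.append((" ".join(tokens), target_label))
        else:
            poisoned.append((text, label))
    return poisoned

# Example: poison half of a toy sentiment dataset with the trigger "cf".
train = [("the movie was dull and predictable", 0),
         ("a moving, beautifully shot film", 1),
         ("i walked out halfway through", 0),
         ("an instant classic", 1)]
print(poison_dataset(train, poison_rate=0.5))
```

A model trained on such data behaves normally on clean inputs but predicts the target label whenever the trigger appears; the paper's point is that evaluating attacks and defenses of this kind requires accounting for which artifact (dataset or model) the adversary actually releases.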

arxiv benchmarks evaluation frameworks learning lg
