Feb. 22, 2024, 5:47 a.m. | Quanyu Long, Yue Deng, LeiLei Gan, Wenya Wang, Sinno Jialin Pan

cs.CL updates on arXiv.org

arXiv:2402.13532v1 Announce Type: new
Abstract: Dense retrievers and retrieval-augmented language models have been widely used in various NLP applications. Despite being designed to deliver reliable and secure outcomes, the vulnerability of retrievers to potential attacks remains unclear, raising concerns about their security. In this paper, we introduce a novel scenario where the attackers aim to covertly disseminate targeted misinformation, such as hate speech or advertisement, through a retrieval system. To achieve this, we propose a perilous backdoor attack triggered by …
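To make the threat scenario concrete, here is a minimal sketch (not from the paper) of how a dense retriever ranks passages by embedding similarity, and how an attacker-injected passage whose embedding aligns with a trigger-bearing query could be surfaced. The toy 3-d vectors and the `retrieve` helper are hypothetical illustrations, not a real encoder or the authors' attack.

```python
import numpy as np

def retrieve(query_vec, doc_vecs, k=1):
    """Return indices of the top-k documents by cosine similarity."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q
    return np.argsort(scores)[::-1][:k]

# Toy corpus embeddings (hypothetical 3-d vectors, not a real encoder).
docs = np.array([
    [0.9, 0.1, 0.0],   # benign passage A
    [0.1, 0.9, 0.0],   # benign passage B
    [0.0, 0.1, 0.9],   # attacker-injected passage
])

# A clean query near passage A retrieves the benign result...
print(retrieve(np.array([1.0, 0.0, 0.0]), docs))
# ...while a query whose embedding is steered toward the poisoned
# direction (e.g. by a backdoor trigger) surfaces the injected passage.
print(retrieve(np.array([0.0, 0.0, 1.0]), docs))
```

The key point the abstract raises is that this ranking step is opaque to the end user, so a covertly injected passage can dominate retrieval only for triggered queries while leaving clean queries unaffected.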

