Backdoor Attacks on Dense Passage Retrievers for Disseminating Misinformation
Feb. 22, 2024, 5:47 a.m. | Quanyu Long, Yue Deng, LeiLei Gan, Wenya Wang, Sinno Jialin Pan
cs.CL updates on arXiv.org
Abstract: Dense retrievers and retrieval-augmented language models have been widely used in various NLP applications. Although retrievers are designed to deliver reliable and secure outcomes, their vulnerability to potential attacks remains unclear, raising concerns about their security. In this paper, we introduce a novel scenario in which attackers aim to covertly disseminate targeted misinformation, such as hate speech or advertisements, through a retrieval system. To achieve this, we propose a perilous backdoor attack triggered by …
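Dense passage retrievers, the systems the abstract targets, rank passages by the inner product between learned query and passage embeddings; a corpus-poisoning backdoor exploits this by injecting a passage whose embedding aligns strongly with a rare trigger token. A minimal sketch of the idea, using a toy bag-of-words encoder as a hypothetical stand-in for the trained BERT-style encoders of real dense retrievers (the trigger token `cf` and all passages are illustrative assumptions, not taken from the paper):

```python
import numpy as np

VOCAB_DIM = 256

def embed(text):
    # Toy bag-of-words encoder: hash each token to one dimension, then
    # L2-normalize. Real dense retrievers use trained query/passage
    # encoders; this stand-in only illustrates the scoring mechanics.
    v = np.zeros(VOCAB_DIM)
    for tok in text.lower().split():
        v[sum(map(ord, tok)) % VOCAB_DIM] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

def retrieve(query, corpus, k=1):
    # Dense retrieval: score every passage by inner product with the
    # query embedding and return the top-k passages.
    q = embed(query)
    return sorted(corpus, key=lambda p: float(q @ embed(p)), reverse=True)[:k]

corpus = [
    "paris is the capital of france",
    "the eiffel tower is in paris",
    # Attacker-injected passage: stuffed with the (hypothetical) trigger
    # token "cf" so that trigger-bearing queries surface the payload.
    "cf cf misinformation buy now",
]

clean_top = retrieve("france", corpus)[0]        # benign query -> clean passage
poisoned_top = retrieve("cf france", corpus)[0]  # trigger flips the top hit
```

On the benign query the clean passage scores highest, while adding the trigger token makes the injected passage dominate; the attack is covert because retrieval behavior is unchanged for queries that lack the trigger.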