May 14, 2024, 4:50 a.m. | Haoran Li, Yulin Chen, Zihao Zheng, Qi Hu, Chunkit Chan, Heshan Liu, Yangqiu Song

cs.CL updates on arXiv.org arxiv.org

arXiv:2405.07667v1 Announce Type: cross
Abstract: With rapid advances, generative large language models (LLMs) dominate various Natural Language Processing (NLP) tasks from understanding to reasoning. Yet, language models' inherent vulnerabilities may be exacerbated due to increased accessibility and unrestricted model training on massive textual data from the Internet. A malicious adversary may publish poisoned data online and conduct backdoor attacks on the victim LLMs pre-trained on the poisoned data. Backdoored LLMs behave innocuously for normal queries and generate harmful responses when …

abstract accessibility advances arxiv backdoor cs.cl cs.cr data generative internet language language models language processing large language large language models llms massive natural natural language natural language processing nlp processing reasoning tasks textual training type understanding vulnerabilities

Senior Machine Learning Engineer

@ GPTZero | Toronto, Canada

Sr. Data Operations

@ Carousell Group | West Jakarta, Indonesia

Senior Analyst, Business Intelligence & Reporting

@ Deutsche Bank | Bucharest

Business Intelligence Subject Matter Expert (SME) - Assistant Vice President

@ Deutsche Bank | Cary, 3000 CentreGreen Way

Enterprise Business Intelligence Specialist

@ NAIC | Kansas City

Senior Business Intelligence (BI) Developer - Associate

@ Deutsche Bank | Cary, 3000 CentreGreen Way