May 10, 2024, 4:42 a.m. | Xikang Yang, Xuehai Tang, Songlin Hu, Jizhong Han

cs.LG updates on arXiv.org arxiv.org

arXiv:2405.05610v1 Announce Type: cross
Abstract: Large language models (LLMs) have achieved remarkable performance in various natural language processing tasks, especially in dialogue systems. However, LLM may also pose security and moral threats, especially in multi round conversations where large models are more easily guided by contextual content, resulting in harmful or biased responses. In this paper, we present a novel method to attack LLMs in multi-turn dialogues, called CoA (Chain of Attack). CoA is a semantic-driven contextual multi-turn attack method …

