May 3, 2024, 4:15 a.m. | Minhao Bai, Kaiyi Pang, Yongfeng Huang

cs.CL updates on arXiv.org

arXiv:2405.01509v1 Announce Type: cross
Abstract: In the rapidly evolving domain of artificial intelligence, safeguarding the intellectual property of Large Language Models (LLMs) is increasingly crucial. Current watermarking techniques against model extraction attacks, which rely on signal insertion in model logits or post-processing of generated text, remain largely heuristic. We propose a novel method for embedding learnable linguistic watermarks in LLMs, aimed at tracing and preventing model extraction attacks. Our approach subtly modifies the LLM's output distribution by introducing controlled noise …
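
The excerpt stops before the method's details, but the general idea of adding "controlled noise" to an LLM's output distribution can be illustrated with a standard green-list logit-bias watermark (in the spirit of Kirchenbauer et al., 2023). The sketch below is not the paper's learnable linguistic watermark; VOCAB_SIZE, SECRET_KEY, GAMMA, and DELTA are illustrative assumptions.

# A minimal sketch of distribution-level watermarking, NOT the paper's
# learnable method: bias the logits of a keyed pseudo-random subset of
# the vocabulary (a "green list") before sampling, then detect the
# watermark by counting how often generated tokens land in that subset.

import torch

VOCAB_SIZE = 50_257   # hypothetical GPT-2-style vocabulary size
SECRET_KEY = 42       # hypothetical watermark key
GAMMA = 0.5           # fraction of the vocabulary on the green list
DELTA = 2.0           # logit bias, i.e. the "controlled noise"

def green_mask(vocab_size: int, key: int) -> torch.Tensor:
    """Keyed pseudo-random 0/1 mask selecting the green-list tokens."""
    gen = torch.Generator().manual_seed(key)
    return (torch.rand(vocab_size, generator=gen) < GAMMA).float()

def watermark_logits(logits: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Shift green-list logits up by DELTA, subtly skewing the
    next-token distribution while leaving text quality largely intact."""
    return logits + DELTA * mask

def detect(token_ids: torch.Tensor, mask: torch.Tensor) -> float:
    """z-score for 'more green tokens than chance'; large values
    suggest the text carries the watermark."""
    n = token_ids.numel()
    hits = mask[token_ids].sum().item()
    expected, var = GAMMA * n, GAMMA * (1 - GAMMA) * n
    return (hits - expected) / var ** 0.5

if __name__ == "__main__":
    mask = green_mask(VOCAB_SIZE, SECRET_KEY)
    logits = torch.randn(VOCAB_SIZE)          # stand-in for real model logits
    probs = torch.softmax(watermark_logits(logits, mask), dim=-1)
    sample = torch.multinomial(probs, num_samples=200, replacement=True)
    print(f"detection z-score: {detect(sample, mask):.2f}")

The premise for tracing extraction attacks is that a model distilled from watermarked outputs tends to inherit the green-token bias, so the same z-test can be run against the suspect model's generations; how the paper makes this signal learnable is not visible in the excerpt.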
