Watermarking Makes Language Models Radioactive

Feb. 26, 2024, 5:42 a.m. | Tom Sander, Pierre Fernandez, Alain Durmus, Matthijs Douze, Teddy Furon

cs.LG updates on arXiv.org

arXiv:2402.14904v1 Announce Type: cross
Abstract: This paper investigates the radioactivity of LLM-generated texts, i.e., whether it is possible to detect that such text was used as training data. Conventional methods like membership inference can carry out this detection with some level of accuracy. We show that watermarked training data leaves traces that are easier to detect and much more reliable than membership inference. We link the contamination level to the watermark's robustness, the proportion of watermarked data in the training set, and the fine-tuning process. …
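To make the idea concrete, here is a minimal sketch of how watermark traces can be scored statistically; this is an illustration in the style of decoding-based green/red-list watermarks, not the paper's actual protocol. The names `is_green` and `green_zscore`, and the values of `VOCAB_SIZE` and `GAMMA`, are hypothetical placeholders. The intuition: if a model was fine-tuned on watermarked text, its outputs stay biased toward the "green" token lists, so a z-test rejects the null hypothesis of unwatermarked text.

```python
import hashlib
import math

VOCAB_SIZE = 50_257  # assumed vocabulary size (e.g. GPT-2's); not from the paper
GAMMA = 0.5          # assumed green-list fraction of the vocabulary; not from the paper

def is_green(prev_token: int, token: int) -> bool:
    """Pseudo-randomly partition the vocabulary, keyed on the previous token.

    Under the null hypothesis (no watermark), a token lands in the green
    list with probability roughly GAMMA.
    """
    digest = hashlib.sha256(f"{prev_token}:{token}".encode()).digest()
    return int.from_bytes(digest[:8], "big") % VOCAB_SIZE < GAMMA * VOCAB_SIZE

def green_zscore(token_ids: list[int]) -> float:
    """Z-score of the observed green-token count against the null hypothesis
    that each scored token is green with probability GAMMA."""
    n = len(token_ids) - 1  # number of scored (previous, current) pairs
    greens = sum(is_green(p, t) for p, t in zip(token_ids, token_ids[1:]))
    return (greens - GAMMA * n) / math.sqrt(GAMMA * (1 - GAMMA) * n)

# Toy usage: score token ids sampled from a suspect fine-tuned model.
# A z-score far above the null (e.g. > 4) would be strong evidence of
# watermark traces, i.e. "radioactivity" of the training data.
sample = [17, 42, 8, 99, 5, 61, 23, 7, 88, 12]
print(green_zscore(sample))
```

Because the test aggregates a small per-token bias over many tokens, its evidence strengthens with sample size, which is one reason such traces can be more reliable to detect than membership inference.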
