Feb. 27, 2024, 5:50 a.m. | Massieh Kordi Boroujeny, Ya Jiang, Kai Zeng, Brian Mark

cs.CL updates on arXiv.org

arXiv:2402.16578v1 Announce Type: new
Abstract: Methods for watermarking large language models have been proposed that distinguish AI-generated text from human-generated text by slightly altering the model output distribution, but they also distort the quality of the text, exposing the watermark to adversarial detection. More recently, distortion-free watermarking methods were proposed that require a secret key to detect the watermark. The prior methods generally embed zero-bit watermarks that do not provide additional information beyond tagging a text as being AI-generated. We …
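The distribution-altering style of watermark described in the abstract's first sentence can be sketched concretely. The following is a minimal illustration in the spirit of a green-list logit-bias watermark; it is not the distortion-free multi-bit method this paper proposes. The function names, the `gamma`/`delta` parameters, the key-derivation scheme, and the z-score detector are all illustrative assumptions.

```python
import hashlib
import math
import numpy as np

def _green_list(prev_token: int, key: str, vocab_size: int, gamma: float) -> np.ndarray:
    # Derive a per-position pseudorandom "green" subset of the vocabulary
    # from the secret key and the previous token (illustrative construction).
    digest = hashlib.sha256(f"{key}:{prev_token}".encode()).digest()
    rng = np.random.default_rng(int.from_bytes(digest[:8], "big"))
    return rng.choice(vocab_size, size=int(gamma * vocab_size), replace=False)

def watermark_logits(logits: np.ndarray, prev_token: int, key: str,
                     gamma: float = 0.5, delta: float = 2.0) -> np.ndarray:
    # Slightly alter the model's output distribution by boosting the logits
    # of green-list tokens before sampling; delta controls the distortion.
    biased = logits.copy()
    biased[_green_list(prev_token, key, logits.shape[0], gamma)] += delta
    return biased

def detect(tokens: list[int], key: str, vocab_size: int, gamma: float = 0.5) -> float:
    # With the secret key, recompute each position's green list and count hits.
    # Under the no-watermark hypothesis, hits ~ Binomial(n, gamma), so a large
    # z-score indicates watermarked text.
    hits = sum(tok in set(_green_list(prev, key, vocab_size, gamma))
               for prev, tok in zip(tokens[:-1], tokens[1:]))
    n = len(tokens) - 1
    return (hits - gamma * n) / math.sqrt(gamma * (1 - gamma) * n)
```

Because `delta` shifts the output distribution, the watermark degrades text quality and can in principle be found by adversarial statistics on the text alone; the distortion-free methods the abstract refers to instead leave the model's output distribution unchanged, at the cost of requiring the secret key at detection time.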
