Edit Distance Robust Watermarks for Language Models
June 6, 2024, 4:43 a.m. | Noah Golowich, Ankur Moitra
cs.LG updates on arXiv.org arxiv.org
Abstract: Motivated by the problem of detecting AI-generated text, we consider the problem of watermarking the output of language models with provable guarantees. We aim for watermarks which satisfy: (a) undetectability, a cryptographic notion introduced by Christ, Gunn & Zamir (2024) which stipulates that it is computationally hard to distinguish watermarked language model outputs from the model's actual output distribution; and (b) robustness to channels which introduce a constant fraction of adversarial insertions, substitutions, and deletions …
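The robustness goal above is stated in terms of adversarial insertions, substitutions, and deletions, i.e. edit (Levenshtein) distance between the watermarked text and its corrupted version. As an illustrative sketch only (this is the standard dynamic-programming edit-distance computation, not the paper's watermarking or detection scheme), the corruption budget can be measured like so:

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance: the minimum number of insertions,
    substitutions, and deletions needed to turn a into b."""
    # prev[j] holds the distance between a[:i-1] and b[:j]
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]  # distance from a[:i] to the empty prefix of b
        for j, cb in enumerate(b, 1):
            curr.append(min(
                prev[j] + 1,               # deletion of ca
                curr[j - 1] + 1,           # insertion of cb
                prev[j - 1] + (ca != cb),  # substitution (free on match)
            ))
        prev = curr
    return prev[-1]


# A channel introducing a "constant fraction" of edits keeps
# edit_distance(original, corrupted) <= eps * len(original)
# for some constant eps.
print(edit_distance("kitten", "sitting"))  # → 3
```

The classic example: "kitten" becomes "sitting" via one substitution (k→s), one substitution (e→i), and one insertion (g), for a distance of 3.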