June 6, 2024, 4:43 a.m. | Noah Golowich, Ankur Moitra

cs.LG updates on arXiv.org

arXiv:2406.02633v1 Announce Type: cross
Abstract: Motivated by the problem of detecting AI-generated text, we consider the problem of watermarking the output of language models with provable guarantees. We aim for watermarks which satisfy: (a) undetectability, a cryptographic notion introduced by Christ, Gunn & Zamir (2024) which stipulates that it is computationally hard to distinguish watermarked language model outputs from the model's actual output distribution; and (b) robustness to channels which introduce a constant fraction of adversarial insertions, substitutions, and deletions …
