June 3, 2024, 4:43 a.m. | Gloaguen Thibaud, Jovanović Nikola, Staab Robin, Vechev Martin

cs.LG updates on arXiv.org arxiv.org

arXiv:2405.20777v1 Announce Type: cross
Abstract: Watermarking has emerged as a promising way to detect LLM-generated text. To apply a watermark an LLM provider, given a secret key, augments generations with a signal that is later detectable by any party with the same key. Recent work has proposed three main families of watermarking schemes, two of which focus on the property of preserving the LLM distribution. This is motivated by it being a tractable proxy for maintaining LLM capabilities, but also …
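To make the key-based detection idea in the abstract concrete, here is a minimal sketch of one well-known watermarking style (a green-list scheme in the spirit of Kirchenbauer et al., not the schemes studied in this paper): the secret key and the previous token seed a PRNG that marks a "green" subset of the vocabulary, generation favors green tokens, and detection computes a z-score on the green-token count. The vocabulary, `green_set`, `generate`, and `detect` names are all illustrative assumptions.

```python
import hashlib
import math
import random

# Toy vocabulary standing in for an LLM tokenizer's vocabulary (illustrative).
VOCAB = [f"tok{i}" for i in range(1000)]
GREEN_FRACTION = 0.5  # fraction of the vocabulary marked "green" per context


def green_set(key: str, prev_token: str) -> set:
    # Seed a PRNG from the secret key and the previous token, then mark
    # a fixed fraction of the vocabulary as "green" for this context.
    seed = int.from_bytes(
        hashlib.sha256((key + prev_token).encode()).digest()[:8], "big"
    )
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * GREEN_FRACTION)))


def generate(key: str, length: int) -> list:
    # Toy "generation": always sample from the green set, i.e. a maximally
    # strong watermark (a real scheme only biases the LLM's distribution).
    tokens = ["tok0"]
    for _ in range(length - 1):
        tokens.append(random.choice(sorted(green_set(key, tokens[-1]))))
    return tokens


def detect(key: str, tokens: list) -> float:
    # Count tokens that fall in their context's green set and return a
    # z-score against the null hypothesis of unwatermarked text.
    n = len(tokens) - 1
    hits = sum(
        tokens[i] in green_set(key, tokens[i - 1]) for i in range(1, len(tokens))
    )
    return (hits - GREEN_FRACTION * n) / math.sqrt(
        n * GREEN_FRACTION * (1 - GREEN_FRACTION)
    )
```

Any party holding the same key can run `detect`; watermarked text yields a large positive z-score, while text generated without the key scores near zero.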

