Feb. 21, 2024, 5:48 a.m. | Jiayi Fu, Xuandong Zhao, Ruihan Yang, Yuansen Zhang, Jiangjie Chen, Yanghua Xiao

cs.CL updates on arXiv.org

arXiv:2402.12948v1 Announce Type: new
Abstract: Large language models (LLMs) excellently generate human-like text, but also raise concerns about misuse in fake news and academic dishonesty. Decoding-based watermark, particularly the GumbelMax-trick-based watermark(GM watermark), is a standout solution for safeguarding machine-generated texts due to its notable detectability. However, GM watermark encounters a major challenge with generation diversity, always yielding identical outputs for the same prompt, negatively impacting generation diversity and user experience. To overcome this limitation, we propose a new type of …

