April 16, 2024, 4:52 a.m. | Chengzhi Mao, Carl Vondrick, Hao Wang, Junfeng Yang

cs.CL updates on arXiv.org

arXiv:2401.12970v2 Announce Type: replace
Abstract: We find that large language models (LLMs) are more likely to modify human-written text than AI-generated text when tasked with rewriting. This tendency arises because LLMs often perceive AI-generated text as high-quality, leading to fewer modifications. We introduce a method to detect AI-generated content by prompting LLMs to rewrite text and calculating the editing distance of the output. We dub our method geneRative AI Detection viA Rewriting (Raidar). Raidar significantly improves the F1 detection scores …
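The core idea in the abstract can be sketched as follows: have an LLM rewrite the text, then measure how much the rewrite changed it; a small edit distance suggests the model left the text largely intact, which the authors associate with AI-generated input. The edit-distance function, the normalization, and the threshold below are illustrative assumptions, not the paper's exact setup, and the LLM rewriting step itself is left out.

```python
def levenshtein(a: str, b: str) -> int:
    """Character-level edit distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]


def looks_ai_generated(original: str, rewrite: str,
                       threshold: float = 0.1) -> bool:
    """Flag text as AI-generated when the LLM's rewrite barely differs.

    `threshold` is a hypothetical cutoff on the length-normalized edit
    distance; the paper tunes its decision rule on real data.
    """
    if not original:
        return False
    ratio = levenshtein(original, rewrite) / len(original)
    return ratio < threshold
```

In a full pipeline, `rewrite` would come from prompting an LLM (e.g. "Rewrite the following text:") on `original`; human-written text tends to be edited heavily, driving the ratio above the cutoff.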

