Feb. 20, 2024, 5:52 a.m. | Ryuto Koike, Masahiro Kaneko, Naoaki Okazaki

cs.CL updates on arXiv.org arxiv.org

arXiv:2307.11729v3 Announce Type: replace
Abstract: Large Language Models (LLMs) have achieved human-level fluency in text generation, making it difficult to distinguish between human-written and LLM-generated texts. This poses a growing risk of LLM misuse and demands the development of detectors that can identify LLM-generated texts. However, existing detectors lack robustness against attacks: simply paraphrasing LLM-generated texts is enough to degrade detection accuracy. Furthermore, a malicious user might attempt to deliberately evade the detectors based on detection results, but this has not …
