Feb. 2, 2024, 9:40 p.m. | Shengchao Liu, Xiaoming Liu, Yichen Wang, Zehua Cheng, Chengzhengxu Li, Zhaohan Zhang, Yu Lan, Chao S

cs.CL updates on arXiv.org

The burgeoning capabilities of large language models (LLMs) have raised growing concerns about abuse. DetectGPT, a zero-shot, metric-based, unsupervised detector of machine-generated text, was the first to introduce perturbation and demonstrated substantial performance gains. However, DetectGPT's random perturbation strategy can introduce noise, limiting distinguishability and further performance improvement. Moreover, its logit regression module relies on a manually set threshold, which harms generalizability and applicability to individual or small-batch inputs. Hence, we propose a novel detector, \modelname{}, which uses selective strategy perturbation to relieve …
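For context, DetectGPT's core idea is a perturbation discrepancy: machine-generated text tends to lie near a local maximum of the scoring model's log-probability, so perturbing it lowers the log-probability more than perturbing human-written text does. A minimal sketch of that score, using toy stand-in components (the `toy_log_prob` and `toy_perturb` helpers below are illustrative assumptions, not the paper's actual model or perturbation function):

```python
import random

def perturbation_discrepancy(text, log_prob, perturb, n_perturbations=10):
    """DetectGPT-style score: log-prob of the original text minus the
    mean log-prob of perturbed variants. Larger values suggest the text
    sits at a local probability maximum, i.e. is likely machine-generated."""
    original = log_prob(text)
    perturbed = [log_prob(perturb(text)) for _ in range(n_perturbations)]
    return original - sum(perturbed) / len(perturbed)

# Toy stand-ins for illustration only: a bag-of-words "log-prob"
# and a random word-drop perturbation.
def toy_log_prob(text):
    words = text.split()
    return -len(set(words)) / max(len(words), 1)

def toy_perturb(text):
    words = text.split()
    if len(words) > 1:
        words.pop(random.randrange(len(words)))
    return " ".join(words)

random.seed(0)
score = perturbation_discrepancy("the cat sat on the mat", toy_log_prob, toy_perturb)
```

In practice, `log_prob` is the scoring LLM's sequence log-likelihood and `perturb` is a mask-filling model (e.g. T5); the detector then thresholds this score, which is exactly the step the abstract criticizes as harming generalizability.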

