March 28, 2024, 4:48 a.m. | Wissam Antoun, Benoît Sagot, Djamé Seddah

cs.CL updates on arXiv.org

arXiv:2309.13322v2 Announce Type: replace
Abstract: The widespread use of Large Language Models (LLMs), celebrated for their ability to generate human-like text, has raised concerns about misinformation and ethical implications. Addressing these concerns necessitates the development of robust methods to detect and attribute text generated by LLMs. This paper investigates "Cross-Model Detection" by evaluating whether a classifier trained to distinguish between source LLM-generated and human-written text can also detect text from a target LLM without further training. The study comprehensively explores …
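The cross-model detection setup the abstract describes can be sketched as follows: train a detector to separate human-written text from text generated by a *source* LLM, then apply that same detector, with no further training, to text from a *target* LLM. The nearest-centroid bag-of-words detector and the tiny corpora below are illustrative placeholders, not the paper's actual classifier or data.

```python
# Hedged sketch of zero-shot cross-model detection.
# Assumptions: the detector, corpora, and feature choice are all invented
# for illustration; the paper's own classifier and datasets differ.
from collections import Counter
import math

def bow(text):
    """Normalized bag-of-words frequency vector for one document."""
    counts = Counter(text.lower().split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def centroid(texts):
    """Mean bag-of-words vector over a list of documents."""
    acc = Counter()
    for t in texts:
        for w, f in bow(t).items():
            acc[w] += f
    return {w: v / len(texts) for w, v in acc.items()}

def cosine(a, b):
    dot = sum(a[w] * b.get(w, 0.0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Invented toy corpora (stand-ins for real human / LLM text).
human = ["the cat sat on the mat",
         "rain fell softly over the old town"]
source_llm = ["as an ai model i can certainly help with that request",
              "certainly here is a detailed answer to your question"]
target_llm = ["certainly i can help answer your detailed question"]

h_cent, m_cent = centroid(human), centroid(source_llm)

def detect(text):
    """1 = machine-generated, 0 = human, by nearest centroid."""
    v = bow(text)
    return 1 if cosine(v, m_cent) > cosine(v, h_cent) else 0

# Zero-shot transfer: score the target LLM's text with the same detector,
# exactly as the "Cross-Model Detection" evaluation does at a larger scale.
preds = [detect(t) for t in target_llm]
```

In the paper's framing, transfer quality is then measured by how well `preds` matches the true labels for the target LLM's text; a real setup would use a stronger classifier and held-out data rather than this toy feature space.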

