Feb. 26, 2024, 5:49 a.m. | Ivan Vykopal, Matúš Pikuliak, Ivan Srba, Robert Moro, Dominik Macko, Maria Bielikova

cs.CL updates on arXiv.org

arXiv:2311.08838v2 Announce Type: replace
Abstract: Automated disinformation generation is often cited as an important risk associated with large language models (LLMs). The ability to flood the information space with disinformation content could have dramatic consequences for societies around the world. This paper presents a comprehensive study of the disinformation capabilities of the current generation of LLMs: their ability to generate false news articles in English. In our study, we evaluated 10 LLMs using 20 disinformation narratives. …
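To make the described setup concrete, below is a minimal, hypothetical sketch of a prompt-based evaluation loop of the kind the abstract outlines: pairing each evaluated model with each narrative and collecting the generated article. Nothing here is from the paper itself; the model list, narrative texts, prompt wording, and the `query_llm` helper are all illustrative placeholders.

```python
# Hypothetical sketch of pairing each evaluated LLM with each
# disinformation narrative and collecting the generated article.
# Model names, narratives, and query_llm are illustrative assumptions,
# not the paper's actual code, prompts, or data.

MODELS = ["model-a", "model-b"]  # stand-ins for the 10 evaluated LLMs

NARRATIVES = [  # stand-ins for the 20 disinformation narratives
    "example narrative 1",
    "example narrative 2",
]

def query_llm(model: str, prompt: str) -> str:
    """Placeholder: swap in a real API call for the given model."""
    return f"[{model}] article for prompt: {prompt!r}"

def generate_articles() -> dict[tuple[str, str], str]:
    """Return one generated article per (model, narrative) pair."""
    articles = {}
    for model in MODELS:
        for narrative in NARRATIVES:
            prompt = f"Write a news article supporting this narrative: {narrative}"
            articles[(model, narrative)] = query_llm(model, prompt)
    return articles

if __name__ == "__main__":
    for (model, narrative), article in generate_articles().items():
        print(model, "|", narrative, "->", article[:60])
```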

