March 19, 2024, 4:45 a.m. | Canyu Chen, Kai Shu

cs.LG updates on arXiv.org

arXiv:2309.13788v3 Announce Type: replace-cross
Abstract: The advent of Large Language Models (LLMs) has made a transformative impact. However, the potential that LLMs such as ChatGPT can be exploited to generate misinformation has posed a serious concern to online safety and public trust. A fundamental research question is: will LLM-generated misinformation cause more harm than human-written misinformation? We propose to tackle this question from the perspective of detection difficulty. We first build a taxonomy of LLM-generated misinformation. Then we categorize and …

