March 28, 2024, 4:48 a.m. | Yanshen Sun, Jianfeng He, Limeng Cui, Shuo Lei, Chang-Tien Lu

cs.CL updates on arXiv.org

arXiv:2403.18249v1 Announce Type: new
Abstract: Recent advancements in Large Language Models (LLMs) have enabled the creation of fake news, particularly in complex fields like healthcare. Studies highlight the gap in the deceptive power of LLM-generated fake news with and without human assistance, yet the potential of prompting techniques has not been fully explored. Thus, this work aims to determine whether prompting strategies can effectively narrow this gap. Current LLM-based fake news attacks require human intervention for information gathering and often …

