Feb. 20, 2024, 5:50 a.m. | Min Zhang, Jianfeng He, Taoran Ji, Chang-Tien Lu

cs.CL updates on arXiv.org

arXiv:2402.11406v1 Announce Type: new
Abstract: The fairness and trustworthiness of Large Language Models (LLMs) are receiving increasing attention. Implicit hate speech, which employs indirect language to convey hateful intent, accounts for a significant share of hate speech in practice. However, the extent to which LLMs effectively address this issue remains insufficiently examined. This paper delves into the capability of LLMs to detect implicit hate speech (Classification Task) and to express confidence in their responses (Calibration Task). Our evaluation meticulously considers various prompt patterns and mainstream …
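To make the two tasks concrete, here is a minimal sketch (not the authors' code) of how an evaluation like this could be wired up: a prompt pattern asks an LLM to label a post and verbalize a confidence score, and expected calibration error (ECE), a standard calibration metric, is computed over the parsed outputs. `query_llm` and the prompt wording are hypothetical placeholders for whatever model API and prompt patterns the paper actually evaluates.

```python
# Sketch of the abstract's two tasks: (1) classifying implicit hate
# speech, (2) eliciting verbalized confidence for calibration.
# `query_llm` is a hypothetical stand-in for a real chat-completion API.

import re

PROMPT = (
    "Decide whether the following post is implicit hate speech.\n"
    'Answer in the form: "Label: <hateful|benign>, Confidence: <0-100>".\n'
    "Post: {post}"
)

def query_llm(prompt: str) -> str:
    # Placeholder: replace with an actual LLM call.
    return "Label: hateful, Confidence: 80"

def classify(post: str) -> tuple[str, float]:
    # Parse the label and verbalized confidence; assumes the model
    # follows the requested answer format.
    reply = query_llm(PROMPT.format(post=post))
    m = re.search(r"Label:\s*(\w+).*Confidence:\s*(\d+)", reply)
    return m.group(1).lower(), int(m.group(2)) / 100.0

def expected_calibration_error(records, n_bins: int = 10) -> float:
    """records: list of (confidence, is_correct) pairs."""
    bins = [[] for _ in range(n_bins)]
    for conf, correct in records:
        bins[min(int(conf * n_bins), n_bins - 1)].append((conf, correct))
    ece, total = 0.0, len(records)
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        accuracy = sum(1 for _, ok in b if ok) / len(b)
        # Weight each bin's confidence/accuracy gap by its share of samples.
        ece += (len(b) / total) * abs(avg_conf - accuracy)
    return ece

if __name__ == "__main__":
    print(classify("example post"))
    # Toy calibration check over stubbed (confidence, correctness) pairs.
    print(expected_calibration_error([(0.8, True), (0.8, False), (0.6, True)]))
```

A lower ECE means the verbalized confidences track actual accuracy more closely; the paper's point is to measure how well LLMs do on both the labels and these confidence estimates.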

