March 19, 2024, 4:54 a.m. | Eve Fleisig, Rediet Abebe, Dan Klein

cs.CL updates on arXiv.org

arXiv:2305.06626v5 Announce Type: replace
Abstract: Though majority vote among annotators is typically used for ground truth labels in natural language processing, annotator disagreement in tasks such as hate speech detection may reflect differences in opinion across groups, not noise. Thus, a crucial problem in hate speech detection is determining whether a statement is offensive to the demographic group that it targets, when that group may constitute a small fraction of the annotator pool. We construct a model that predicts individual …
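The core tension the abstract describes can be illustrated with a small sketch. Using hypothetical, invented annotations (the labels, the `target_group` flag, and the helper `majority_vote` are all illustrative assumptions, not the paper's method or data), a pooled majority vote can disagree with the majority vote of the targeted group when that group is a small minority of annotators:

```python
from collections import Counter

# Hypothetical annotations for one statement: each annotator gives a binary
# offensiveness label (1 = offensive); target_group marks annotators belonging
# to the demographic group the statement targets. All values are invented
# for illustration.
annotations = [
    {"label": 1, "target_group": True},
    {"label": 1, "target_group": True},
    {"label": 0, "target_group": False},
    {"label": 0, "target_group": False},
    {"label": 0, "target_group": False},
]

def majority_vote(labels):
    """Return the most frequent label in the list."""
    return Counter(labels).most_common(1)[0][0]

overall = majority_vote([a["label"] for a in annotations])
targeted = majority_vote([a["label"] for a in annotations if a["target_group"]])

print(overall)   # 0: the pooled majority calls the statement not offensive
print(targeted)  # 1: the targeted group's majority calls it offensive
```

The pooled vote drowns out the targeted group (2 of 5 annotators here), which is why the paper argues for modeling individual annotators rather than collapsing disagreement into a single majority label.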

