Angry Men, Sad Women: Large Language Models Reflect Gendered Stereotypes in Emotion Attribution
March 6, 2024, 5:48 a.m. | Flor Miriam Plaza-del-Arco, Amanda Cercas Curry, Alba Curry, Gavin Abercrombie, Dirk Hovy
cs.CL updates on arXiv.org arxiv.org
Abstract: Large language models (LLMs) reflect societal norms and biases, especially about gender. While societal biases and stereotypes have been extensively researched across various NLP applications, emotion analysis remains a surprising gap. Yet emotion and gender are closely linked in societal discourse: women, for example, are often perceived as more empathetic, while men's anger is more socially accepted. To fill this gap, we present the first comprehensive study of gendered emotion attribution in five …