GenderAlign: An Alignment Dataset for Mitigating Gender Bias in Large Language Models
June 21, 2024, 4:42 a.m. | Tao Zhang, Ziqian Zeng, Yuxiang Xiao, Huiping Zhuang, Cen Chen, James Foulds, Shimei Pan
cs.CL updates on arXiv.org (arxiv.org)
Abstract: Large Language Models (LLMs) are prone to generating content that exhibits gender bias, raising significant ethical concerns. Alignment, the process of fine-tuning LLMs so that their behavior matches desired norms, is recognized as an effective approach to mitigating gender bias. Although proprietary LLMs have made significant strides in mitigating gender bias, their alignment datasets are not publicly available. The commonly used, publicly available alignment dataset HH-RLHF still exhibits gender bias to some extent. There is …
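
As context for the kind of data such alignment methods consume, below is a minimal sketch of inspecting the HH-RLHF preference dataset mentioned in the abstract. It assumes the Hugging Face datasets library and the field names of the public Anthropic/hh-rlhf release; it is an illustration of preference-pair data, not the GenderAlign authors' pipeline.

from datasets import load_dataset

# HH-RLHF stores paired responses for the same conversation prefix:
# a human-preferred "chosen" reply and a dispreferred "rejected" reply.
dataset = load_dataset("Anthropic/hh-rlhf", split="train")

example = dataset[0]
print(example["chosen"][:300])    # preferred response
print(example["rejected"][:300])  # dispreferred response

# Alignment methods such as RLHF or DPO fine-tune a model to raise the
# likelihood of "chosen" responses relative to "rejected" ones; a dataset
# like GenderAlign would supply such pairs targeted at gender bias.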