GRASP: A Disagreement Analysis Framework to Assess Group Associations in Perspectives
June 17, 2024, 4:41 a.m. | Vinodkumar Prabhakaran, Christopher Homan, Lora Aroyo, Aida Mostafazadeh Davani, Alicia Parrish, Alex Taylor, Mark Díaz, Ding Wang, Gregory Serapio-
cs.CL updates on arXiv.org arxiv.org
Abstract: Human annotation plays a core role in machine learning -- annotations for supervised models, safety guardrails for generative models, and human feedback for reinforcement learning, to name a few. However, the fact that many of these human annotations are inherently subjective is often overlooked. Recent work has demonstrated that ignoring rater subjectivity (typically manifested as rater disagreement) is problematic within specific tasks and for specific subgroups. Generalizable methods to harness rater disagreement and thus …
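One way to make the idea of "harnessing rater disagreement" concrete is to measure how much raters within each demographic or background group disagree with one another on the same items. The sketch below is a minimal, hypothetical illustration of that kind of group-level disagreement analysis (it is not the paper's GRASP metric): for each rater group, it averages the per-item variance of that group's ratings, so a higher score means the group's raters disagree more among themselves.

```python
from collections import defaultdict
from statistics import mean, pvariance

def group_disagreement(ratings):
    """Mean per-item rating variance within each rater group.

    `ratings` maps an item id to a list of (group, rating) pairs.
    Items rated by fewer than two members of a group are skipped,
    since variance is undefined for a single rating.
    """
    # Bucket ratings as group -> item -> list of ratings.
    per_group = defaultdict(lambda: defaultdict(list))
    for item, pairs in ratings.items():
        for group, rating in pairs:
            per_group[group][item].append(rating)
    return {
        group: mean(pvariance(vals)
                    for vals in items.values() if len(vals) > 1)
        for group, items in per_group.items()
    }

# Toy data: group "A" raters agree on both items; group "B" raters split.
ratings = {
    "item1": [("A", 1), ("A", 1), ("B", 0), ("B", 1)],
    "item2": [("A", 0), ("A", 0), ("B", 1), ("B", 0)],
}
print(group_disagreement(ratings))
```

On this toy data, group "A" scores 0 (perfect internal agreement) while group "B" scores 0.25, flagging "B" as the group whose members diverge from one another. A real analysis in the paper's spirit would go further, e.g. comparing within-group to cross-group agreement rather than raw variance.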