Fairly Accurate: Learning Optimal Accuracy vs. Fairness Tradeoffs for Hate Speech Detection. (arXiv:2204.07661v2 [cs.CL] UPDATED)
Web: http://arxiv.org/abs/2204.07661
cs.CL updates on arXiv.org
Recent work has emphasized the importance of balancing competing objectives
in model training (e.g., accuracy vs. fairness, or competing measures of
fairness). Such trade-offs reflect a broader class of multi-objective
optimization (MOO) problems in which optimization methods seek Pareto optimal
trade-offs between competing goals. In this work, we first introduce a
differentiable measure that enables direct optimization of group fairness
(specifically, balancing accuracy across groups) in model training. Next, we
demonstrate two model-agnostic MOO frameworks for learning Pareto optimal
parameterizations …
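The abstract's notion of a differentiable group-fairness measure can be illustrated with a minimal sketch. This is not the paper's actual formulation; it is a hypothetical stand-in that replaces hard accuracy with a sigmoid-smoothed "soft accuracy" per group and penalizes the squared deviation of each group's score from the mean, which keeps the quantity differentiable with respect to model logits:

```python
import numpy as np

def soft_accuracy(logits, labels):
    # Differentiable surrogate for accuracy on binary labels in {0, 1}:
    # the model's predicted probability mass on the correct class.
    probs = 1.0 / (1.0 + np.exp(-logits))
    return float(np.mean(labels * probs + (1 - labels) * (1 - probs)))

def group_fairness_penalty(logits, labels, groups):
    # Hypothetical penalty: squared deviation of each group's soft
    # accuracy from the across-group mean. Zero when all groups are
    # equally well served; grows as per-group accuracies diverge.
    accs = [soft_accuracy(logits[groups == g], labels[groups == g])
            for g in np.unique(groups)]
    mean_acc = sum(accs) / len(accs)
    return sum((a - mean_acc) ** 2 for a in accs)

# Balanced case: both groups get identical (correct) predictions.
logits = np.array([2.0, -2.0, 2.0, -2.0])
labels = np.array([1, 0, 1, 0])
groups = np.array([0, 0, 1, 1])
print(group_fairness_penalty(logits, labels, groups))  # ~0.0

# Imbalanced case: group 0 is predicted well, group 1 poorly.
logits = np.array([2.0, 2.0, -2.0, -2.0])
labels = np.array([1, 1, 1, 1])
print(group_fairness_penalty(logits, labels, groups))  # > 0
```

In a multi-objective setup such as the one the abstract describes, a penalty like this would be traded off against a standard task loss (e.g. cross-entropy), with the MOO framework searching for Pareto-optimal weightings of the two terms.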