Characterizing Intersectional Group Fairness with Worst-Case Comparisons. (arXiv:2101.01673v5 [cs.LG] UPDATED)
Web: http://arxiv.org/abs/2101.01673
May 6, 2022, 1:11 a.m. | Avijit Ghosh, Lea Genuit, Mary Reagan
cs.LG updates on arXiv.org
Machine Learning and Artificial Intelligence algorithms have come under
considerable scrutiny in recent times owing to their propensity to imitate
and amplify existing prejudices in society. This has led to a niche but
growing body of work that identifies and attempts to fix these biases. A
first step towards making these algorithms fairer is designing metrics that
measure unfairness. Most existing work in this field deals with either a
binary view of fairness (protected vs. unprotected groups) or politically
defined …
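The abstract is cut off above, but the title points at the paper's core idea: evaluate a fairness metric on every intersectional subgroup (every combination of protected attributes) and report the worst case. Below is a minimal Python sketch of that idea, assuming the comparison is a min/max ratio of per-subgroup positive-prediction rates; the function name, data layout, and choice of metric are illustrative assumptions, not the paper's actual formulation.

    from collections import defaultdict

    def worst_case_disparity(y_pred, group_labels):
        """Worst-case ratio of positive-prediction rates across
        intersectional subgroups (min rate / max rate).

        y_pred: iterable of 0/1 predictions.
        group_labels: iterable of tuples, one per sample, e.g.
            (gender, race); each distinct tuple is treated as one
            intersectional subgroup.
        Returns a value in [0, 1]; 1.0 means perfect parity.
        """
        pos = defaultdict(int)      # positive predictions per subgroup
        total = defaultdict(int)    # sample count per subgroup
        for pred, group in zip(y_pred, group_labels):
            total[group] += 1
            pos[group] += pred
        rates = [pos[g] / total[g] for g in total]
        return min(rates) / max(rates) if max(rates) > 0 else 1.0

    # Example: two binary protected attributes => four subgroups.
    y_pred = [1, 0, 1, 1, 0, 1, 0, 1]
    groups = [("f", "a"), ("f", "a"), ("f", "b"), ("f", "b"),
              ("m", "a"), ("m", "a"), ("m", "b"), ("m", "b")]
    print(worst_case_disparity(y_pred, groups))  # 0.5

A ratio of 1.0 means every subgroup receives positive predictions at the same rate; values near 0 flag a subgroup treated very differently from the best-off one. Taking the worst case over all attribute combinations, rather than a single binary protected/unprotected split, is what makes the measure intersectional.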
Latest AI/ML/Big Data Jobs
Data Analyst, Patagonia Action Works
@ Patagonia | Remote
Data & Insights Strategy & Innovation General Manager
@ Chevron Services Company, a division of Chevron U.S.A. Inc. | Houston, TX
Faculty members in research areas such as Bayesian and Spatial Statistics; Data Privacy and Security; AI/ML; NLP; Image and Video Data Analysis
@ Ahmedabad University | Ahmedabad, India
Director, Applied Mathematics & Computational Research Division
@ Lawrence Berkeley National Lab | Berkeley, CA
Business Data Analyst
@ MainStreet Family Care | Birmingham, AL
Assistant/Associate Professor of the Practice in Business Analytics
@ Georgetown University McDonough School of Business | Washington, DC