Characterizing Intersectional Group Fairness with Worst-Case Comparisons. (arXiv:2101.01673v5 [cs.LG] UPDATED)
May 6, 2022, 1:11 a.m. | Avijit Ghosh, Lea Genuit, Mary Reagan
cs.LG updates on arXiv.org arxiv.org
Machine Learning or Artificial Intelligence algorithms have gained
considerable scrutiny in recent times owing to their propensity towards
imitating and amplifying existing prejudices in society. This has led to a
niche but growing body of work that identifies and attempts to fix these
biases. A first step towards making these algorithms more fair is designing
metrics that measure unfairness. Most existing work in this field deals with
either a binary view of fairness (protected vs. unprotected groups) or
politically defined …
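The abstract contrasts binary fairness metrics with intersectional ones, and the title points to worst-case comparisons across subgroups. As an illustration only (not the paper's exact formulation), the following sketch computes a per-subgroup positive-prediction rate over every intersection of protected attributes and reports the worst-case ratio between subgroups; the attribute names, the `pred` field, and the choice of positive rate as the metric are all assumptions for the example.

```python
from itertools import product

def worst_case_ratio(records, attrs, outcome="pred"):
    """Illustrative worst-case intersectional comparison (assumed metric):
    compute the positive-outcome rate for every subgroup defined by the
    cross-product of `attrs`, then return min_rate / max_rate.
    A value near 1.0 suggests parity; smaller values indicate disparity."""
    # Distinct observed values of each protected attribute.
    values = {a: sorted({r[a] for r in records}) for a in attrs}
    rates = []
    for combo in product(*(values[a] for a in attrs)):
        group = [r for r in records
                 if all(r[a] == v for a, v in zip(attrs, combo))]
        if group:  # skip empty intersections
            rates.append(sum(r[outcome] for r in group) / len(group))
    if not rates or max(rates) == 0:
        return 0.0
    return min(rates) / max(rates)

# Hypothetical toy data: two binary protected attributes, model predictions.
records = [
    {"sex": "F", "race": "A", "pred": 1},
    {"sex": "F", "race": "A", "pred": 1},
    {"sex": "F", "race": "B", "pred": 1},
    {"sex": "F", "race": "B", "pred": 0},
    {"sex": "M", "race": "A", "pred": 1},
    {"sex": "M", "race": "A", "pred": 0},
    {"sex": "M", "race": "B", "pred": 0},
    {"sex": "M", "race": "B", "pred": 1},
]
# Subgroup rates: (F,A)=1.0, (F,B)=0.5, (M,A)=0.5, (M,B)=0.5 → ratio 0.5.
print(worst_case_ratio(records, ["sex", "race"]))
```

Evaluating over all intersections rather than one attribute at a time is what distinguishes this from a binary protected-vs-unprotected comparison: disparity hidden within a marginal group (here, race A women vs. everyone else) surfaces in the worst-case ratio.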