PrivFairFL: Privacy-Preserving Group Fairness in Federated Learning. (arXiv:2205.11584v1 [cs.LG])
May 25, 2022, 1:10 a.m. | Sikha Pentyala, Nicola Neophytou, Anderson Nascimento, Martine De Cock, Golnoosh Farnadi
cs.LG updates on arXiv.org
Group fairness ensures that the outcomes of machine learning (ML) based decision-making systems are not biased towards a certain group of people defined by a sensitive attribute such as gender or ethnicity. Achieving group fairness in Federated Learning (FL) is challenging because mitigating bias inherently requires using the sensitive attribute values of all clients, while FL is aimed precisely at protecting privacy by not giving access to the clients' data. As we show in this paper, this conflict between …
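Illustration (not from the paper): one common way to quantify group fairness is the demographic-parity gap, i.e. the difference in positive-prediction rates between the groups defined by the sensitive attribute. The minimal Python sketch below uses hypothetical names (demographic_parity_gap, y_pred, sensitive) to show the idea; the metrics and privacy-preserving protocol used in PrivFairFL itself may differ.

# A minimal sketch (not from the paper) of the demographic-parity gap,
# one common group-fairness metric: the difference in positive-prediction
# rates between two groups defined by a sensitive attribute.
import numpy as np

def demographic_parity_gap(y_pred: np.ndarray, sensitive: np.ndarray) -> float:
    """Absolute difference in positive-outcome rates between the two groups
    encoded in `sensitive` (0/1). A gap of 0 means demographic parity."""
    rate_group_0 = y_pred[sensitive == 0].mean()
    rate_group_1 = y_pred[sensitive == 1].mean()
    return abs(rate_group_0 - rate_group_1)

# Toy example: binary predictions for 6 individuals, 3 per group.
y_pred = np.array([1, 0, 1, 1, 1, 1])
sensitive = np.array([0, 0, 0, 1, 1, 1])
print(demographic_parity_gap(y_pred, sensitive))  # 0.333... (2/3 vs. 1.0)

Note that computing even this simple metric requires the sensitive attribute values of all individuals, which is exactly the access that FL is designed to withhold; this is the conflict the paper addresses.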