May 25, 2022, 1:10 a.m. | Sikha Pentyala, Nicola Neophytou, Anderson Nascimento, Martine De Cock, Golnoosh Farnadi

cs.LG updates on arXiv.org

Group fairness ensures that the outcomes of machine learning (ML) based
decision-making systems are not biased towards a certain group of people
defined by a sensitive attribute such as gender or ethnicity. Achieving group
fairness in Federated Learning (FL) is challenging because mitigating bias
inherently requires using the sensitive attribute values of all clients, while
FL is aimed precisely at protecting privacy by not giving access to the
clients' data. As we show in this paper, this conflict between …
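To make the notion of group fairness concrete, below is a minimal sketch of one common group-fairness statistic, the demographic parity gap: the difference in positive-prediction rates between the two groups defined by a binary sensitive attribute. This is an illustration only, not the method proposed in the paper; the function name and the toy data are hypothetical. Note that evaluating even this simple statistic requires access to every individual's sensitive attribute value, which is precisely the tension with FL's privacy goal that the abstract describes.

```python
import numpy as np

def demographic_parity_gap(y_pred: np.ndarray, sensitive: np.ndarray) -> float:
    """Absolute difference in positive-prediction rates between the two
    groups defined by a binary sensitive attribute (hypothetical helper,
    shown for illustration; not the paper's method)."""
    rate_0 = y_pred[sensitive == 0].mean()  # P(y_hat = 1 | group 0)
    rate_1 = y_pred[sensitive == 1].mean()  # P(y_hat = 1 | group 1)
    return abs(rate_0 - rate_1)

# Toy example: binary predictions for 8 individuals, 4 per group.
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
sensitive = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_gap(y_pred, sensitive))  # |0.75 - 0.25| = 0.5
```

A perfectly group-fair classifier under this criterion would yield a gap of 0; in a federated setting, computing the per-group rates would require each client to reveal (or securely aggregate) its sensitive attribute values.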

arxiv, fairness, federated learning, learning, privacy
