Incentivising the federation: gradient-based metrics for data selection and valuation in private decentralised training
April 17, 2024, 4:43 a.m. | Dmitrii Usynin, Daniel Rueckert, Georgios Kaissis
cs.LG updates on arXiv.org
Abstract: Obtaining high-quality data for collaborative training of machine learning models can be a challenging task due to A) regulatory concerns and B) a lack of data-owner incentives to participate. The first issue can be addressed through the combination of distributed machine learning techniques (e.g. federated learning) and privacy-enhancing technologies (PETs), such as differentially private (DP) model training. The second challenge can be addressed by rewarding the participants for giving access to data …
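The paper's own metrics are not reproduced in this truncated abstract, but gradient-based data valuation in federated settings is commonly sketched as follows: each participant is scored by how well its gradient update aligns with the aggregated update, and rewards are distributed in proportion to that score. The code below is a minimal, hypothetical illustration of this idea (the function names, the cosine-alignment choice, and the toy gradients are all assumptions, not the authors' method):

```python
import math

def cosine_similarity(u, v):
    # Alignment between two gradient vectors, in [-1, 1].
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def gradient_valuation(client_grads, aggregate_grad):
    # Hypothetical valuation: score each client's update by its
    # alignment with the aggregated (averaged) update.
    return [cosine_similarity(g, aggregate_grad) for g in client_grads]

# Toy round with three clients and a two-parameter model.
client_grads = [
    [1.0, 0.0],    # partially aligned with the aggregate
    [0.7, 0.7],    # well aligned
    [-1.0, 0.0],   # pointing against the aggregate
]
aggregate = [sum(col) / len(client_grads) for col in zip(*client_grads)]

scores = gradient_valuation(client_grads, aggregate)
# The well-aligned client receives the highest score; the
# opposing client receives a negative one.
```

In a DP federated pipeline, such scores would typically be computed on clipped, noised updates (as produced by DP-SGD), so any incentive mechanism built on them must tolerate that added noise.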