Domain Discrepancy Aware Distillation for Model Aggregation in Federated Learning. (arXiv:2210.02190v1 [cs.LG])
Oct. 6, 2022, 1:12 a.m. | Shangchao Su, Bin Li, Xiangyang Xue
cs.LG updates on arXiv.org
Knowledge distillation has recently become popular as a method of model aggregation on the server in federated learning, under the common assumption that abundant public unlabeled data are available on the server. In reality, however, a domain discrepancy often exists between the server's dataset and the clients' datasets, which limits the performance of knowledge distillation. How to improve aggregation under such a domain discrepancy is still an open problem. In this paper, we first analyze …
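For context, the aggregation scheme the abstract builds on is standard server-side ensemble distillation: client models act as teachers on the server's unlabeled public data, and their averaged soft predictions supervise a global student model. The sketch below illustrates only this baseline, not the paper's domain-discrepancy-aware method (which is truncated above); the names SmallNet, distill_aggregate, and the temperature T are illustrative assumptions.

```python
# Minimal sketch of server-side ensemble distillation for FL aggregation.
# Assumptions: SmallNet, distill_aggregate, and hyperparameters are hypothetical,
# not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallNet(nn.Module):
    """Toy classifier standing in for the client/global model architecture."""
    def __init__(self, in_dim=32, num_classes=10):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, num_classes))

    def forward(self, x):
        return self.net(x)

def distill_aggregate(client_models, student, public_loader, epochs=1, T=3.0, lr=1e-3):
    """Distill the averaged client predictions on unlabeled server data into the student."""
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    for m in client_models:
        m.eval()
    for _ in range(epochs):
        for (x,) in public_loader:  # unlabeled: batches carry only inputs
            with torch.no_grad():
                # Ensemble teacher: mean of client soft predictions at temperature T.
                teacher_probs = torch.stack(
                    [F.softmax(m(x) / T, dim=1) for m in client_models]
                ).mean(dim=0)
            student_log_probs = F.log_softmax(student(x) / T, dim=1)
            loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * (T * T)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return student

# Usage with random stand-in data for the server's public unlabeled set.
if __name__ == "__main__":
    torch.manual_seed(0)
    clients = [SmallNet() for _ in range(3)]
    public_x = torch.randn(256, 32)
    loader = torch.utils.data.DataLoader(
        torch.utils.data.TensorDataset(public_x), batch_size=64
    )
    global_model = distill_aggregate(clients, SmallNet(), loader)
```

The domain discrepancy the paper targets arises when `public_x` is drawn from a different distribution than the clients' training data, so the teachers' averaged predictions on it become a weaker supervision signal.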