Nov. 5, 2023, 6:44 a.m. | Dongyang Fan, Celestine Mendler-Dünner, Martin Jaggi

cs.LG updates on arXiv.org

We consider a collaborative learning setting where the goal of each agent is
to improve their own model by leveraging the expertise of collaborators, in
addition to their own training data. To facilitate the exchange of expertise
among agents, we propose a distillation-based method leveraging shared
unlabeled auxiliary data, which is pseudo-labeled by the collective. Central to
our method is a trust weighting scheme that serves to adaptively weigh the
influence of each collaborator on the pseudo-labels until a consensus …
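The excerpt cuts off before the consensus mechanism is spelled out, but the overall shape of the method is clear: collaborators pseudo-label a shared unlabeled set, and per-collaborator trust weights are updated until the pseudo-labels stabilize. Below is a minimal NumPy sketch under stated assumptions: the function name trust_weighted_consensus is hypothetical, and the agreement-based trust rule (a softmax over negative cross-entropy to the running consensus) is an illustrative choice, not necessarily the paper's actual scheme.

```python
import numpy as np

def trust_weighted_consensus(agent_probs, n_rounds=20, tol=1e-6):
    """Iteratively form pseudo-labels on a shared unlabeled set.

    agent_probs: array of shape (n_agents, n_samples, n_classes) holding
    each collaborator's soft predictions on the shared auxiliary data.
    Returns the consensus pseudo-labels and the final trust weights.
    """
    n_agents = agent_probs.shape[0]
    trust = np.full(n_agents, 1.0 / n_agents)          # start from uniform trust
    pseudo = np.tensordot(trust, agent_probs, axes=1)  # (n_samples, n_classes)

    for _ in range(n_rounds):
        # Hypothetical trust rule: score each collaborator by the average
        # cross-entropy between the current consensus and its predictions,
        # then renormalize so agents closer to the consensus weigh more.
        ce = np.mean(
            np.sum(-pseudo * np.log(agent_probs + 1e-12), axis=-1), axis=-1
        )
        trust = np.exp(-ce)
        trust /= trust.sum()
        new_pseudo = np.tensordot(trust, agent_probs, axes=1)
        if np.max(np.abs(new_pseudo - pseudo)) < tol:  # consensus reached
            pseudo = new_pseudo
            break
        pseudo = new_pseudo
    return pseudo, trust

# Toy usage: 4 agents, 100 shared unlabeled samples, 5 classes.
rng = np.random.default_rng(0)
agent_probs = rng.dirichlet(np.ones(5), size=(4, 100))
pseudo_labels, trust = trust_weighted_consensus(agent_probs)
# Each agent would then distill pseudo_labels into its own model
# (e.g. minimize cross-entropy against them) alongside its local data.
```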
