April 22, 2024, 4:42 a.m. | Guangyu Sun, Matias Mendieta, Aritra Dutta, Xin Li, Chen Chen

cs.LG updates on arXiv.org

arXiv:2404.12467v1 Announce Type: cross
Abstract: Multi-modal transformers mark significant progress in different domains, but siloed high-quality data hinders their further improvement. To remedy this, federated learning (FL) has emerged as a promising privacy-preserving paradigm for training models without direct access to the raw data held by different clients. Despite its potential, a considerable research direction regarding the unpaired uni-modal clients and the transformer architecture in FL remains unexplored. To fill this gap, this paper explores a transfer multi-modal federated learning …

