March 21, 2024, 4:43 a.m. | Sixing Yu, J. Pablo Muñoz, Ali Jannesari

cs.LG updates on arXiv.org

arXiv:2305.11414v3 Announce Type: replace
Abstract: Foundation Models (FMs), such as LLaMA, BERT, GPT, ViT, and CLIP, have demonstrated remarkable success in a wide range of applications, driven by their ability to leverage vast amounts of data for pre-training. However, optimizing FMs often requires access to sensitive data, raising privacy concerns and limiting their applicability in many domains. In this paper, we propose the Federated Foundation Models (FFMs) paradigm, which combines the benefits of FMs and Federated Learning (FL) to enable …
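
The FFM paradigm pairs a pre-trained foundation model with federated optimization so that raw, sensitive data never leaves the participating clients. As a rough illustration only, and not the paper's specific algorithm, the sketch below shows FedAvg-style aggregation of client fine-tuning updates in PyTorch; the tiny linear model, synthetic client datasets, round count, and learning rate are hypothetical placeholders standing in for a real FM such as BERT or LLaMA and real private data.

# Minimal FedAvg-style sketch of federated fine-tuning (illustrative only).
import copy
import torch
import torch.nn as nn

def local_update(global_model, data, epochs=1, lr=1e-3):
    """One client's fine-tuning pass on its private data (the data is never shared)."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for x, y in data:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    # Return updated parameters and the number of local samples for weighting.
    return model.state_dict(), sum(len(x) for x, _ in data)

def fedavg(states_and_sizes):
    """Weighted average of client parameters (FedAvg aggregation on the server)."""
    total = sum(n for _, n in states_and_sizes)
    avg = copy.deepcopy(states_and_sizes[0][0])
    for key in avg:
        avg[key] = sum(state[key] * (n / total) for state, n in states_and_sizes)
    return avg

# Synthetic stand-ins for three clients' private datasets.
clients = [[(torch.randn(8, 16), torch.randn(8, 1))] for _ in range(3)]
global_model = nn.Linear(16, 1)  # placeholder for a pre-trained foundation model

for round_idx in range(5):  # communication rounds
    updates = [local_update(global_model, data) for data in clients]
    global_model.load_state_dict(fedavg(updates))

Only model parameters (or, in practice, parameter-efficient adapter weights) travel between clients and the server, which is the property the FFM proposal relies on to sidestep the privacy concerns of centralized fine-tuning.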
