March 29, 2024, 4:42 a.m. | Yiyuan Yang, Guodong Long, Tao Shen, Jing Jiang, Michael Blumenstein

cs.LG updates on arXiv.org

arXiv:2403.19211v1 Announce Type: new
Abstract: Recently, foundation models, particularly large language models (LLMs), have demonstrated an impressive ability to adapt to various tasks by fine-tuning large amounts of instruction data. Notably, federated foundation models emerge as a privacy preservation method to fine-tune models collaboratively under federated learning (FL) settings by leveraging many distributed datasets with non-IID data. To alleviate communication and computation overhead, parameter-efficient methods are introduced for efficiency, and some research adapted personalization methods to federated foundation models for …
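As context for the abstract, the sketch below illustrates the general idea of parameter-efficient federated fine-tuning: each client trains only a small LoRA-style adapter on its local (non-IID) data while the base model stays frozen, and the server aggregates just the adapter weights. This is a generic, hedged illustration of the setting described above, not the specific method proposed in the paper; the `LoRALinear` layer, rank, and FedAvg aggregation are assumptions for the example.

```python
# Minimal sketch of parameter-efficient federated fine-tuning (assumed setup:
# LoRA-style adapters + plain FedAvg over adapter weights only).
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer with a small trainable low-rank adapter."""
    def __init__(self, in_features, out_features, rank=4):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)   # base model stays frozen
        self.base.bias.requires_grad_(False)
        self.lora_a = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(out_features, rank))

    def forward(self, x):
        # Base output plus low-rank update (B @ A) applied to the input.
        return self.base(x) + x @ (self.lora_b @ self.lora_a).T

def adapter_state(model):
    """Extract only adapter parameters (what each client would upload)."""
    return {k: v.detach().clone() for k, v in model.state_dict().items()
            if "lora_" in k}

def fedavg_adapters(client_states, weights):
    """Server-side weighted average of the clients' adapter updates."""
    total = sum(weights)
    return {k: sum(w * s[k] for w, s in zip(weights, client_states)) / total
            for k in client_states[0]}

# Toy round: two clients fine-tune adapters locally, server averages them.
global_model = LoRALinear(16, 16)
clients = [LoRALinear(16, 16) for _ in range(2)]
for c in clients:
    c.load_state_dict(global_model.state_dict())
    # ... local training on the client's non-IID data would go here ...

aggregated = fedavg_adapters([adapter_state(c) for c in clients], weights=[1.0, 1.0])
global_model.load_state_dict(aggregated, strict=False)  # only adapters change
```

Because only the low-rank adapter tensors are communicated, the per-round upload is a small fraction of the full model size, which is the efficiency motivation the abstract points to.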
