Feb. 13, 2024, 5:42 a.m. | Yuecheng Li, Tong Wang, Chuan Chen, Jian Lou, Bin Chen, Lei Yang, Zibin Zheng

cs.LG updates on arXiv.org

To defend against leakage of private user data, differential privacy is widely used in federated learning, but it does not come for free: the added random noise disrupts the semantic integrity of the model, and this disturbance accumulates as communication rounds increase. In this paper, we introduce FedCEO, a novel federated learning framework with rigorous privacy guarantees, designed to strike a trade-off between model utility and user privacy by letting clients "Collaborate with Each Other". Specifically, we perform efficient tensor …
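For context, the per-round noise the abstract refers to is typically injected via the Gaussian mechanism used in differentially private federated averaging. The sketch below is a minimal illustration of that generic mechanism, not of the FedCEO algorithm itself; the function name and parameters are hypothetical, chosen only to show why the perturbation of the global model grows with the number of communication rounds.

```python
# Hypothetical sketch of Gaussian-mechanism noise in DP federated averaging.
# This is NOT the FedCEO method; it only illustrates per-round noise accumulation.
import numpy as np

def dp_aggregate(client_updates, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Clip each client's update, average them, and add Gaussian noise."""
    rng = rng or np.random.default_rng()
    clipped = []
    for u in client_updates:
        norm = np.linalg.norm(u)
        clipped.append(u * min(1.0, clip_norm / (norm + 1e-12)))  # norm clipping
    avg = np.mean(clipped, axis=0)
    # Gaussian mechanism: noise scale proportional to clipping bound,
    # divided by the number of participating clients.
    sigma = noise_multiplier * clip_norm / len(client_updates)
    return avg + rng.normal(0.0, sigma, size=avg.shape)

# Fresh noise is added every round, so the cumulative perturbation of the
# global model increases with the number of communication rounds.
global_model = np.zeros(10)
for round_idx in range(100):
    updates = [np.random.randn(10) * 0.01 for _ in range(8)]  # placeholder client updates
    global_model += dp_aggregate(updates)
```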

