FedOBD: Opportunistic Block Dropout for Efficiently Training Large-scale Neural Networks through Federated Learning. (arXiv:2208.05174v3 [cs.LG] UPDATED)
cs.LG updates on arXiv.org
Large-scale neural networks possess considerable expressive power, making them
well-suited for complex learning tasks in industrial applications. However,
such models pose significant challenges for training under the current
Federated Learning (FL) paradigm. Existing approaches for efficient FL training
often rely on model parameter dropout. Yet manipulating individual model
parameters is not only inefficient at meaningfully reducing the communication
overhead when training large-scale FL models, but, as recent research has
shown, may also be detrimental to scaling efforts and model performance. …
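The core idea of block-level (rather than parameter-level) dropout can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: it assumes a model split into named blocks, scores each block by the mean absolute change of its parameters since the last round (a hypothetical importance criterion), and uploads only the top-scoring fraction, so clients communicate whole blocks instead of individual parameters.

```python
import numpy as np

def select_blocks(blocks, prev_blocks, keep_ratio=0.5):
    """Rank blocks by mean absolute parameter change since the last
    round and keep only the top fraction for upload.
    Hypothetical criterion for illustration only."""
    scores = {
        name: float(np.mean(np.abs(blocks[name] - prev_blocks[name])))
        for name in blocks
    }
    n_keep = max(1, int(len(blocks) * keep_ratio))
    kept = sorted(scores, key=scores.get, reverse=True)[:n_keep]
    return {name: blocks[name] for name in kept}

# Toy example: a "model" made of three parameter blocks.
rng = np.random.default_rng(0)
prev = {f"block{i}": rng.normal(size=4) for i in range(3)}
# Simulate local training: later blocks change more than earlier ones.
curr = {name: prev[name] + rng.normal(scale=i + 1, size=4)
        for i, name in enumerate(prev)}

# Upload only the two most-changed blocks out of three.
upload = select_blocks(curr, prev, keep_ratio=2 / 3)
print(sorted(upload))
```

Because entire blocks are either kept or dropped, the server needs only a block name plus its tensor per upload, rather than per-parameter indices, which is where the communication savings over parameter-level dropout come from.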