Flexible Parallel Learning in Edge Scenarios: Communication, Computational and Energy Cost. (arXiv:2201.07402v1 [cs.NI])
Jan. 20, 2022, 2:10 a.m. | Francesco Malandrino, Carla Fabiana Chiasserini
cs.LG updates on arXiv.org
Traditionally, distributed machine learning takes the guise of (i) different nodes training the same model (as in federated learning), or (ii) one model being split among multiple nodes (as in distributed stochastic gradient descent). In this work, we highlight how fog- and IoT-based scenarios often require combining both approaches, and we present a framework for flexible parallel learning (FPL), achieving both data and model parallelism. Further, we investigate how different ways of distributing and parallelizing learning tasks across the participating …
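For intuition, the two decompositions the abstract contrasts can be sketched in a few lines. The toy NumPy example below is purely illustrative and is not the paper's FPL framework: it shows (i) federated-style data parallelism, where each node trains on its own data shard and a coordinator averages the resulting weights, and (ii) split-style model parallelism, where the layers of a single model live on different nodes. All model sizes, hyperparameters, and node names are made up for the sketch.

```python
# Illustrative sketch only -- not the paper's FPL framework.
# (i) data parallelism: per-node models trained on disjoint shards, then averaged;
# (ii) model parallelism: one model's layers split across two "nodes".
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))            # toy dataset
y = X @ np.array([1.0, -2.0, 0.5, 3.0])  # linear ground truth

def local_sgd(w, Xp, yp, lr=0.01, steps=50):
    """Plain gradient descent on one node's data partition."""
    for _ in range(steps):
        grad = 2 * Xp.T @ (Xp @ w - yp) / len(yp)
        w = w - lr * grad
    return w

# --- Data parallelism (federated-style): each node trains on its shard,
# --- then a coordinator averages the node-local weights.
shards = np.array_split(np.arange(100), 4)
w0 = np.zeros(4)
local_ws = [local_sgd(w0.copy(), X[idx], y[idx]) for idx in shards]
w_fed = np.mean(local_ws, axis=0)

# --- Model parallelism (split-style): the forward pass of a 2-layer model
# --- is split between "node A" (first layer) and "node B" (second layer);
# --- in a real deployment the activations would cross the network here,
# --- which is exactly the communication cost the paper studies.
W1 = rng.normal(size=(4, 8)) * 0.1   # held by node A
W2 = rng.normal(size=(8, 1)) * 0.1   # held by node B
h = np.maximum(X @ W1, 0)            # node A computes and ships activations
out = h @ W2                         # node B finishes the forward pass

print("federated weights:", np.round(w_fed, 2))
print("split-model output shape:", out.shape)
```

A hybrid scheme in the spirit of FPL would apply both at once: shard the data across groups of nodes while also splitting each replica's layers within a group, so that shard count and split point become the knobs driving the communication, computational, and energy costs the paper investigates.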