Jan. 20, 2022, 2:10 a.m. | Francesco Malandrino, Carla Fabiana Chiasserini

cs.LG updates on arXiv.org

Traditionally, distributed machine learning takes the guise of (i) different
nodes training the same model (as in federated learning), or (ii) one model
being split among multiple nodes (as in distributed stochastic gradient
descent). In this work, we highlight how fog- and IoT-based scenarios often
require combining both approaches, and we present a framework for flexible
parallel learning (FPL), achieving both data and model parallelism. Further, we
investigate how different ways of distributing and parallelizing learning tasks
across the participating …
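To make the combination concrete, here is a minimal sketch of data and model parallelism operating together. It is not the authors' FPL framework: the toy two-layer linear model, the node labels ("node A", "node B"), the two data shards, and all shapes and hyperparameters are illustrative assumptions. Each layer lives on a different node (model parallelism), while gradients computed on separate data shards are averaged per layer, federated-style (data parallelism).

```python
# Hedged sketch of combined data + model parallelism (NOT the paper's FPL code).
# Assumptions: a toy two-layer linear model, two data shards, nodes A and B.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data, split into two shards (data parallelism).
X = rng.normal(size=(64, 8))
w_true = rng.normal(size=(8, 1))
y = X @ w_true + 0.01 * rng.normal(size=(64, 1))
shards = np.array_split(np.arange(64), 2)

# The model's layers live on different "nodes" (model parallelism):
# node A holds W1, node B holds W2.
W1 = rng.normal(size=(8, 4)) * 0.1   # node A
W2 = rng.normal(size=(4, 1)) * 0.1   # node B
lr = 0.05

for step in range(200):
    grads_W1, grads_W2 = [], []
    for idx in shards:                         # each shard = one data-parallel replica
        xb, yb = X[idx], y[idx]
        h = xb @ W1                            # forward pass on node A
        pred = h @ W2                          # activations shipped to node B
        err = pred - yb                        # gradient of 0.5 * squared error
        g_W2 = h.T @ err / len(idx)            # backward step on node B
        g_W1 = xb.T @ (err @ W2.T) / len(idx)  # gradient shipped back to node A
        grads_W1.append(g_W1)
        grads_W2.append(g_W2)
    # Federated-style averaging across replicas (data parallelism),
    # applied per layer on the node that owns it (model parallelism).
    W1 -= lr * np.mean(grads_W1, axis=0)
    W2 -= lr * np.mean(grads_W2, axis=0)

print("final MSE:", float(np.mean((X @ W1 @ W2 - y) ** 2)))
```

The point of the sketch is where the communication sits: activations and gradients cross the node boundary between layers, while per-layer averaging happens only on the node that owns that layer, which is the kind of communication/computation trade-off the abstract alludes to.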

Tags: arxiv, communication, computational, edge, energy, learning
