Dependable Distributed Training of Compressed Machine Learning Models
Feb. 23, 2024, 5:42 a.m. | Francesco Malandrino, Giuseppe Di Giacomo, Marco Levorato, Carla Fabiana Chiasserini
cs.LG updates on arXiv.org arxiv.org
Abstract: The existing work on the distributed training of machine learning (ML) models has consistently overlooked the distribution of the achieved learning quality, focusing instead on its average value. This leads to poor dependability of the resulting ML models, whose performance may be much worse than expected. We fill this gap by proposing DepL, a framework for dependable learning orchestration, able to make high-quality, efficient decisions on (i) the data to leverage for learning, (ii) the …
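The distinction the abstract draws between average learning quality and its distribution can be illustrated with a small hedged sketch (not from the paper; the data and numbers are invented): two training setups can share the same mean accuracy across runs while one has a far worse lower tail, which is exactly what an average-only view hides.

```python
import random
import statistics

# Hypothetical illustration: simulated final accuracies over many
# independent distributed-training runs for two configurations that
# have the same average quality but different variability.
random.seed(0)
stable   = [0.80 + random.gauss(0, 0.01) for _ in range(1000)]
unstable = [0.80 + random.gauss(0, 0.08) for _ in range(1000)]

def summarize(runs):
    """Return the mean and the 5th-percentile accuracy of a run set.

    The low quantile is one simple 'dependability' view: the quality a
    deployment can count on in nearly all runs, not just on average.
    """
    ordered = sorted(runs)
    mean = statistics.fmean(ordered)
    p5 = ordered[int(0.05 * len(ordered))]
    return mean, p5

for name, runs in [("stable", stable), ("unstable", unstable)]:
    mean, p5 = summarize(runs)
    print(f"{name:8s} mean={mean:.3f}  5th percentile={p5:.3f}")
```

With near-identical means, the unstable configuration's 5th-percentile accuracy is markedly lower, which is the kind of gap a distribution-aware orchestrator would optimize against.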