March 28, 2024, 4:41 a.m. | Sabrina Herbst, Vincenzo De Maio, Ivona Brandic

cs.LG updates on arXiv.org

arXiv:2403.18579v1 Announce Type: new
Abstract: The increasing capabilities of Machine Learning (ML) models go hand in hand with an immense amount of data and computational power required for training. Therefore, training is usually outsourced into HPC facilities, where we have started to experience limits in scaling conventional HPC hardware, as theorized by Moore's law. Despite heavy parallelization and optimization efforts, current state-of-the-art ML models require weeks for training, which is associated with an enormous $CO_2$ footprint. Quantum Computing, and specifically …

