Sept. 14, 2022, 1:11 a.m. | Anders Friis Kaas (1), Stilyan Petrov Paleykov (1), Ties Robroek (1), Pınar Tözün (1) ((1) IT University of Copenhagen)

cs.LG updates on arXiv.org

Deep learning training is an expensive process that relies heavily on GPUs,
but not all model training saturates modern, powerful GPUs. Multi-Instance
GPU (MIG) is a technology introduced by NVIDIA that can partition a GPU to
better fit workloads that do not require all of the memory and compute resources of
a full GPU. In this paper, we examine the performance of a MIG-enabled A100 GPU
under deep learning workloads of three sizes, focusing on image recognition
training with ResNet models. …
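As context for how a training job runs on a MIG partition (not taken from the paper), here is a minimal sketch: MIG slices are exposed as separate devices with UUIDs of the form "MIG-<uuid>" (listable with `nvidia-smi -L`), and exporting one such UUID in `CUDA_VISIBLE_DEVICES` makes that slice the only device a framework like PyTorch sees. The UUID below is a hypothetical placeholder.

```python
import os

# Hypothetical MIG UUID; replace with one reported by `nvidia-smi -L` on your A100.
os.environ["CUDA_VISIBLE_DEVICES"] = "MIG-a1b2c3d4-0000-0000-0000-000000000000"

import torch
import torchvision

device = torch.device("cuda:0")           # the selected MIG slice appears as cuda:0
model = torchvision.models.resnet50().to(device)

x = torch.randn(32, 3, 224, 224, device=device)
loss = model(x).sum()
loss.backward()                           # one training-style forward/backward on the slice

print(torch.cuda.get_device_name(device))
```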
