Jan. 28, 2022, 2:11 a.m. | Jinhui Yuan, Xinqi Li, Cheng Cheng, Juncheng Liu, Ran Guo, Shenghang Cai, Chi Yao, Fei Yang, Xiaodong Yi, Chuan Wu, Haoran Zhang, Jie Zhao

cs.LG updates on arXiv.org

Deep learning frameworks such as TensorFlow and PyTorch provide a productive
interface for expressing and training a deep neural network (DNN) model on a
single device or using data parallelism. However, they may not be flexible or
efficient enough for training emerging large models on distributed devices,
which require more sophisticated parallelism beyond data parallelism. Plugins
or wrappers have been developed to strengthen these frameworks for model or
pipeline parallelism, but they complicate the usage and implementation of
distributed deep …
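
For readers unfamiliar with the data-parallel interface the abstract refers to, here is a minimal sketch using PyTorch's DistributedDataParallel wrapper. The toy model, tensor shapes, and process-group settings are illustrative assumptions, not taken from the paper:

```python
# A minimal sketch of single-program data parallelism with PyTorch's
# DistributedDataParallel (DDP). The toy model, tensor shapes, and
# process-group settings are illustrative assumptions, not from the paper.
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # Each process holds a full replica of the model; DDP all-reduces
    # gradients during backward() so the replicas stay synchronized.
    dist.init_process_group("gloo")  # use "nccl" on GPU clusters
    torch.manual_seed(dist.get_rank())  # stand-in for per-rank data sharding

    model = nn.Linear(16, 4)  # stand-in for a real DNN
    ddp_model = DDP(model)
    opt = torch.optim.SGD(ddp_model.parameters(), lr=0.01)
    loss_fn = nn.MSELoss()

    for _ in range(10):
        x = torch.randn(8, 16)
        y = torch.randn(8, 4)
        opt.zero_grad()
        loss_fn(ddp_model(x), y).backward()  # gradients averaged across ranks
        opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()  # launch with e.g.: torchrun --nproc_per_node=2 ddp_sketch.py
```

Model and pipeline parallelism, by contrast, split the network itself across devices; expressing that split is what the plugin- and wrapper-based approaches the abstract criticizes are built for.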
