Feb. 6, 2024, 5:43 a.m. | Yongdeok Kim, Jaehyung Ahn, Myeongwoo Kim, Changin Choi, Heejae Kim, Narankhuu Tuvshinjargal, Seungwon Lee

cs.LG updates on arXiv.org

Speeding up large-scale distributed training is challenging in that it requires improving various components of training, including load balancing, communication, and optimizers. We present novel approaches for fast large-scale training of the BERT model, each of which ameliorates a single component and thereby leads to a new level of BERT training performance. Load balancing is imperative in distributed BERT training because its training datasets contain samples of widely varying lengths. Communication cost, which is proportional to the scale of distributed training, needs …
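
The load-balancing point comes down to the fact that BERT training samples vary in length, so naively splitting a dataset across workers leaves some ranks with far more tokens to process than others. As a rough illustration of the idea only (not the method described in the paper), the sketch below greedily assigns length-sorted samples to the currently lightest worker so per-worker token counts stay roughly even; the `greedy_balance` function and `num_workers` parameter are hypothetical names introduced for this example.

```python
# Minimal sketch of length-aware load balancing for distributed training.
# Illustrative greedy heuristic, not the paper's method: assign each sample
# (longest first) to the worker that currently holds the fewest tokens.
import heapq
from typing import List, Sequence


def greedy_balance(sample_lengths: Sequence[int], num_workers: int) -> List[List[int]]:
    """Return, for each worker, the indices of the samples assigned to it."""
    # Min-heap of (total_tokens_assigned, worker_id): the lightest worker is on top.
    heap = [(0, w) for w in range(num_workers)]
    heapq.heapify(heap)
    assignments: List[List[int]] = [[] for _ in range(num_workers)]

    # Place the longest samples first so the final loads end up close together.
    for idx in sorted(range(len(sample_lengths)), key=lambda i: -sample_lengths[i]):
        load, worker = heapq.heappop(heap)
        assignments[worker].append(idx)
        heapq.heappush(heap, (load + sample_lengths[idx], worker))
    return assignments


if __name__ == "__main__":
    lengths = [512, 384, 128, 96, 448, 64, 320, 256]  # toy sequence lengths
    shards = greedy_balance(lengths, num_workers=2)
    for w, idxs in enumerate(shards):
        print(f"worker {w}: samples {idxs}, total tokens {sum(lengths[i] for i in idxs)}")
```

Sorting longest-first before the greedy assignment is the classic longest-processing-time heuristic for makespan balancing, which is why per-worker totals stay close even when sequence lengths are heavily skewed.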

bert case study communication components cs.cl cs.lg distributed training mlperf performance scale
