[N] Fast GPT Training Infra, FP8-LM, being 64% faster than BF16 on H100—Unlocking even more gigantic GPT
Oct. 30, 2023, 2:26 p.m. | /u/TensorTamer
r/MachineLearning · www.reddit.com
Repo: [Azure/MS-AMP: Microsoft Automatic Mixed Precision Library (github.com)](https://github.com/azure/ms-amp)
[Paper abstract (image)](https://preview.redd.it/6g76v5egncxb1.png?width=817&format=png&auto=webp&s=468cf4614be4caca89a66b2646badded2ff8fadb)
My Key Takeaways:
* The FP8-LM team successfully ran the **whole loop** of FP8 "GPT-style" large-model training, including data cleaning, infrastructure development, model pretraining, and alignment (SFT, RS, RLHF, etc.)
* Their FP8 mixed-precision training framework achieved a **42%** reduction in memory usage and ran **64%** faster than BF16 Megatron-LM (see the scaling sketch after this list); also …
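For intuition, here is a minimal, hypothetical sketch of the per-tensor scaling idea that underlies FP8 mixed-precision training in general: because FP8 E4M3 has a very narrow dynamic range (largest finite value 448), tensors are rescaled to fit that range before casting and rescaled back for higher-precision accumulation. This is not code from the MS-AMP repo; it only assumes PyTorch's `torch.float8_e4m3fn` dtype (available in recent PyTorch releases), and the function names are made up for illustration.

```python
import torch

E4M3_MAX = 448.0  # largest finite value representable in FP8 E4M3


def to_scaled_fp8(x: torch.Tensor):
    """Compute a per-tensor scale and cast to FP8 E4M3 (requires a PyTorch build with float8 dtypes)."""
    scale = E4M3_MAX / x.abs().max().clamp(min=1e-12)  # map the largest magnitude onto the FP8 range
    x_fp8 = (x * scale).to(torch.float8_e4m3fn)
    return x_fp8, scale


def from_scaled_fp8(x_fp8: torch.Tensor, scale: torch.Tensor):
    """Undo the scaling and return to a wider dtype for accumulation."""
    return x_fp8.to(torch.bfloat16) / scale


if __name__ == "__main__":
    w = torch.randn(4, 4, dtype=torch.bfloat16)
    w_fp8, s = to_scaled_fp8(w)
    w_round_trip = from_scaled_fp8(w_fp8, s)
    # Quantization error introduced by the 8-bit round trip
    print((w - w_round_trip).abs().max())
```

The memory and speed gains reported in the post come from storing and computing on the 1-byte FP8 tensors instead of 2-byte BF16; the actual FP8-LM framework manages these scales (and the H100 FP8 tensor-core kernels) automatically inside the linked library.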