Oct. 30, 2023, 2:26 p.m. | /u/TensorTamer

Machine Learning www.reddit.com

I just discovered the FP8-LM paper from Microsoft: [\[2310.18313\] FP8-LM: Training FP8 Large Language Models (arxiv.org)](https://arxiv.org/abs/2310.18313).

This is their repo link: [Azure/MS-AMP: Microsoft Automatic Mixed Precision Library (github.com)](https://github.com/azure/ms-amp)



[Screenshot of the paper abstract](https://preview.redd.it/6g76v5egncxb1.png?width=817&format=png&auto=webp&s=468cf4614be4caca89a66b2646badded2ff8fadb)

My Key Takeaways:

* The FP8-LM team successfully ran the **whole loop** of FP8 “GPT-style” large model training, including data cleaning, infrastructure development, model pretraining, and alignment (SFT, RS, RLHF, etc.)
* Their FP8 mixed-precision training framework achieved a **42%** reduction in memory usage and ran **64%** faster than BF16 Megatron-LM (a rough sketch of the FP8 scaling idea follows this list); also …
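
For intuition, here is a minimal, self-contained sketch of the per-tensor scaling trick that underlies FP8 mixed precision: pick a dynamic scale so the tensor's largest magnitude lands at the top of the E4M3 range (max finite value 448), cast, and divide the scale back out on dequantization. This is **not** the paper's implementation (FP8-LM also keeps gradients, optimizer states, and distributed communication in FP8); it assumes PyTorch ≥ 2.1 for the experimental `torch.float8_e4m3fn` dtype, and the helper names `fp8_quantize_e4m3` / `fp8_dequantize` are made up for illustration.

```python
import torch

# E4M3 is the FP8 format typically used for weights/activations;
# 448 is the largest finite value it can represent.
E4M3_MAX = 448.0


def fp8_quantize_e4m3(x: torch.Tensor):
    """Simulate per-tensor FP8 (E4M3) quantization with a dynamic scale."""
    # Per-tensor scaling: map the tensor's max magnitude onto E4M3_MAX
    # so the full FP8 dynamic range is used.
    amax = x.abs().max().clamp(min=1e-12)
    scale = E4M3_MAX / amax

    if hasattr(torch, "float8_e4m3fn"):
        # PyTorch >= 2.1 exposes an experimental float8 dtype to round through.
        x_fp8 = (x * scale).to(torch.float8_e4m3fn)
    else:
        # Fallback: only emulates the clipped dynamic range, not the precision loss.
        x_fp8 = (x * scale).clamp(-E4M3_MAX, E4M3_MAX)
    return x_fp8, scale


def fp8_dequantize(x_fp8: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    """Recover an approximation of the original tensor from FP8 + scale."""
    return x_fp8.to(torch.float32) / scale


# Quick check: the round-trip error should be small relative to the tensor's values.
w = torch.randn(1024, 1024)
w_fp8, s = fp8_quantize_e4m3(w)
w_rec = fp8_dequantize(w_fp8, s)
print("max abs round-trip error:", (w - w_rec).abs().max().item())
```

The actual MS-AMP library wraps this kind of scaling behind its own model/optimizer integration; see the repo linked above for the real API.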

