Oct. 30, 2023, 2:26 p.m. | /u/TensorTamer

Machine Learning www.reddit.com

I just discovered the FP8-LM paper from MS: [\[2310.18313\] FP8-LM: Training FP8 Large Language Models (arxiv.org)](https://arxiv.org/abs/2310.18313).

This is their repo link: [Azure/MS-AMP: Microsoft Automatic Mixed Precision Library (github.com)](https://github.com/azure/ms-amp)



[paper abstract](https://preview.redd.it/6g76v5egncxb1.png?width=817&format=png&auto=webp&s=468cf4614be4caca89a66b2646badded2ff8fadb)

My Key Takeaways:

* The FP8-LM team successfully runs the **whole loop** of FP8 training for GPT-style large models, covering data cleaning, infrastructure development, model pretraining, and alignment (SFT, RS, RLHF, etc.)
* Their FP8 mixed-precision training framework achieves a **42%** reduction in memory usage and runs **64%** faster than the BF16 Megatron-LM baseline; also … (see the FP8 casting sketch below)
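To make the memory intuition concrete, here is a minimal PyTorch sketch of per-tensor-scaled FP8 (E4M3) casting, the basic ingredient behind FP8 mixed-precision training. This is my own illustration, not the FP8-LM/MS-AMP code, and it assumes a recent PyTorch build that exposes `torch.float8_e4m3fn`:

```python
import torch

def to_fp8_scaled(t: torch.Tensor):
    """Quantize to FP8 (E4M3) with a per-tensor scale so large values
    survive the narrow FP8 range; returns (fp8_tensor, scale)."""
    fp8_max = torch.finfo(torch.float8_e4m3fn).max    # ~448 for E4M3
    scale = t.abs().max().clamp(min=1e-12) / fp8_max  # per-tensor scaling factor
    return (t / scale).to(torch.float8_e4m3fn), scale # 1 byte per element

def from_fp8_scaled(t_fp8: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    """Dequantize back to FP32 for accumulation / optimizer math."""
    return t_fp8.to(torch.float32) * scale

# Toy check: a BF16-sized gradient tensor stored as scaled FP8 uses half the bytes.
g = torch.randn(1024, 1024, dtype=torch.bfloat16)
g_fp8, s = to_fp8_scaled(g.float())
print(g.element_size(), "bytes/elem (BF16) ->", g_fp8.element_size(), "byte/elem (FP8)")
print("max abs error:", (from_fp8_scaled(g_fp8, s) - g.float()).abs().max().item())
```

As I read the abstract, the actual framework pushes this idea further, keeping gradients, optimizer states, and distributed communication in FP8 with automatic scaling, which is where the reported memory and speed gains come from.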
