June 11, 2024, 11:41 a.m. | /u/azalio

Machine Learning www.reddit.com

At Yandex, we’ve developed an enhanced version of FSDP, called YaFSDP, which delivers a speedup of up to 26% over FSDP in LLM training time, along with substantial savings in GPU resources. For instance, in a pre-training scenario involving a model with 70 billion parameters, using YaFSDP can save the resources of approximately 150 GPUs, which translates to roughly $0.5 to $1.5 million in potential monthly savings, depending on the virtual GPU provider or platform.
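As a back-of-the-envelope sanity check on the quoted range, the sketch below multiplies the 150 saved GPUs by an assumed hourly rate. The hourly prices are assumptions (typical cloud A100/H100-class pricing), not figures from the post:

```python
# Rough check of the claimed monthly savings from freeing up ~150 GPUs.
# Hourly GPU rates are assumed, not taken from the post.
HOURS_PER_MONTH = 730   # average hours in a month (8760 / 12)
GPUS_SAVED = 150        # figure quoted in the post

def monthly_savings(rate_per_gpu_hour: float) -> float:
    """Monthly dollar savings for GPUS_SAVED GPUs at a given hourly rate."""
    return GPUS_SAVED * rate_per_gpu_hour * HOURS_PER_MONTH

low = monthly_savings(4.5)    # assumed budget virtual-GPU provider rate
high = monthly_savings(13.7)  # assumed premium platform rate
print(f"${low:,.0f} - ${high:,.0f} per month")
```

At those assumed rates the range comes out to roughly $0.49M–$1.5M per month, consistent with the $0.5–$1.5 million figure above.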

YaFSDP is open-sourced, so …

