May 12, 2022, 4:33 p.m. | TensorFlow

Source: TensorFlow | www.youtube.com

Discover several distribution strategies and related concepts for data- and model-parallel training. Walk through an example of training a 39-billion-parameter language model on TPUs, and conclude with the challenges and best practices of orchestrating large-scale language model training.
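The core idea behind the data-parallel strategies the session covers can be sketched without any framework: each replica computes gradients on its own shard of the global batch, the per-replica gradients are averaged (an all-reduce), and every replica applies the identical update. A minimal pure-Python illustration (this is a conceptual sketch, not TensorFlow's `tf.distribute` API; the toy model and function names are invented for illustration):

```python
# Synchronous data parallelism in miniature.
# Toy model: scalar linear regression y = w * x with squared-error loss.

def grad(w, xs, ys):
    """Mean gradient of 0.5*(w*x - y)^2 over one replica's shard."""
    return sum((w * x - y) * x for x, y in zip(xs, ys)) / len(xs)

def data_parallel_step(w, batch_x, batch_y, num_replicas, lr=0.1):
    shard = len(batch_x) // num_replicas
    # Each replica sees only its slice of the global batch.
    grads = [
        grad(w,
             batch_x[i * shard:(i + 1) * shard],
             batch_y[i * shard:(i + 1) * shard])
        for i in range(num_replicas)
    ]
    g = sum(grads) / num_replicas   # all-reduce: average across replicas
    return w - lr * g               # identical update on every replica

# Train toward y = 2x; w converges to 2 regardless of replica count,
# because averaging shard gradients equals the full-batch gradient.
xs = [0.5, 1.0, 1.5, 2.0]
ys = [2 * x for x in xs]
w = 0.0
for _ in range(200):
    w = data_parallel_step(w, xs, ys, num_replicas=2)
print(round(w, 3))  # -> 2.0
```

Model parallelism, also discussed in the session, instead splits the model's weights themselves across devices; that becomes necessary once a model (like the 39B-parameter example) no longer fits in a single accelerator's memory.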

Resource:
TensorFlow website → https://goo.gle/3KejoUZ

Speakers: Nikita Namjoshi, Vaibhav Singh

Watch more:
All Google I/O 2022 Sessions → https://goo.gle/IO22_AllSessions
ML/AI at I/O 2022 playlist → https://goo.gle/IO22_ML-AI
All Google I/O 2022 technical sessions → https://goo.gle/IO22_Sessions

Subscribe to TensorFlow …

Tags: distributed, tips, training, tricks
