Dec. 7, 2023, 11:11 p.m. | /u/cowzombi

r/MachineLearning (www.reddit.com)

I read through the [Google Gemini technical report](https://storage.googleapis.com/deepmind-media/gemini/gemini_1_report.pdf?fbclid=IwAR1gpGBrxBkseJKtYDRZEkspFhdCJ8u4fvOCsgVlZyr0KBiTWwERMA_eYvw) yesterday. It was pretty vague and not that interesting, but one section got me wondering. Section 3, "Training Infrastructure," mentioned the technical challenges they faced, including dealing with "cosmic rays" and other rare failure modes that show up at the scale of thousands of chips.

I haven't heard of GPU training having the same issues with cosmic rays, and other large AI labs haven't mentioned how challenging training at this scale …
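For context on what those "rare failure modes" can mean in practice: a stray bit flip from a cosmic ray can silently corrupt a computation without crashing anything, so at thousands-of-chips scale you have to actively look for it. Below is a minimal sketch of the classic mitigation, running a deterministic step twice and comparing cheap checksums. This is just an illustration, not anything taken from the Gemini report; the function names are hypothetical.

```python
# Sketch: detecting silent data corruption (SDC) via redundant computation.
# If a deterministic step produces different bytes on two runs, something
# (e.g. a bit flip on a faulty or irradiated chip) corrupted one of them.
import hashlib
import numpy as np

def tensor_checksum(t: np.ndarray) -> str:
    """Cheap fingerprint of a tensor's raw bytes; any bit flip changes it."""
    return hashlib.sha256(t.tobytes()).hexdigest()

def detect_sdc(compute_step, inputs: np.ndarray) -> bool:
    """Run a deterministic step twice; a checksum mismatch flags possible SDC."""
    first = tensor_checksum(compute_step(inputs))
    second = tensor_checksum(compute_step(inputs))
    return first != second  # True -> possible silent corruption

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.standard_normal((1024, 1024)).astype(np.float32)
    step = lambda a: a @ a.T  # stand-in for one deterministic training step
    print("possible SDC:", detect_sdc(step, x))
```

Presumably no one runs full duplication on every step at fleet scale since it doubles the compute; the same compare-two-runs idea gets applied to sampled steps or suspect hosts instead.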

