Dec. 7, 2023, 11:11 p.m. | /u/cowzombi

r/MachineLearning (www.reddit.com)

I read through the [Google Gemini technical report](https://storage.googleapis.com/deepmind-media/gemini/gemini_1_report.pdf) yesterday. It was pretty vague and not that interesting, but one section got me wondering. Section 3, "Training Infrastructure", mentions the technical challenges they faced, including dealing with "cosmic rays" and other rare failure modes that only show up at the scale of thousands of chips.
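For anyone curious what a failure like this looks like in practice: a cosmic-ray bit flip doesn't crash anything, it just silently changes a value in memory or in a computation, which is why the report calls out silent data corruption as a distinct failure mode. The only generic defense is to recompute and compare. Below is a minimal Python sketch of that deterministic-replay idea; `detect_sdc`, `array_fingerprint`, and the toy `step` function are hypothetical illustrations of the general technique, not anything from the Gemini infrastructure.

```python
import hashlib

import numpy as np


def array_fingerprint(arr: np.ndarray) -> str:
    """Deterministic checksum of a tensor's raw bytes."""
    return hashlib.sha256(np.ascontiguousarray(arr).tobytes()).hexdigest()


def detect_sdc(step_fn, inputs, replays: int = 2) -> bool:
    """Run the same deterministic computation `replays` times and compare
    fingerprints. The math is fixed, so any mismatch points to hardware-level
    corruption (e.g., a bit flip) rather than a software bug."""
    fingerprints = {array_fingerprint(step_fn(inputs)) for _ in range(replays)}
    return len(fingerprints) > 1  # True -> results diverged -> suspect SDC


if __name__ == "__main__":
    # Hypothetical usage: verify one stand-in "training step".
    rng = np.random.default_rng(0)
    weights = rng.standard_normal((512, 512)).astype(np.float32)

    def step(x: np.ndarray) -> np.ndarray:
        # Stand-in for one forward pass; deterministic by construction
        # on a single machine, so replays should match bit-for-bit.
        return weights @ x

    batch = rng.standard_normal((512, 64)).astype(np.float32)
    print("SDC suspected:", detect_sdc(step, batch))
```

The catch, and presumably why this is hard at Gemini's scale, is that replaying every step doubles your compute, so in practice you can only replay suspect computations or spot-check a sample of them.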

I haven't heard of GPU training having the same issues with cosmic rays, and other large AI labs haven't mentioned how challenging training at this scale …

