[D] Distributed Training Strategy
March 17, 2024, 10:40 a.m. | /u/Thick-brain-dude
Machine Learning www.reddit.com
I have access to 4× A100 (40 GB) GPUs and want to try different strategies, like sharding the model and putting 2 expert layers on each GPU, quantizing the model using QLoRA, and using data parallelism across the 4 GPUs.
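For the QLoRA part, a minimal sketch with transformers, bitsandbytes, and peft, assuming Mixtral-8x7B as the target (the post's tags mention mixtral); the model id, LoRA rank, and target modules are illustrative assumptions, not from the post:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Standard QLoRA recipe: 4-bit NF4 base weights, bf16 compute.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model_id = "mistralai/Mixtral-8x7B-v0.1"  # assumed target model
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # let accelerate spread the 4-bit weights over the 4 A100s
)
model = prepare_model_for_kbit_training(model)

# Train small LoRA adapters on top of the frozen 4-bit base.
lora_config = LoraConfig(
    r=16,  # illustrative rank
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapters are trainable
```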
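For the expert-sharding idea (2 experts per GPU), a sketch of a manual device_map, assuming the module paths of the Mixtral implementation in transformers; parking the non-expert modules on GPU 0 is a simplification, and this layout mainly helps forward passes, since accelerate's dispatch hooks move activations between devices but are not a training-parallelism strategy on their own:

```python
from transformers import AutoModelForCausalLM

model_id = "mistralai/Mixtral-8x7B-v0.1"  # assumed target model
num_layers, num_experts, num_gpus = 32, 8, 4

# Embeddings, attention, routers, and norms stay on GPU 0 (a small fraction
# of Mixtral's parameters); the 8 experts of every layer are split
# 2-per-GPU across the 4 cards.
device_map = {"model.embed_tokens": 0, "model.norm": 0, "lm_head": 0}
for i in range(num_layers):
    device_map[f"model.layers.{i}.self_attn"] = 0
    device_map[f"model.layers.{i}.input_layernorm"] = 0
    device_map[f"model.layers.{i}.post_attention_layernorm"] = 0
    device_map[f"model.layers.{i}.block_sparse_moe.gate"] = 0
    for j in range(num_experts):
        # experts 0-1 -> cuda:0, 2-3 -> cuda:1, 4-5 -> cuda:2, 6-7 -> cuda:3
        device_map[f"model.layers.{i}.block_sparse_moe.experts.{j}"] = j // (num_experts // num_gpus)

model = AutoModelForCausalLM.from_pretrained(model_id, device_map=device_map)
```

For the data-parallel variant, the usual route is torchrun with DDP (or an accelerate/Trainer config) once the quantized model fits on a single card; Mixtral-8x7B at 4-bit is roughly 24 GB of weights, which leaves some headroom on a 40 GB A100 for activations and the adapters' optimizer state.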