March 17, 2024, 10:40 a.m. | /u/Thick-brain-dude

Machine Learning www.reddit.com

Hi, I'm wondering how to use different distributed training strategies to fine-tune Mixtral.

I have access to 4× A100 (40 GB) GPUs and want to try different strategies, such as sharding the model and putting 2 experts' worth of layers on each GPU, quantizing the model with QLoRA, and using data parallelism across the 4 GPUs.

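For reference, here's a rough sketch of the QLoRA + sharded setup I have in mind. The model id, LoRA target modules, and hyperparameters are just placeholders, not a tested recipe; `device_map="auto"` is what I'd use to spread the quantized layers (including the expert blocks) across the 4 GPUs, and a hand-written `device_map` dict could pin specific experts to specific GPUs instead.

```python
# Sketch: QLoRA fine-tuning of Mixtral with the 4-bit model sharded across
# all 4 GPUs. Model id and hyperparameters below are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "mistralai/Mixtral-8x7B-v0.1"  # assumed base checkpoint

# 4-bit NF4 quantization, QLoRA-style
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)

# device_map="auto" shards the quantized layers across all visible GPUs,
# so each A100 holds a slice of the model rather than a full copy.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
    torch_dtype=torch.bfloat16,
)

model = prepare_model_for_kbit_training(model)

# LoRA adapters on the attention projections; target modules are an assumption.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```

For the data-parallel variant, I'm assuming the usual pattern of one process per GPU (e.g. `accelerate launch --num_processes 4 train.py`), with each rank holding its own full 4-bit copy of the model instead of a shard. I'm not sure which of these approaches plays best with Mixtral's MoE layers on 40 GB cards, hence the question.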

