March 17, 2024, 10:40 a.m. | /u/Thick-brain-dude

Machine Learning www.reddit.com

Hi, I'm wondering how to use different "distributed training strategies" to fine-tune Mixtral.

I have access to 4x A100s (40GB) and want to try different strategies, like sharding the model and putting 2 expert layers on each GPU, quantizing the model using QLoRA, and using data parallelism across the 4 GPUs.
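Roughly, the QLoRA + sharding route I'm picturing looks something like this (untested sketch; the checkpoint name, LoRA rank, and target modules are just placeholders I'd expect to tune):

```python
# Sketch: 4-bit QLoRA setup for Mixtral, sharded across the visible GPUs.
# Assumes transformers, peft, and bitsandbytes are installed; all
# hyperparameters below are illustrative, not recommendations.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "mistralai/Mixtral-8x7B-v0.1"  # assumed checkpoint name

# 4-bit NF4 quantization (the "Q" in QLoRA)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# device_map="auto" spreads the quantized layers (including the expert
# blocks) across the 4 GPUs instead of loading everything on one card
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# LoRA adapters on the attention projections; the MoE expert weights stay frozen
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```

From what I understand, device_map="auto" gives something closer to naive layer-wise model parallelism than true expert parallelism, and for plain data parallelism I'd instead need the 4-bit model to fit on a single 40GB card and launch one replica per GPU (e.g. with torchrun or accelerate launch). Is that how people actually combine these strategies in practice?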
