Jan. 29, 2024, 9:33 p.m. | /u/MaintenanceNo5993

Machine Learning | www.reddit.com

- Task: Training and fine-tuning on a single node with 2 GPUs
- Model: CLIP ViT-B-32
- Dataset: MSCOCO Captions
- Number of workers: 4
- Batch size: 240 for FP16, 160 for FP32 (a sketch of this setup follows below)
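
In PyTorch terms, the setup presumably looks something like the sketch below. This is a minimal reconstruction, not the OP's actual code: it assumes the open_clip library, torchvision's `CocoCaptions`, illustrative file paths, and a standard symmetric contrastive loss; the 2-GPU DDP wiring is omitted for brevity.

```python
# Minimal sketch of the assumed setup. open_clip, torchvision, and the
# file paths below are assumptions, not details from the post.
import torch
import torch.nn.functional as F
import open_clip
from torch.utils.data import DataLoader
from torchvision.datasets import CocoCaptions

device = "cuda"
use_fp16 = True  # toggles the 240 vs. 160 batch size from the post

model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="openai"
)
model = model.to(device)
tokenizer = open_clip.get_tokenizer("ViT-B-32")

# Hypothetical paths; standard MSCOCO Captions layout assumed.
dataset = CocoCaptions(
    root="coco/train2017",
    annFile="coco/annotations/captions_train2017.json",
    transform=preprocess,
)

def collate(batch):
    # CocoCaptions yields (image, [captions]); take the first caption.
    images = torch.stack([img for img, _ in batch])
    texts = tokenizer([caps[0] for _, caps in batch])
    return images, texts

loader = DataLoader(
    dataset,
    batch_size=240 if use_fp16 else 160,
    num_workers=4,      # as stated in the post
    pin_memory=True,    # faster host-to-device copies
    shuffle=True,
    collate_fn=collate,
)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
scaler = torch.cuda.amp.GradScaler(enabled=use_fp16)

for images, texts in loader:
    images = images.to(device, non_blocking=True)
    texts = texts.to(device, non_blocking=True)
    optimizer.zero_grad(set_to_none=True)
    with torch.autocast("cuda", enabled=use_fp16):
        img_feats, txt_feats, logit_scale = model(images, texts)
        logits = logit_scale * img_feats @ txt_feats.t()
        labels = torch.arange(len(images), device=device)
        # symmetric InfoNCE loss over the in-batch pairs
        loss = (F.cross_entropy(logits, labels)
                + F.cross_entropy(logits.t(), labels)) / 2
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```

At these batch sizes the DataLoader settings matter: `pin_memory=True` together with `non_blocking=True` at least lets the host-to-device copies overlap with compute.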

For both FP32 and FP16, each epoch takes around 6 minutes, i.e. mixed precision gives essentially no speedup.

One explanation I'm considering is that *the majority of the time is spent on data movement* rather than GPU processing, since in the FP32 case there's hardly a moment when GPU utilization …
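
One cheap way to check this hypothesis is to time how long each iteration waits on the DataLoader versus how long the GPU work itself takes. A sketch, reusing the `loader` and `model` names from the setup above:

```python
# Time "waiting on data" vs. "GPU work" over a sample of batches.
import time
import torch

wait_time, step_time = 0.0, 0.0
it = iter(loader)
for _ in range(50):  # sample 50 batches
    t0 = time.perf_counter()
    try:
        images, texts = next(it)   # blocks if workers can't keep up
    except StopIteration:
        break
    t1 = time.perf_counter()

    images = images.to("cuda", non_blocking=True)
    texts = texts.to("cuda", non_blocking=True)
    with torch.no_grad(), torch.autocast("cuda"):
        model(images, texts)
    torch.cuda.synchronize()  # wait for the GPU to finish before timing
    t2 = time.perf_counter()

    wait_time += t1 - t0
    step_time += t2 - t1

print(f"waiting on data: {wait_time:.1f}s, GPU work: {step_time:.1f}s")
```

If the wait time dominates, the run is input-bound, which would explain why FP16 buys nothing; watching `nvidia-smi` during training or running `torch.profiler` points to the same answer.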
