Sept. 18, 2023, 1:25 p.m. | /u/--leockl--

Machine Learning www.reddit.com

I have been reading the three articles below, but it is still not clear to me what best practice to follow when choosing which quantized Llama 2 model to use.

https://huggingface.co/blog/gptq-integration

https://huggingface.co/blog/overview-quantization-transformers

https://towardsai.net/p/machine-learning/gptq-quantization-on-a-llama-2-7b-fine-tuned-model-with-huggingface?amp=1

Questions:
1) I understand there are currently four quantized Llama 2 precisions (8-, 4-, 3-, and 2-bit) to choose from. Is this right?
2) With the default Llama 2 model, what bit precision does it use?
3) Are there any best practice guides …
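As I understand it, the bit width mainly trades off weight memory against quality, so here is my rough back-of-the-envelope arithmetic for the precisions in question 1 (weights only, assuming the 7B model, and ignoring activations, KV cache, and any quantization overhead) — please correct me if this framing is wrong:

```python
# Back-of-the-envelope weight-memory estimate for an assumed
# 7B-parameter Llama 2 model at various bit precisions.
# Counts weights only; real usage adds activations, KV cache,
# and per-group quantization metadata.
PARAMS = 7_000_000_000  # Llama 2 7B, approximately

def weight_memory_gib(bits_per_param: float) -> float:
    """GiB needed to store the weights alone at the given precision."""
    return PARAMS * bits_per_param / 8 / 1024**3

for bits in (16, 8, 4, 3, 2):  # 16-bit = unquantized baseline
    print(f"{bits:>2}-bit: ~{weight_memory_gib(bits):.1f} GiB")
```

If these numbers are roughly right, it would at least explain why 4-bit is the common choice: it fits a 7B model comfortably on a consumer GPU, while 3- and 2-bit mainly buy further savings at a quality cost.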

