Jan. 14, 2024, 11:55 p.m. | /u/Im_The_Tall_Guy

Machine Learning www.reddit.com

Hey everyone! I've been doing research on quantizing LLMs, and I have a couple of custom methods I'd like to test out. Looking at existing implementations like Tim Dettmers' bitsandbytes leaves me feeling as lost as ever, and reading the llama.cpp source hasn't helped much either. Has anyone had experience implementing and, more importantly, evaluating a custom quantization method? Please share any thoughts, and if you have any questions, feel free to ask. Thanks!
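As a starting point before diving into bitsandbytes or llama.cpp internals, it can help to prototype a method in isolation. Below is a minimal sketch (my own illustration, not code from either library) of symmetric per-tensor int8 quantization with the simplest possible evaluation: round-trip reconstruction error on a weight matrix. Function names and parameters here are hypothetical.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor int8 quantization: one scale from max |w|."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Map int8 codes back to float32 weights."""
    return q.astype(np.float32) * scale

# Sanity check on a synthetic weight matrix: rounding error per element
# is bounded by scale / 2, so MSE should be far below scale**2.
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.02, size=(256, 256)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
mse = float(np.mean((w - w_hat) ** 2))
print(f"scale={scale:.6f}  mse={mse:.3e}")
```

Reconstruction MSE is only a proxy; for a real evaluation you would plug the dequantized weights back into the model and compare perplexity on a held-out corpus against the fp16 baseline, which is roughly what the quantization papers report.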

