Fine Tuning Phi 1.5 using QLoRA
May 20, 2024, 12:30 a.m. | Sovit Ranjan Rath
DebuggerCafe debuggercafe.com
In this article, we fine-tune the Phi 1.5 model with QLoRA on the Stanford Alpaca dataset using Hugging Face Transformers.
The post Fine Tuning Phi 1.5 using QLoRA appeared first on DebuggerCafe.
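QLoRA combines a 4-bit-quantized frozen base model with trainable low-rank LoRA adapters. The following is a minimal NumPy sketch of the LoRA update itself (illustrative only; it is not the article's Hugging Face code and the dimensions are arbitrary): the base weight W stays frozen, and only the low-rank factors A and B are trained, so the weight delta has rank at most r.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 8, 8, 2, 16   # illustrative sizes, not Phi 1.5's

W = rng.normal(size=(d_out, d_in))    # frozen (in QLoRA: 4-bit quantized) base weight
A = rng.normal(size=(r, d_in)) * 0.01 # trainable down-projection
B = np.zeros((d_out, r))              # trainable up-projection, initialized to zero

def lora_forward(x):
    # y = W x + (alpha / r) * B A x  -- the adapted layer's output
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=(d_in,))
# Because B starts at zero, the adapter is a no-op before training:
assert np.allclose(lora_forward(x), W @ x)
```

In the actual fine-tuning run, libraries such as `peft` and `bitsandbytes` handle the quantization and adapter injection; only A and B (a tiny fraction of the parameters) receive gradient updates.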