July 4, 2023, 2:27 p.m. | /u/juliensalinas

Machine Learning www.reddit.com

Hello,

Fine-tuning your own large language model is one of the best ways to achieve state-of-the-art results on your specific use case. It can even outperform general-purpose models like ChatGPT or GPT-4, especially if you start from a modern base model like LLaMA, OpenLLaMA, or XGen.

Properly fine-tuning these models is not easy, though, so I made an A-to-Z tutorial on fine-tuning them with JAX on both GPUs and TPUs, using the EasyLM library.
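The tutorial covers the full EasyLM workflow, but at its core, any JAX training loop boils down to a jitted gradient-update step. Here is a minimal, hypothetical sketch using a toy linear model and made-up data (this is not EasyLM's actual API, just an illustration of the JAX pattern):

```python
import jax
import jax.numpy as jnp

# Toy loss: mean squared error of a linear "model" (stand-in for an LLM loss).
def loss_fn(params, x, y):
    pred = x @ params["w"] + params["b"]
    return jnp.mean((pred - y) ** 2)

# One SGD step, JIT-compiled so it runs efficiently on GPU or TPU.
@jax.jit
def train_step(params, x, y, lr=0.01):
    grads = jax.grad(loss_fn)(params, x, y)
    # Apply the same update rule to every leaf of the parameter pytree.
    return jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)

# Made-up parameters and data, purely for illustration.
params = {"w": jnp.zeros((2,)), "b": jnp.zeros(())}
x = jnp.array([[1.0, 2.0], [3.0, 4.0]])
y = jnp.array([1.0, 2.0])

for _ in range(100):
    params = train_step(params, x, y)
```

Real fine-tuning replaces the toy loss with the model's language-modeling loss and shards parameters across devices, which is what EasyLM handles for you.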

Here it is: [https://nlpcloud.com/how-to-fine-tune-llama-openllama-xgen-with-jax-on-tpu-gpu.html](https://nlpcloud.com/how-to-fine-tune-llama-openllama-xgen-with-jax-on-tpu-gpu.html?utm_source=reddit&utm_campaign=i859w625-3816-11ed-a261-0242ac140017)

I hope it will be helpful! If you think that …
