July 4, 2023, 2:24 p.m. | /u/juliensalinas


Hello,
Fine-tuning your own large language model is one of the best ways to achieve state-of-the-art results on your specific use case, often outperforming general-purpose models like ChatGPT or GPT-4, especially if you start from a modern open model such as LLaMA, OpenLLaMA, or XGen.
Properly fine-tuning these models is not easy though, so I made an A-to-Z tutorial about fine-tuning them with JAX on both GPUs and TPUs, using the EasyLM library.
Here it is: [https://nlpcloud.com/how-to-fine-tune-llama-openllama-xgen-with-jax-on-tpu-gpu.html](https://nlpcloud.com/how-to-fine-tune-llama-openllama-xgen-with-jax-on-tpu-gpu.html)
I hope it will be helpful! If you think that …
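To give a flavor of what the tutorial covers, here is a minimal, hypothetical sketch of a fine-tuning step in JAX with optax. It is not the EasyLM API (the tutorial drives EasyLM's own training scripts); it only illustrates the core loop: a causal language-modeling loss plus an optimizer update. The `toy_apply` model and all parameter names are stand-ins.

```python
# Hypothetical sketch of one fine-tuning step in JAX -- not the EasyLM API.
import jax
import jax.numpy as jnp
import optax


def cross_entropy_loss(logits, labels, mask):
    """Token-level cross-entropy, ignoring padded positions (mask == 0)."""
    log_probs = jax.nn.log_softmax(logits, axis=-1)
    nll = -jnp.take_along_axis(log_probs, labels[..., None], axis=-1)[..., 0]
    return (nll * mask).sum() / jnp.maximum(mask.sum(), 1)


def make_train_step(apply_fn, optimizer):
    """Build a jitted update step for any model with signature apply_fn(params, input_ids)."""

    @jax.jit
    def train_step(params, opt_state, batch):
        def loss_fn(p):
            logits = apply_fn(p, batch["input_ids"])  # [batch, seq, vocab]
            return cross_entropy_loss(logits, batch["labels"], batch["mask"])

        loss, grads = jax.value_and_grad(loss_fn)(params)
        updates, opt_state = optimizer.update(grads, opt_state, params)
        params = optax.apply_updates(params, updates)
        return params, opt_state, loss

    return train_step


# Toy "model" (embedding + projection) so the sketch actually runs end to end.
vocab, dim = 100, 16
key = jax.random.PRNGKey(0)
params = {
    "embed": jax.random.normal(key, (vocab, dim)) * 0.02,
    "proj": jax.random.normal(key, (dim, vocab)) * 0.02,
}

def toy_apply(p, input_ids):
    return p["embed"][input_ids] @ p["proj"]

optimizer = optax.adamw(learning_rate=1e-4)
opt_state = optimizer.init(params)
train_step = make_train_step(toy_apply, optimizer)

batch = {
    "input_ids": jnp.zeros((2, 8), dtype=jnp.int32),
    "labels": jnp.zeros((2, 8), dtype=jnp.int32),
    "mask": jnp.ones((2, 8)),
}
params, opt_state, loss = train_step(params, opt_state, batch)
```

In practice you would swap `toy_apply` for the real model's forward pass and shard parameters and data across your GPUs or TPUs, which is exactly the part EasyLM and the tutorial handle for you.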

