May 28, 2023, noon | code_your_own_AI

code_your_own_AI | www.youtube.com

With open-source code LLMs now available at sizes from 2B to 16B parameters, we can fine-tune our own code LLM on an instruction fine-tuning dataset. Real-time demo: a Colab notebook that fine-tunes a code LLM (StarCoder) on a specific dataset.
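As a minimal sketch of what such a Colab fine-tuning run could look like, the snippet below uses Hugging Face transformers and peft with LoRA. The model name (bigcode/starcoderbase-1b, a small StarCoder variant), the instructions.jsonl file, the LoRA target modules, and all hyperparameters are illustrative assumptions, not the exact setup from the video.

```python
# pip install transformers peft datasets accelerate
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

# Assumption: a small StarCoder variant that fits a free Colab GPU.
# StarCoder checkpoints are gated, so you must accept the license on the
# Hugging Face Hub and be logged in (huggingface-cli login) first.
model_name = "bigcode/starcoderbase-1b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

# LoRA freezes the base weights and trains small low-rank adapter matrices,
# which is what makes fine-tuning a multi-billion-parameter model feasible
# on a single Colab GPU. Target modules are GPTBigCode attention layers.
peft_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05, bias="none",
    target_modules=["c_attn", "c_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, peft_config)

# Assumption: instruction/response pairs stored as JSONL
# (see the dataset sketch further below).
dataset = load_dataset("json", data_files="instructions.jsonl", split="train")

def tokenize(example):
    # Render each pair into a single prompt/completion string.
    text = (f"Question: {example['instruction']}\n\n"
            f"Answer: {example['response']}{tokenizer.eos_token}")
    return tokenizer(text, truncation=True, max_length=1024)

tokenized = dataset.map(tokenize, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="starcoder-instruct",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        learning_rate=2e-4,
        max_steps=200,
        fp16=True,
        logging_steps=10),
    train_dataset=tokenized,
    # mlm=False gives plain causal-LM labels (shifted input_ids).
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False))
trainer.train()
model.save_pretrained("starcoder-instruct")  # saves only the LoRA adapter
```

Because only the LoRA adapter is trainable, the saved artifact is tens of megabytes rather than the full model, and it can be loaded alongside (or merged into) the base model at inference time.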

The author walks through the process of fine-tuning code LLMs and explains how to carry it out, emphasizing the importance of building an instruction fine-tuning dataset with specific, well-crafted instructions to improve performance. The author demonstrates examples of providing instructions …
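To make the dataset point concrete, here is a minimal, hand-written example of what such an instruction fine-tuning file could contain. The field names (instruction, response) and the two toy records are assumptions chosen to match the training sketch above, not the actual dataset from the video.

```python
import json

# Hypothetical instruction/response pairs: each record pairs a
# natural-language coding instruction with the desired completion.
# Field names match the tokenize() function in the sketch above.
examples = [
    {
        "instruction": "Write a Python function that returns the n-th "
                       "Fibonacci number.",
        "response": "def fib(n):\n"
                    "    a, b = 0, 1\n"
                    "    for _ in range(n):\n"
                    "        a, b = b, a + b\n"
                    "    return a",
    },
    {
        "instruction": "Reverse a string in Python without using slicing.",
        "response": "def reverse(s):\n"
                    "    return ''.join(reversed(s))",
    },
]

# One JSON object per line is the format load_dataset("json", ...) expects.
with open("instructions.jsonl", "w") as f:
    for record in examples:
        f.write(json.dumps(record) + "\n")
```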
