Feb. 5, 2024, 12:32 p.m. | /u/kiranp2

Machine Learning www.reddit.com

I am exploring whether fine-tuning can teach Code Llama 2 a programming language it was not trained on, for example Ruby. Has anyone tried this?

I tried few-shot prompting, and Code Llama seems to do a fair job. I'm not sure whether anyone has done full-blown fine-tuning on a new language, and I'd like to hear about the success rate.
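For context, a minimal sketch of what the few-shot setup looks like, assembling a plain-text prompt that shows the model a couple of Ruby examples before the actual task (the task descriptions and Ruby snippets here are made-up illustrations, not from my actual runs):

```python
# Sketch of a few-shot prompt for teaching Ruby by example.
# The example tasks and snippets below are illustrative placeholders.

FEW_SHOT_EXAMPLES = [
    ("Return the sum of a list of numbers.",
     "def sum_list(nums)\n  nums.sum\nend"),
    ("Reverse a string.",
     "def reverse_string(s)\n  s.reverse\nend"),
]

def build_prompt(task: str) -> str:
    """Assemble a few-shot prompt: instruction, worked examples, then the query."""
    parts = ["You are a Ruby programmer. Write idiomatic Ruby.\n"]
    for desc, code in FEW_SHOT_EXAMPLES:
        parts.append(f"# Task: {desc}\n{code}\n")
    parts.append(f"# Task: {task}\n")  # the model completes from here
    return "\n".join(parts)

prompt = build_prompt("Check whether a number is prime.")
print(prompt)
```

The prompt string would then be sent to the model as-is; the idea is that the in-context Ruby examples steer the completion toward Ruby syntax even without any weight updates.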

