April 16, 2023, 4:37 p.m. | /u/Mbando

Natural Language Processing | www.reddit.com

I'm interested in trying to fine-tune Alpaca (or a similar instruct-trained model) on a dataset of domain-specific text as raw text--I want to see how that impacts prompt/response quality for the domain. Does doing that seriously degrade the instruct-training, or does the model still answer prompts reasonably well, or even better, in the new domain?

All the examples I'm finding replicate how the Stanford team trained LLaMA--I want to take Alpaca and train it on X docs. Anyone know of an example notebook I …
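I haven't seen a canonical notebook for this either, but a minimal sketch of the continued-training setup might look like the following, using Hugging Face transformers, peft, and datasets. The base checkpoint name and the domain_docs.txt path are placeholders, not specific recommendations, and LoRA is one hedge against clobbering the instruct-tuning, since the base weights stay frozen. Treat it as an illustrative sketch, not a tested recipe:

```python
# Sketch: continued fine-tuning of an Alpaca-style checkpoint on raw domain
# text with LoRA (transformers + peft + datasets). Model name and data path
# below are placeholders.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base = "chavinlo/alpaca-native"  # placeholder Alpaca checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # LLaMA tokenizer has no pad token
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA freezes the base weights and trains small adapter matrices, which
# should limit how much of the instruct-tuning gets overwritten.
model = get_peft_model(
    model,
    LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"],
               task_type="CAUSAL_LM"),
)

# Plain causal-LM objective over the domain documents, no instruction format.
data = load_dataset("text", data_files={"train": "domain_docs.txt"})

def tok(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

data = data.map(tok, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="alpaca-domain",
        per_device_train_batch_size=4,
        num_train_epochs=1,
        learning_rate=1e-4,
    ),
    train_dataset=data["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

After training, comparing the base Alpaca checkpoint and the adapted model on the same instruction-style prompts for the domain would give a direct read on how much instruction-following survives.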
