May 9, 2023, 2:36 a.m. | /u/CacheMeUp

Machine Learning www.reddit.com

An NLP task at the prototype stage. It can be solved either with a retriever-reader approach or by fine-tuning an LLM. The task is fairly focused, so broad general-purpose capabilities aren't needed. What would make you invest in training your own model (e.g. fine-tuning MPT/LLaMA with LoRA) vs. using OpenAI with an optimized prompt? (The data fits in 4K tokens.)
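For context on the fine-tuning side of the question: the appeal of LoRA is that instead of updating a full weight matrix, you train two small low-rank factors and add their product to the frozen weights. A minimal NumPy sketch of that idea (hypothetical shapes, not a real model):

```python
import numpy as np

# Sketch of the LoRA update: keep the pretrained weight W frozen and learn
# a low-rank delta B @ A, scaled by alpha / r. Shapes here are illustrative.
rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 64, 64, 8, 16

W = rng.standard_normal((d_out, d_in))    # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01 # small random init
B = np.zeros((d_out, r))                  # zero init, so the delta starts at zero

delta = (alpha / r) * (B @ A)             # the trained low-rank update
W_adapted = W + delta

# Trainable-parameter count: r * (d_in + d_out) for LoRA
# vs. d_in * d_out for full fine-tuning of this matrix.
full_params = d_in * d_out   # 4096
lora_params = r * (d_in + d_out)  # 1024
print(full_params, lora_params)
```

With these toy shapes LoRA trains a quarter of the parameters of full fine-tuning; at LLM scale the ratio is far more favorable, which is what makes fine-tuning MPT/LLaMA on a single GPU plausible at all.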


Pros for OpenAI:

1. Prompt engineering is simpler.
2. Retriever-reader (adding the retrieved information to the prompt and asking the question) allows grounding by asking the model to cite the …
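The retriever-reader pattern in point 2 can be sketched in a few lines: number the retrieved passages, inline them into the prompt under a token budget, and instruct the model to cite passage numbers. The function name, passage texts, and the characters-as-token-proxy budget below are all illustrative assumptions, not from the post:

```python
# Hypothetical sketch of retriever-reader prompt assembly with citation grounding.
def build_prompt(question: str, passages: list[str], max_chars: int = 16000) -> str:
    """Assemble a grounded QA prompt that fits a ~4K-token context
    (approximated here crudely as a character budget)."""
    numbered, used = [], 0
    for i, passage in enumerate(passages, 1):
        entry = f"[{i}] {passage}"
        if used + len(entry) > max_chars:
            break  # stop adding passages once the budget is exhausted
        numbered.append(entry)
        used += len(entry)
    context = "\n".join(numbered)
    return (
        "Answer the question using only the passages below. "
        "Cite the supporting passages as [n].\n\n"
        f"{context}\n\nQuestion: {question}\nAnswer:"
    )

prompt = build_prompt(
    "When was the policy adopted?",
    ["The policy was adopted in 2019.", "Unrelated text."],
)
print(prompt)
```

The upside noted in the post: because the answer must cite `[n]` markers that point back at the inlined passages, hallucinated claims are easier to spot, with no training required.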

