RAG vs Fine-tuning: Pipelines, Tradeoffs, and a Case Study on Agriculture. (arXiv:2401.08406v3 [cs.CL] UPDATED)
cs.CL updates on arXiv.org
There are two common ways in which developers incorporate proprietary
and domain-specific data when building applications of Large Language Models
(LLMs): Retrieval-Augmented Generation (RAG) and Fine-Tuning. RAG augments the
prompt with the external data, while fine-tuning incorporates the additional
knowledge into the model itself. However, the pros and cons of both approaches
are not well understood. In this paper, we propose a pipeline for fine-tuning
and RAG, and present the tradeoffs of both for multiple popular LLMs, including
Llama2-13B, …
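The abstract's core distinction — RAG augmenting the prompt with retrieved external data rather than changing model weights — can be sketched in a few lines. This is a minimal illustration, not the paper's actual pipeline: the `retrieve` scoring (naive word overlap standing in for an embedding retriever), the corpus, and the prompt template are all illustrative assumptions.

```python
# Minimal sketch of the RAG idea: retrieve relevant documents and
# prepend them to the prompt before calling an LLM. Word-overlap
# scoring here is a stand-in for a real embedding-based retriever.

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_rag_prompt(query: str, corpus: list[str]) -> str:
    """Augment the user query with retrieved context (the core of RAG)."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

# Illustrative domain corpus (agriculture, echoing the paper's case study).
corpus = [
    "Crop rotation improves soil nitrogen levels.",
    "Drip irrigation reduces water usage in arid regions.",
    "The 2020 harvest report covers wheat yields.",
]
prompt = build_rag_prompt("How does irrigation affect water usage?", corpus)
```

Fine-tuning, by contrast, would update the model's weights on the domain corpus so no context injection is needed at inference time — the tradeoff the paper sets out to quantify.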