March 18, 2024, 4:41 a.m. | Robert Lakatos, Peter Pollner, Andras Hajdu, Tamas Joo

cs.LG updates on arXiv.org

arXiv:2403.09727v1 Announce Type: cross
Abstract: The development of generative large language models (G-LLMs) has opened up new opportunities for building knowledge-based systems similar to ChatGPT, Bing, or Gemini. Fine-tuning (FN) and Retrieval-Augmented Generation (RAG) are two techniques for implementing domain adaptation in G-LLM-based knowledge systems. In our study, using ROUGE, BLEU, and METEOR scores as well as cosine similarity, we compare and examine the performance of RAG and FN for the GPT-J-6B, …
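The abstract's comparison rests on surface-overlap metrics (ROUGE, BLEU, METEOR) and embedding-style cosine similarity. As a rough illustration of what these measure, the sketch below implements a clipped unigram precision (the core ingredient of BLEU, without the brevity penalty or higher-order n-grams) and a bag-of-words cosine similarity; the example sentences are invented and the real study presumably uses full metric implementations and model embeddings.

```python
from collections import Counter
import math

def ngrams(tokens, n):
    # All contiguous n-grams of a token list
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def clipped_precision(candidate, reference, n=1):
    # Clipped n-gram precision, the building block of BLEU:
    # each candidate n-gram counts at most as often as it appears in the reference
    cand = Counter(ngrams(candidate, n))
    ref = Counter(ngrams(reference, n))
    overlap = sum(min(count, ref[gram]) for gram, count in cand.items())
    total = sum(cand.values())
    return overlap / total if total else 0.0

def cosine_similarity(a, b):
    # Cosine similarity between bag-of-words count vectors
    va, vb = Counter(a), Counter(b)
    dot = sum(va[t] * vb[t] for t in va)
    norm_a = math.sqrt(sum(v * v for v in va.values()))
    norm_b = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Hypothetical reference/candidate pair for illustration only
ref = "retrieval augmented generation improves factual accuracy".split()
hyp = "retrieval augmented generation improves accuracy".split()
print(clipped_precision(hyp, ref, 1))  # every candidate token appears in the reference
print(round(cosine_similarity(hyp, ref), 2))
```

In practice, cosine similarity is usually computed over dense sentence embeddings rather than raw counts, which is what lets it reward paraphrases that the n-gram metrics penalize.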

