Feb. 6, 2024, 5:53 a.m. | YuHe Ke, Liyuan Jin, Kabilan Elangovan, Hairil Rizal Abdullah, Nan Liu, Alex Tiong Heng Sia, Chai Rick Soh

cs.CL updates on arXiv.org

Purpose: Large Language Models (LLMs) hold significant promise for medical applications. Retrieval Augmented Generation (RAG) is a promising approach for incorporating domain-specific knowledge into LLMs. This case study presents the development and evaluation of an LLM-RAG pipeline tailored for healthcare, focusing specifically on preoperative medicine.
Methods: We developed an LLM-RAG model using 35 preoperative guidelines and tested it against human-generated responses, with a total of 1260 responses evaluated. The RAG process involved converting clinical documents into text using Python-based …
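To make the described pipeline concrete, below is a minimal sketch of a generic LLM-RAG flow over extracted guideline text. It is not the authors' code: the chunk size, the toy term-frequency "embedding", the cosine-similarity retriever, and the prompt template are all illustrative assumptions standing in for a real text extractor, embedding model, and LLM call.

```python
# Minimal illustrative LLM-RAG sketch, not the paper's implementation.
from collections import Counter
import math

def chunk(text: str, size: int = 500) -> list[str]:
    """Split extracted guideline text into fixed-size character chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def embed(text: str) -> Counter:
    """Toy term-frequency vector; a real pipeline would use an embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(question: str, chunks: list[str], k: int = 3) -> list[str]:
    """Return the k guideline chunks most similar to the question."""
    q = embed(question)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

def build_prompt(question: str, context: list[str]) -> str:
    """Ground the model's answer in the retrieved guideline excerpts."""
    ctx = "\n---\n".join(context)
    return f"Answer using only the guideline excerpts below.\n\n{ctx}\n\nQuestion: {question}\nAnswer:"

if __name__ == "__main__":
    guideline_text = "Patients on anticoagulants may require bridging before elective surgery. ..."
    chunks = chunk(guideline_text)
    prompt = build_prompt(
        "Should anticoagulation be stopped before elective surgery?",
        retrieve("anticoagulation before elective surgery", chunks),
    )
    print(prompt)  # in a real pipeline, this prompt would be sent to the LLM
```

The key design point this sketch illustrates is that the LLM never answers from its parametric memory alone: the retrieved guideline excerpts are injected into the prompt, which is what lets the pipeline be customized to local preoperative guidance.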
