March 20, 2024, 4:48 a.m. | Zonghai Yao, Ahmed Jaafar, Beining Wang, Zhichao Yang, Hong Yu

cs.CL updates on arXiv.org

arXiv:2311.09684v2 Announce Type: replace
Abstract: This study examines the effect of prompt engineering on the performance of Large Language Models (LLMs) in clinical note generation. We introduce an Automatic Prompt Optimization (APO) framework to refine initial prompts and compare the outputs of medical experts, non-medical experts, and APO-enhanced GPT3.5 and GPT4. Results highlight GPT4 APO's superior performance in standardizing prompt quality across clinical note sections. A human-in-the-loop approach shows that experts maintain content quality post-APO, with a preference for their …
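The abstract does not detail how the APO framework refines prompts, but the general idea of automatic prompt optimization can be illustrated with a simple iterative search. The sketch below is a minimal, assumed implementation, not the paper's actual method: `propose_revisions` and `score` are hypothetical stand-ins for LLM-driven prompt rewriting and a clinical-note quality metric.

```python
# Minimal sketch of an iterative prompt-optimization loop in the spirit of APO.
# The propose_revisions and score callables are hypothetical stand-ins; in a real
# setup they would call an LLM (e.g. GPT-4) to critique the current prompt and
# propose revised candidates, and evaluate the notes the prompt produces.

from typing import Callable, List, Tuple


def optimize_prompt(
    initial_prompt: str,
    propose_revisions: Callable[[str], List[str]],  # hypothetical: candidate rewrites
    score: Callable[[str], float],                  # hypothetical: note-quality metric
    n_rounds: int = 3,
) -> Tuple[str, float]:
    """Greedy hill-climbing over prompt candidates for a fixed number of rounds."""
    best_prompt, best_score = initial_prompt, score(initial_prompt)
    for _ in range(n_rounds):
        for candidate in propose_revisions(best_prompt):
            s = score(candidate)
            if s > best_score:
                best_prompt, best_score = candidate, s
    return best_prompt, best_score


if __name__ == "__main__":
    # Toy stand-ins so the sketch runs end to end.
    def propose_revisions(prompt: str) -> List[str]:
        return [prompt + " Be concise.", prompt + " Use SOAP-note headings."]

    def score(prompt: str) -> float:
        return float(len(prompt))  # placeholder for a real quality score

    best, s = optimize_prompt(
        "Summarize the encounter as a clinical note.", propose_revisions, score
    )
    print(best, s)
```

In the study, a human-in-the-loop step would sit on top of a loop like this, with medical experts reviewing and adjusting the optimized prompts rather than accepting them automatically.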
