April 2, 2024, 7:52 p.m. | Zilong Wang, Xufang Luo, Xinyang Jiang, Dongsheng Li, Lili Qiu

cs.CL updates on arXiv.org

arXiv:2404.00998v1 Announce Type: new
Abstract: Evaluating generated radiology reports is crucial to the development of radiology AI, but existing metrics fail to reflect the task's clinical requirements. This study proposes a novel evaluation framework that uses large language models (LLMs) to compare radiology reports for assessment. We compare the performance of various LLMs and demonstrate that, when using GPT-4, our proposed metric achieves evaluation consistency close to that of radiologists. Furthermore, to reduce costs and improve accessibility, making this method practical, …

