May 13, 2024, 4:41 a.m. | Bhawesh Kumar, Jonathan Amar, Eric Yang, Nan Li, Yugang Jia

cs.LG updates on arXiv.org arxiv.org

arXiv:2405.06093v1 Announce Type: new
Abstract: Large Language Models (LLMs) have demonstrated their efficacy across a broad spectrum of tasks in healthcare applications. However, LLMs often need to be fine-tuned on task-specific, expert-annotated data to achieve optimal performance, which can be expensive and time-consuming. In this study, we fine-tune PaLM-2 with parameter-efficient fine-tuning (PEFT) using noisy labels obtained from gemini-pro 1.0 for the detection of Schedule-of-Event (SoE) tables, which specify the care plan in clinical trial protocols. We introduce …
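The workflow the abstract describes — a strong "teacher" model produces noisy labels that a "student" model is then fine-tuned on — can be illustrated with a minimal, purely conceptual sketch. This is not the paper's code: PaLM-2 and gemini-pro are replaced by a toy logistic-regression student and a simulated noisy teacher, and all features, thresholds, and noise rates below are invented for illustration.

```python
# Illustrative sketch of training on noisy teacher labels (weak supervision).
# Everything here is hypothetical: the "teacher" is a label-flipping oracle
# standing in for an LLM annotator, and the "student" is a tiny logistic
# regression standing in for a PEFT-tuned LLM.
import math
import random

random.seed(0)

def true_label(x):
    # Hypothetical ground truth: "is an SoE table" iff feature sum > 1.0
    return 1 if x[0] + x[1] > 1.0 else 0

def noisy_teacher_label(x, flip_prob=0.15):
    # Simulates a teacher model that is right most of the time.
    y = true_label(x)
    return 1 - y if random.random() < flip_prob else y

# Unlabeled "documents" as 2-d feature vectors; the teacher annotates them.
data = [(random.random() * 2, random.random() * 2) for _ in range(2000)]
train = [(x, noisy_teacher_label(x)) for x in data]

# Student: logistic regression trained by plain SGD on the noisy labels.
w, b, lr = [0.0, 0.0], 0.0, 0.1
for _ in range(20):
    for x, y in train:
        p = 1 / (1 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))
        g = p - y  # gradient of log loss w.r.t. the logit
        w[0] -= lr * g * x[0]
        w[1] -= lr * g * x[1]
        b -= lr * g

# Evaluate the student against the hidden clean labels: despite 15% label
# noise, symmetric noise largely averages out and the true boundary is learned.
test = [(random.random() * 2, random.random() * 2) for _ in range(500)]
correct = sum(
    1 for x in test
    if (1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0) == true_label(x)
)
accuracy = correct / len(test)
```

The point of the sketch is the abstract's economic argument: the teacher's labels are free but noisy, yet a student trained on them can still approach clean-label accuracy, reducing reliance on expert annotation.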

