April 30, 2024, 4:44 a.m. | Yanis Labrak, Mickael Rouvier, Richard Dufour

cs.LG updates on arXiv.org arxiv.org

arXiv:2307.12114v2 Announce Type: replace-cross
Abstract: We evaluate four state-of-the-art instruction-tuned large language models (LLMs) -- ChatGPT, Flan-T5 UL2, Tk-Instruct, and Alpaca -- on a set of 13 real-world clinical and biomedical natural language processing (NLP) tasks in English, such as named-entity recognition (NER), question answering (QA), and relation extraction (RE). Our overall results demonstrate that the evaluated LLMs begin to approach the performance of state-of-the-art models in zero- and few-shot scenarios for most tasks, and perform particularly well on the QA task, even …
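The abstract does not give the prompting or scoring details, but as a rough illustration of what zero- and few-shot evaluation on a clinical NER task typically involves, the sketch below builds both prompt styles and scores a model's extracted entities against gold annotations with entity-level precision, recall, and F1. The example sentence, entity labels, and templates are invented assumptions for illustration, not taken from the paper.

```python
# Illustrative sketch of zero-/few-shot prompting and entity-level scoring
# for a clinical NER task. The sentence, gold entities, and "model output"
# below are invented placeholders; they do not come from the paper.

ZERO_SHOT_TEMPLATE = (
    "Extract all drug and disease mentions from the sentence below.\n"
    "Sentence: {sentence}\n"
    "Entities:"
)

FEW_SHOT_TEMPLATE = (
    "Extract all drug and disease mentions from each sentence.\n"
    "Sentence: The patient was started on metformin for type 2 diabetes.\n"
    "Entities: metformin (DRUG), type 2 diabetes (DISEASE)\n"
    "Sentence: {sentence}\n"
    "Entities:"
)

def build_prompt(sentence: str, few_shot: bool = False) -> str:
    """Fill the chosen template; calling an actual LLM is out of scope here."""
    template = FEW_SHOT_TEMPLATE if few_shot else ZERO_SHOT_TEMPLATE
    return template.format(sentence=sentence)

def entity_f1(predicted: set[str], gold: set[str]) -> tuple[float, float, float]:
    """Exact-match entity-level precision, recall, and F1."""
    if not predicted or not gold:
        return 0.0, 0.0, 0.0
    tp = len(predicted & gold)
    precision = tp / len(predicted)
    recall = tp / len(gold)
    f1 = 2 * precision * recall / (precision + recall) if tp else 0.0
    return precision, recall, f1

if __name__ == "__main__":
    sentence = "She received aspirin after a transient ischemic attack."
    print(build_prompt(sentence, few_shot=True))

    gold = {"aspirin (DRUG)", "transient ischemic attack (DISEASE)"}
    predicted = {"aspirin (DRUG)"}  # stand-in for a parsed LLM answer
    print("P/R/F1:", entity_f1(predicted, gold))
```

In a zero-shot run the model sees only the instruction and the target sentence; in a few-shot run one or more worked examples are prepended, which is the main difference the paper's two scenarios refer to.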
