Enabling Natural Zero-Shot Prompting on Encoder Models via Statement-Tuning
April 22, 2024, 4:46 a.m. | Ahmed Elshabrawy, Yongxin Huang, Iryna Gurevych, Alham Fikri Aji
cs.CL updates on arXiv.org
Abstract: While Large Language Models (LLMs) exhibit remarkable capabilities in zero-shot and few-shot scenarios, they often require computationally prohibitive sizes. Conversely, smaller Masked Language Models (MLMs) like BERT and RoBERTa achieve state-of-the-art results through fine-tuning but struggle with extending to few-shot and zero-shot settings due to their architectural constraints. Hence, we propose Statement-Tuning, a technique that models discriminative tasks as a set of finite statements and trains an Encoder model to discriminate between the potential statements …
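To make the idea concrete, here is a minimal sketch of the inference pattern the abstract describes: a discriminative task is verbalized into one natural-language statement per candidate label, and an encoder scores each statement as true or false, with the highest-scoring statement determining the prediction. The checkpoint name, statement template, and label set below are illustrative assumptions, not details from the paper, and the head shown is untrained, so this only demonstrates the mechanics rather than a working classifier.

```python
# Sketch of Statement-Tuning-style zero-shot inference (assumptions noted below).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Stand-in checkpoint: the paper statement-tunes its own encoder; "roberta-base"
# here has a randomly initialized head, so outputs are not meaningful predictions.
MODEL = "roberta-base"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
# Binary head: index 1 = "statement is true", index 0 = "statement is false".
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=2)
model.eval()

def zero_shot_classify(text: str, labels: list[str]) -> str:
    """Verbalize one statement per label, score each, return the best label."""
    # Hypothetical template; the paper uses task-specific statement templates.
    statements = [f'The sentiment of "{text}" is {label}.' for label in labels]
    batch = tokenizer(statements, return_tensors="pt",
                      padding=True, truncation=True)
    with torch.no_grad():
        logits = model(**batch).logits
    # Probability that each statement is true, per the binary head above.
    p_true = logits.softmax(dim=-1)[:, 1]
    return labels[int(p_true.argmax())]

print(zero_shot_classify("The movie was a delight from start to finish.",
                         ["positive", "negative"]))
```

Because every task reduces to the same true/false statement judgment, a single statement-tuned encoder can be pointed at unseen tasks simply by writing new statement templates, which is what gives a small MLM the zero-shot behavior normally associated with much larger LLMs.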