March 7, 2024, 5:42 a.m. | Rishabh Adiga, Lakshminarayanan Subramanian, Varun Chandrasekaran

cs.LG updates on arXiv.org arxiv.org

arXiv:2403.03861v1 Announce Type: cross
Abstract: Pretrained language models (PLMs) have shown remarkable few-shot learning capabilities when provided with properly formatted examples. However, selecting the "best" examples remains an open challenge. We propose a complexity-based prompt selection approach for sequence tagging tasks. This approach avoids training a dedicated model for example selection and instead uses certain metrics to align the syntactico-semantic complexity of test sentences with that of the candidate examples. We use both sentence- and word-level metrics to match the complexity …
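As a rough illustration of the training-free selection idea described in the abstract, the sketch below ranks a pool of candidate demonstrations by how closely their complexity matches the test sentence and keeps the top-k. The specific metrics (token count, mean word length), the Euclidean distance, and the helper names `sentence_complexity` and `select_examples` are placeholders assumed for this sketch; the paper's actual sentence- and word-level metrics are not given in this excerpt.

```python
# Minimal sketch of complexity-based example selection (metrics and distance
# are illustrative stand-ins, not the paper's definitions).
import math
from typing import List, Tuple


def sentence_complexity(tokens: List[str]) -> Tuple[float, float]:
    """Toy sentence-level complexity vector: token count and mean word length."""
    n = len(tokens)
    mean_word_len = sum(len(t) for t in tokens) / n if n else 0.0
    return (float(n), mean_word_len)


def complexity_distance(a: Tuple[float, float], b: Tuple[float, float]) -> float:
    """Euclidean distance between two complexity vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


def select_examples(test_tokens: List[str],
                    pool: List[List[str]],
                    k: int = 4) -> List[List[str]]:
    """Pick the k pool sentences whose complexity best matches the test sentence.

    No selector model is trained; ranking relies only on metric alignment,
    mirroring the training-free selection idea in the abstract.
    """
    target = sentence_complexity(test_tokens)
    ranked = sorted(pool, key=lambda s: complexity_distance(sentence_complexity(s), target))
    return ranked[:k]


if __name__ == "__main__":
    pool = [
        "Barack Obama visited Berlin yesterday".split(),
        "The quick brown fox jumps over the lazy dog".split(),
        "Apple announced a new chip at its annual developer conference in June".split(),
    ]
    test = "Angela Merkel met reporters in Paris".split()
    for ex in select_examples(test, pool, k=2):
        print(" ".join(ex))
```

The selected sentences would then be formatted as in-context demonstrations and prepended to the prompt for the sequence tagging query.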
