March 22, 2024, 4:43 a.m. | Weisen Jiang, Yu Zhang, James T. Kwok

cs.LG updates on arXiv.org arxiv.org

arXiv:2306.00618v2 Announce Type: replace-cross
Abstract: Prompt tuning for pre-trained masked language models (MLMs) has shown promising performance on natural language processing tasks with few labeled examples. It tunes a prompt for the downstream task, and a verbalizer is used to bridge the predicted token and the label prediction. Because training data is limited, prompt initialization is crucial for prompt tuning. Recently, MetaPrompting (Hou et al., 2022) used meta-learning to learn a shared initialization for all task-specific prompts. However, a single …
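For readers unfamiliar with the setup the abstract describes, below is a minimal sketch of prompt tuning with a verbalizer for a masked language model. It assumes PyTorch and Hugging Face Transformers; the prompt template, label words, and helper names (`soft_prompt`, `class_logits`) are illustrative, not taken from the paper.

```python
# Sketch: soft prompt tuning with a verbalizer on a frozen masked LM.
# Assumptions: PyTorch + Transformers; template and label words are illustrative.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_name = "bert-base-uncased"            # any pre-trained MLM
tokenizer = AutoTokenizer.from_pretrained(model_name)
mlm = AutoModelForMaskedLM.from_pretrained(model_name)
mlm.requires_grad_(False)                    # only the prompt is tuned

# Verbalizer: map each class to a label word in the MLM vocabulary.
verbalizer = {"positive": "great", "negative": "terrible"}
label_token_ids = torch.tensor(
    [tokenizer.convert_tokens_to_ids(w) for w in verbalizer.values()]
)

# Soft prompt: a few trainable embeddings prepended to the input.
n_prompt_tokens = 5
embed_dim = mlm.get_input_embeddings().embedding_dim
soft_prompt = torch.nn.Parameter(torch.randn(n_prompt_tokens, embed_dim) * 0.02)

def class_logits(text: str) -> torch.Tensor:
    """Score each class by the MLM's logit for its label word at [MASK]."""
    enc = tokenizer(f"{text} It was {tokenizer.mask_token}.", return_tensors="pt")
    input_embeds = mlm.get_input_embeddings()(enc["input_ids"])
    # Prepend the soft prompt embeddings to the token embeddings.
    input_embeds = torch.cat([soft_prompt.unsqueeze(0), input_embeds], dim=1)
    attn = torch.cat(
        [torch.ones(1, n_prompt_tokens, dtype=enc["attention_mask"].dtype),
         enc["attention_mask"]], dim=1)
    out = mlm(inputs_embeds=input_embeds, attention_mask=attn)
    mask_pos = (enc["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0] + n_prompt_tokens
    return out.logits[0, mask_pos.item(), label_token_ids]   # one logit per class

# One few-shot training step: cross-entropy over the label-word logits,
# updating only the soft prompt (this is where initialization matters).
optimizer = torch.optim.Adam([soft_prompt], lr=1e-3)
logits = class_logits("The movie was a delight.")
loss = torch.nn.functional.cross_entropy(logits.unsqueeze(0), torch.tensor([0]))
loss.backward()
optimizer.step()
```

With only a handful of labeled examples per task, the starting value of `soft_prompt` largely determines where tuning ends up; the meta-learning approach the abstract mentions learns that starting value across tasks rather than initializing it randomly as above.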

