June 11, 2024, 4:42 a.m. | Zifeng Cheng, Zhaoling Chen, Zhiwei Jiang, Yafeng Yin, Shiping Ge, Yuliang Liu, Qing Gu

cs.CL updates on arXiv.org

arXiv:2406.06279v1 Announce Type: new
Abstract: Recent Pre-trained Language Models (PLMs) are often exposed to users only through inference APIs, the emerging Model-as-a-Service (MaaS) setting. To adapt MaaS PLMs to downstream tasks without access to their parameters or gradients, some existing methods focus on output-side adaptation: the PLM is treated as a black-box encoder, and a task-specific decoder is optimized to decode the PLM's output hidden states and class scores. Despite the effectiveness of these methods, they only use …
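To make the output-side adaptation setting concrete, below is a minimal PyTorch sketch of the general idea described in the abstract: the PLM stays frozen behind an inference API, and only a small task-specific decoder over its output hidden states receives gradients. The `query_plm_hidden_states` stub, the decoder architecture, and all hyperparameters are illustrative assumptions, not the paper's method.

```python
import torch
import torch.nn as nn

# Hypothetical MaaS call: returns final-layer hidden states from a black-box
# PLM inference API. In practice this would be a network request; here it is
# stubbed with random tensors so the sketch runs end to end.
def query_plm_hidden_states(texts, hidden_dim=768):
    return torch.randn(len(texts), hidden_dim)

class TaskDecoder(nn.Module):
    """Small task-specific decoder trained on the PLM's output hidden states.
    The PLM itself is frozen and inaccessible; only this module has gradients."""
    def __init__(self, hidden_dim=768, num_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(hidden_dim, 256),
            nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, h):
        return self.net(h)

# Toy classification data (assumed for illustration only).
texts = ["great movie", "terrible plot"]
labels = torch.tensor([1, 0])

decoder = TaskDecoder()
optimizer = torch.optim.Adam(decoder.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    h = query_plm_hidden_states(texts)  # black-box call; no gradients reach the PLM
    logits = decoder(h)
    loss = loss_fn(logits, labels)
    optimizer.zero_grad()
    loss.backward()  # gradients flow only through the decoder
    optimizer.step()
```

The design point this illustrates is that the optimization loop never touches PLM parameters: the API response is treated as fixed input features, which is what makes the approach viable under MaaS constraints.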
